
Easy Formants for Renoise 2.5 Final

09/04/2013 Comments closed

After a few more hours on it, I’ve improved the precision of the « aeiou » vowels and reduced the required resources a bit.

Features

– optimized formant filter system
– high-precision vowels (x2)
– a bit more CPU friendly (5% faster)
– simplified keyboard-based formant control
– improved file size (2K lighter)
– formant feedback control
– auto Q
– auto bot voice mode (robotizer)
– RExciter ™ powered 😉

DOWNLOAD HERE.

 

FAQ

How do I install it in my own song?

  • Right-click somewhere on the Track DSP chain.
  • Select Device Chain => Save As…
  • Select your target directory and give the chain a name.
  • It must have the .XRNT file extension.
  • Then, when you edit your song, right-click somewhere on your Track DSP tab.
  • Select Device Chain => Load…
  • Find your .XRNT file and load it.
  • That’s all. Yep.

 

What input sounds / samples does it require to work well?

  • SAW-based samples (sawtooth waveform) with notes/tones within a range of 300 to 3400 Hz in the Renoise spectrum.

How do I modify formants?

  • Formants are « Keyboard Controlled ». Both the QWERTY and the MIDI keyboard can be used.
  • Use the C-0 to D#1 keys to modify formants while the music plays.
  • You can modify them while you record notes, or play LIVE, with your QWERTY or MIDI keyboard.
  • Go into the Sample Keyzones.
  • Modify your Sample Layer so that it doesn’t use notes from C-0 to D#1.
  • This way you’ll be able to both play your sample & control the filter in the same track.

How exactly did you place formants on the low notes?

  • Formants are composed of 1 cycle of ‘ieawyuw’ vowels fragmented into 16 different notes with subtle differences.

C-0 ========================> D#1
i – y – e – ø – ɛ – œ – a – ɶ – ɑ – ɒ – ʌ – ɔ – ɤ – o – ɯ – u

  • You’ve got to use your ears to learn these vowels!
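For reference, here’s that key-to-vowel mapping spelled out as a tiny Python lookup (a hypothetical helper for illustration, not something shipped with the chain):

```python
# Hypothetical helper: which vowel each control key (C-0 .. D#1) selects.
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
VOWELS = ["i", "y", "e", "ø", "ɛ", "œ", "a", "ɶ", "ɑ", "ɒ", "ʌ", "ɔ", "ɤ", "o", "ɯ", "u"]

def vowel_for_note(name: str, octave: int) -> str:
    """Return the vowel selected by a key between C-0 and D#1."""
    index = octave * 12 + NOTE_NAMES.index(name)   # semitones above C-0
    if not 0 <= index < len(VOWELS):
        raise ValueError("key outside the C-0 .. D#1 control range")
    return VOWELS[index]

print(vowel_for_note("C", 0))   # -> "i"
print(vowel_for_note("D#", 1))  # -> "u"
```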

How do I sound like a robot?

  • Drag the Humanize slider completely to the left.

I don’t like the robot sound. How do I delete it ?

  • Drag the Humanize slider a bit to the right.
  • Or uncheck « Bot Voice ».

What is the Feedback parameter ?

  • It adds some personality to the sound.
  • Warning: values that are too high will produce a glassy/metallic effect, which is strange, but interesting nonetheless.

What is the Auto Q parameter ?

  • The Auto Q parameter slightly increases the Q factor of bandpass filters when formants deal with higher frequencies. You can switch it off if you want to use the Bandwidth slider like a Wet/Dry signal slider.
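The reasoning behind it is easy to check in a few lines of Python: with a fixed bandwidth in Hz, the quality factor Q = f/BW automatically grows with the center frequency, which is why higher formants need a higher Q to stay equally selective. (The 120 Hz bandwidth below is an arbitrary example; the chain’s actual curve isn’t documented here.)

```python
# Illustration only: constant absolute bandwidth means Q rises with frequency.
def q_for(f_center_hz: float, bandwidth_hz: float = 120.0) -> float:
    return f_center_hz / bandwidth_hz

for f in (270.0, 870.0, 2300.0):
    print(f"{f:6.0f} Hz -> Q = {q_for(f):5.1f}")   # 2.2, 7.2, 19.2
```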

 

My filtered sound is poor compared to yours.

  • That’s expected: some harmonics have been removed by the chain. Put the *Exciter DSP at the end of the chain, and raise its sliders to revive the original sound.

Credits

___________________________________________________________
have fun with Renoise !
kurtz/lapiNIC

Easy Formants for Renoise 2.8 [English]

07/04/2013 Comments closed

THIS TIME IT’S EASY! THE WHOLE SYSTEM WORKS IN JUST ONE TRACK, NOTHING ELSE.

* * *

Since I accidentally deleted the initial Bit_Arts work that showed it was possible to build a native formant filter in Renoise, I had high hopes of rebuilding a similar one from scratch. I read some interesting articles for that.

So, for example, this demo is once again based on this S.o.S webpage:
http://www.soundonsound.com/sos/mar01/articles/synthsec.asp

Formants can be seen as a set of 3 parallel bandpass filters; six vowels are emulated by defining specific bandpass frequencies in 3 mixed BP filters.

The input sound should be « pulse-modulated » / distorted (shift model), or (why not) « Lofimatized » (bitcrushed), for better results.

The 3 frequencies per vowel are defined in a 3×6 matrix:

  • « ee » sounds like « leap » : F1=270Hz F2=2300Hz F3=3000Hz
  • « oo » sounds like « loop » : F1=300Hz F2=870Hz F3=2250Hz
  • « i » sounds like « lip » : F1=400Hz F2=2000Hz F3=2550Hz
  • « e » sounds like « let » : F1=530Hz F2=1850Hz F3=2500Hz
  • « u » sounds like « lug » : F1=640Hz F2=1200Hz F3=2400Hz
  • « a » sounds like « lap » : F1=660Hz F2=1700Hz F3=2400Hz
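If you want to hear this idea outside Renoise, here’s a minimal Python sketch of the same mechanism, assuming numpy and scipy are available (the 120 Hz bandwidth is an arbitrary choice, not the exact value used in the chain):

```python
# A sawtooth pushed through 3 parallel band-pass filters centred on F1/F2/F3,
# with the three outputs summed -- the core of the formant trick.
import numpy as np
from scipy.signal import butter, sosfilt, sawtooth

FORMANTS = {                     # the 3x6 matrix above, in Hz
    "ee": (270, 2300, 3000), "oo": (300, 870, 2250), "i": (400, 2000, 2550),
    "e": (530, 1850, 2500), "u": (640, 1200, 2400), "a": (660, 1700, 2400),
}

def vowelize(x, vowel, sr=44100, bw=120.0):
    """Mix of 3 parallel band-passes, one per formant frequency."""
    out = np.zeros_like(x)
    for f in FORMANTS[vowel]:
        sos = butter(2, [f - bw / 2, f + bw / 2], btype="bandpass",
                     fs=sr, output="sos")
        out += sosfilt(sos, x)
    return out

sr = 44100
t = np.arange(sr) / sr
raw = sawtooth(2 * np.pi * 110 * t)   # a 110 Hz saw, rich in harmonics
voiced = vowelize(raw, "a")
```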

If you give any LFO an Amplitude of 50% and an Offset of 42.30%, then you can convert the previous values like this:

  • « ee » sounds like « leap » F1=0% F2=70.755% F3=81.509%
  • « oo » sounds like « loop » F1=2.222% F2=34.34% F3=70%
  • « i » sounds like « lip »  F1=9.811% F2=65.283% F3=70%
  • « e » sounds like « let » F1=18.113% F2=62.264% F3=74.151%
  • « u » sounds like « lug » F1=23.962% F2=46.038% F3=72.453%
  • « a » sounds like « lap » F1=25.094% F2=59.057% F3=72.45%

The problem was to « store » this 3×6 frequency matrix somewhere. The solution came from Ragnar Aambø (TheBellows), who created a nice piece of random tune called « Invisible Melodies ». In this tune, some notes are stored as « points » in custom LFOs, and those custom LFO points are triggered (one-shot mode) with the modulated « Reset » button.

So, following this example, Renoise can store the 18 specific frequencies (6 per custom LFO, one per vowel) in 3 custom LFOs, and the frequency selection is made with a Hydra device pointing to the « Reset » of the LFOs.
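Conceptually, a one-shot custom LFO used this way is nothing but a lookup table: resetting it to a given phase makes it output the point stored at that phase, and it stays there. A tiny Python model of the trick (the class and names are hypothetical, for illustration only):

```python
# Conceptual model: one-shot custom LFO as a lookup table, Reset as the index.
class OneShotLFO:
    def __init__(self, points):
        self.points = points              # stored values, one per vowel
        self.value = points[0]

    def reset(self, phase):               # 'Reset' is what the Hydra drives
        index = int(phase * len(self.points))
        self.value = self.points[min(index, len(self.points) - 1)]

# one LFO per formant band; here the F1 values in filter-% from above
lfo_f1 = OneShotLFO([0.0, 2.222, 9.811, 18.113, 23.962, 25.094])

def hydra_select(vowel_index, lfos, n_vowels=6):
    for lfo in lfos:                      # the Hydra fans one value out to all 3
        lfo.reset(vowel_index / n_vowels)

hydra_select(5, [lfo_f1])
print(lfo_f1.value)                       # -> 25.094  (« a »)
```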

Concerning the 3 filters, they can be found in the new Renoise « MultiTap » device, which has « filtered » delay features. If you simply block the delays, you can use just the filtering there, without using routings or send tracks anymore (as shown in previous techniques).

This makes the Formants Filter Mechanism EASY to INSTALL and to TWEAK: you just have to copy-paste the chain contained in FMTS CTRL into any track you want, and the track does all the job by itself.

Note that if you link the LFOs containing the formant values directly to the MultiTap filters, you’ll encounter a small problem of robotized output, because the transitions between formants are not smooth enough. That’s where kraken/gore comes in 🙂 with the « Inertial Slider » Secret *Formula Device Trick. If you look closely at the chain, you’ll see that I’ve inserted the Inertial Slider between the filters and the custom LFOs, so that transitions are smoother and more human. Since it sounds like the Inertia parameter found in the *Filter DSP device, I’ve renamed it Inertia, but it’s a pure copy-paste of kraken/gore’s work.
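For illustration, here’s what such an inertial slider boils down to: a one-pole lag. A quick Python sketch (the 0.15 inertia factor is an arbitrary placeholder, not kraken/gore’s actual *Formula expression):

```python
# One-pole lag: ease each new formant target in instead of jumping to it.
def inertial_step(current, target, inertia=0.15):
    """Move a fraction of the remaining distance per tick (0 < inertia <= 1)."""
    return current + inertia * (target - current)

value, target = 0.0, 25.094        # jump from « ee » to « a »
for _ in range(10):
    value = inertial_step(value, target)
    print(f"{value:7.3f}")         # glides toward 25.094, no robotic step
```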

Concerning the storage of the values in the LFOs, I’ve used the new « External Editor » to adjust the point values precisely: a PITA(!), but it worked in the end.

DOWNLOAD IT HERE.

* * *

USAGE

At first, SAVE the chain contained in the FMTS CTRL track. Give it the name you want. The file extension must be .XRNT.

Then, IF you need a Formant Filter in your own module, LOAD this previously saved chain into the desired track. It can be a standard track or a SEND track; it works there too.

The Formant control is EASY and SIMPLE.

MODULATE the 3 parameters there:
(1) Formants – self-explanatory.
(2) Bandwidth – widens/narrows the formant filters
(3) Humanize – human/robotic vocal transitions

Modifying formants with the dedicated slider:

0 % ======================================>> 100 %
« ee » =====> « oo » =====> « i » =====> « e » =====> « u » =====> « a »

BTW: you can AUTOMATE the Formants slider as you wish, with LFOs, with automation envelopes, with a Key Tracker; anything works.

Credits/References

  •  Bit_Arts – showed that doing it in Renoise was possible – many thanx for removing your links, it truly pushed me to move my ass and understand the whole mechanism by myself
  •  Sound On Sound – the BIBLE of the hi-tech music recording industry
  •  TheBellows – Invisible Melodies
  •  kraken/gore – Inertial Sliders (Secret *Formula Device Powered) Trick

Final Notes:

Thanx for reading
have fun with Renoise !

kurtz/lapiNIC


Mixing and Mastering with Renoise [English]

11/11/2011 Comments closed


French version available here.


INTRODUCTION: GOOD IDEAS FIRST

First of all, we should drop the idea of finding a « universal method » to perform the right mix and the right master, one that would systematically work every time, in every situation, with every listener. Mixing is a question of knowing the tools you’re using, and what you do is use those tools to produce some desired effects. The most important thing to understand first is that the « mix » is NOT based on a « canonical listening rule » or a « canonical mixing framework », but on the way YOU use your tools and the way YOU want the music to be felt by the listeners. And even if you can’t exactly predict what those listeners want and like, you can be sure that producing a BIG HUGE sound won’t make instant miracles, especially if you don’t have a good musical idea from the start. If you don’t have a good musical idea first, you can know and use the ultimate mixing pro tips and pro hardware to increase the volume; what you’ll finally get with a louder sound is ear fatigue, and louder evidence of your lack of good ideas.

Since music is played to be listened to by a public (even if it’s a public of 2 persons), it can’t be conceived without the listeners in mind. Speaking of listeners, they are fragmented by the industry into defined markets and genres. But this definition is very commercial, and maybe the fragmentation has nothing to do with real and true musical expectations.

For sure, it’s very hard for me to define what a « good » musical idea is in the minds of the mass of listeners (if I knew, I’d probably be far more famous). But let’s suppose that listeners expect variations within the style/genre they like, and that they need bridges to reach genres they can’t initially appreciate.

A ] My musical tips with Renoise

So, musical genres need to be renewed, executed differently, with new techniques or technologies; they also need bridges to other defined musical identities. The duty of every musician is then to play with the repetitions that reinforce the identity of a musical structure, and to regularly introduce the necessary variations that renew the interest of the previously defined structure. Repetitions and variations are introduced with a specific periodicity. The duty of a musician is to expand his musical culture, to « know » other genres, so that evolutions and collaborations constantly feed his works.

And you’ll guess that the simplest way to fulfill the repetition needs of listeners is to use the famous copy-paste functions within Renoise. BUT after 3 repetitions, the fourth one must introduce something new: a variation. That’s all. Inside the fourth repetition, I change the melodic line a bit, I introduce a small break, I modify a parameter; in short, I do anything except introduce a pure copy of what came before, which I would consider a shameless sin, or evidence of a lack of work.

I think the most interesting thing that can happen to music (whoever makes it, professionals from the industry or weekend computer muzakers like me) is a successful crossover. Basically, my musical ideas come from the collision of different previous listening experiences in the same brain. I’m not exactly inspired by one thing; I often ask myself whether 2 or 3 radically different things could be « united » or « joined » in the same coherent rhythmic and/or melodic structure. I often try to grasp the definition of a style, and then my motivation is first to play with it, and to find a way to add something very different to it: an unusual instrument, an unusual way to use or mix that instrument, an unusual rhythm, something that belongs to another genre, things like that. I can also try to mix a genre with techniques inspired by another one, and use uncommon effects.

Renoise helps me test my ideas quickly. With Renoise, thanx to shortcuts and the overall GUI efficiency, everything is fast and quickly done. I generally don’t exceed 6 hours to complete a 2:30 composition. I can go even faster if I have a precise vision of what I want.

B ] My sound mix tips & strategies with RENOISE

I won’t comment further here on the previous debates about the dB war that progressively pushes everyone involved in the business to bump up the volume and compress the music more and more, so much that the resulting sound is constant, uniform, boring, and lacks clarity and dynamic range. However, we should admit that if you want your music to be noticed during any airplay, you should try to find a way to get the highest possible sound level / presence / definition, but without any loss in terms of fidelity or dynamic range, and with the fewest possible side effects. Of course you’ll say: go and buy Ozone. But I truly believe that the native Renoise DSPs and internal visualisation tools, when well mastered, can act like professional tools (without the required recording-studio budget). I had to find a good setup that would not rely on any external solution. So my Renoise-based sound results from the combination of some « mixing methods » you probably already know, all directly usable inside the Renoise interface.

To sum things up, I built more efficient Renoise-based mix setups once I dropped the idea of finding a providential VST plugin that I’d put on the master track and that would magically produce the « interesting » studio sound quality I expect.

I sculpt my instruments first

Okay, you’ve got a sample pack. But the samples are often dirty, untuned, « poppy » (they produce clicks or pops), or unnormalised, which will later produce big problems when controlling the volume with compressors. Sometimes some samples are cool, but even if they have no glitches inside, they are « flat » and have no presence. If you want to fix things, you’ll probably have to use a « 1 instrument = 1 track » philosophy.

* glitches & pops
So, first of all, cleanse every glitch: use the draw pen tool in the sample editor, zoom the view to the max, and redraw the curve where you see glitches or suspicious notches. Then check the instrument settings tab: every slider here is important! Each sample can get its own volume level, even in multisampled instruments. DO NOT start composing without this first stage in the process; you would regret it later. If your instrument produces clicks or pops when played, just create a Volume-type envelope with a slower attack.

* fix the hiss
You’ve recorded your voice? Your crappy microphone produces a resonant hiss? OK, use the spectrum analyser. Put the mouse cursor (+) where the resonant hiss visually appears and write down the frequency concerned. You realise it’s located @ 5KHz? OK, add a filter to your recording track, with a band-stop model for example, define a fine Q, and cut the frequency where the hiss appears. If you want things to be neat, you can apply the effect directly to your sample so that it’s clean for good.
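For the curious, the same move done offline in Python, assuming scipy is available (the 5 kHz frequency and the Q of 30 are example values; use whatever the analyser actually shows you):

```python
# Notch out a resonant hiss with a narrow band-stop filter.
from scipy.signal import iirnotch, lfilter

def remove_hiss(x, hiss_hz=5000.0, q=30.0, sr=44100):
    """Apply a narrow band-stop (notch) at the hiss frequency."""
    b, a = iirnotch(hiss_hz, q, fs=sr)
    return lfilter(b, a, x)
```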

* preamp that mike
Do you think your vocal recordings are not as loud as expected, or as loud as the other instruments of your selected soundset? No problem: use a Mixer EQ, it will behave like a virtual pre-amp. Then apply the effect. Try to « normalize » all your samples so that their output level is psychoacoustically « similar ». Just try to start working with the highest sample volume from the beginning; after that you’ll have just one philosophy: use EQ (or filters) to reduce or cut some frequency bands.
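If you’d rather batch this outside Renoise, here’s one hedged way to do it in Python with numpy; RMS is only a rough stand-in for « psychoacoustically similar », and the -18 dB target is an arbitrary example:

```python
# Normalise to a common RMS target rather than to the peak.
import numpy as np

def rms_normalize(x, target_db=-18.0):
    rms = np.sqrt(np.mean(x ** 2))
    gain = 10.0 ** (target_db / 20.0) / max(rms, 1e-9)
    return np.clip(x * gain, -1.0, 1.0)   # the clip guards the odd hot peak
```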

Then play with your instrument on your QWERTY keyboard.

* multilayer
Does it sound flat? Okay, no problem: multilayer it (I recommend 4 or 5 layers with 4 or 5 copies of the same sample) and finetune each sample on each layer with very subtle differences. Define some large stereo panning for each sample (-25 for the first, +25 for the second, and +50 for the last one) and you’ll get a better-sounding instrument from the start. This technique, based on slight detunes of sample copies, is often used in trance synths, but you can also use it in lots of other genres with satisfying results. Note that multilayering/detuning isn’t recommended for everything: low-frequency sounds have to be centered, not enlarged. And be careful with the multilayer thing: your instrument volume will logically be multiplied by 4 or 5, so reduce each layered sample’s volume!
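Here’s a rough numpy sketch of that layering recipe (the cent offsets and pan positions are illustrative choices, not a recommendation):

```python
# 4 detuned copies of one sample, panned apart, with per-layer gain cut
# so the stacked volume stays sane.
import numpy as np

def detune(x, cents):
    ratio = 2.0 ** (cents / 1200.0)            # pitch shift by naive resampling
    idx = np.arange(0, len(x) - 1, ratio)
    return np.interp(idx, np.arange(len(x)), x)

def multilayer(x, cents=(-9.0, -3.0, 3.0, 9.0), pans=(-0.25, -0.08, 0.08, 0.25)):
    layers = [detune(x, c) for c in cents]
    n = min(len(l) for l in layers)
    left, right = np.zeros(n), np.zeros(n)
    gain = 1.0 / len(layers)                   # compensate the x4 volume bump
    for layer, p in zip(layers, pans):
        mono = layer[:n] * gain
        left += mono * (1.0 - max(p, 0.0))     # crude linear pan law
        right += mono * (1.0 + min(p, 0.0))
    return np.stack([left, right])             # (2, n) stereo result
```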

* the final touch to your instruments
Now EQing. When your sample is ready, and the instrument envelopes and eventual layerings are defined, you can add a last touch to your sound with other filterings through envelopes in the instrument settings tab, and an EQ in the track dedicated to your instrument. Your instrument can sound radically different with a specific EQ. The first EQ you add to your chain will just give the instrument a special coloration. With the new Renoise EQ5 or EQ10, you get an even easier way to change frequency bands without using anything but the mouse. I must confess that since I start my chains with the highest possible volume, I tend to use my EQs more like filters, to lower some frequency bands rather than raise them.

More presence: the DSPs I like to use in individual tracks

* Gainer
Everybody knows how to improve the loudness of the sound simply with the Gainer unit, which adds a 12dB boost. And I do it like everybody else, so you can do it like me too. But with params that are too high, the Gainer will produce unwanted peaks / transients.

* Distortion
You should then try the « Distortion » unit. Some instruments can go through certain distortion models without losing their characteristics too much, and the advantage is that you easily get quite an empowered sound, with no bothersome transients. The problem with distortion is that it is often « harsh ». You can willfully choose to sound harsh and aggressive, but this strategy becomes completely useless when you need to add emotion and a sensitive touch to your lines.

* Cabinet simulator (+ Auto DC Offset)
You can then try the strange Cabinet Simulator, which integrates a nifty pre-amp and a 5-band EQ stage in its internal routing, so that you can cut unwanted side effects with proper EQing. Note that I’d recommend using an Auto DC Offset after a Cabinet Simulator, especially when the chosen parameters are very high. The Cabinet Simulator has a strong personality, and you’ll sometimes need this Auto DC Offset DSP at the end.

* Delay (<30ms) & larger stereo image / track width / surround (in the Stereo Expander DSP)
The other set of tools you can use for better presence are those that control the stereo image, directly or indirectly. For example: the track Width parameter (located on the left side of every track DSP chain). Or you can add a Stereo Expander’s surround. Or you can use the Delay DSP unit. Think about the power of delays, not only long ones but also very small stereo delays of less than 30ms. The larger the stereo image, the better the presence. BUT be extremely careful with <30ms stereo delays: the more feedback you add, the more the sound volume increases and the less you control the resulting saturation… so use very, very low (or zero) feedback rates. Warning: I DO NOT recommend using the stereo-widening technique on lower-frequency sounds such as kicks and sub bass, because those sounds already collide at the center of the stereo field and don’t need to be amplified further. Of course, the Chorus effect is supposed to increase presence, but what you get in DAWs is often not a « real » chorus, it’s an emulation; I’d rather use something more natural, like duplicating the track’s note column and adding some pattern-command-based vibrato on the duplicated column.
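Stripped of everything else, the small-delay widening trick looks like this in numpy (15 ms is an example value; note the zero feedback):

```python
# Haas-style widening: mono in, stereo out, one channel delayed by < 30 ms.
import numpy as np

def widen(mono, delay_ms=15.0, sr=44100):
    d = int(sr * delay_ms / 1000.0)
    left = np.concatenate([mono, np.zeros(d)])
    right = np.concatenate([np.zeros(d), mono])   # delayed copy, no feedback
    return np.stack([left, right])
```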

* EQs
Now EQ. My EQs are more and more often placed at the very end of my regular track chains. We’ve seen that you can add a specific color to your instrument with an EQ. But I must say I use EQs to fix little things on a track/instrument that is globally too loud in certain bands, which costs necessary headroom and, in the end, dynamic range. So I often use them to lower frequencies, since I always start with very loud instruments. That’s globally how I simplify the mixing phase and make it faster.

Sidechain « compression » / Sidechain « distortion »

* follow me
This technique uses the *Signal Follower meta device. It is used to keep a fat, constant, punchy level on BIG FAT instruments like kickdrums. Why is it needed nowadays? Because the kickdrum sound has become very… massive, and the distorted / amplified bass sound has become just as massive, and the two can’t fit together in the spectrum without destroying each other. That’s why you’ve got to make a choice and compress or un-distort the bass sound, but only when the kick level is too high. « Sidechaining » the compression means that the compressor / distortion action is modulated by the level of the signal coming from the kick track. In Renoise, you put a *Signal Follower device on the kickdrum track and link it to any fat bass track.

IMPORTANT (1): in the *Signal Follower, you’ve got to reverse the Low and High params so that the threshold rises when the detected signal in the source track is low. Sometimes you don’t even need a compressor on the target track; just lowering a « distortion » level a bit is often satisfying. That’s what I call sidechain distortion.
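As a rough offline model of that routing, in Python with numpy (the release time and ducking amount are arbitrary example values; the real *Signal Follower has more controls than this):

```python
# Envelope followed on the kick, inverted, then used to duck the bass gain.
import numpy as np

def envelope(x, sr=44100, release_ms=80.0):
    """Peak follower: instant attack, exponential release."""
    coeff = np.exp(-1.0 / (sr * release_ms / 1000.0))
    env = np.zeros(len(x))
    level = 0.0
    for i, v in enumerate(np.abs(x)):
        level = v if v > level else level * coeff
        env[i] = level
    return env

def sidechain_duck(bass, kick, amount=0.8):
    env = envelope(kick)
    gain = 1.0 - amount * env / max(env.max(), 1e-9)   # loud kick -> low gain
    n = min(len(bass), len(gain))
    return bass[:n] * gain[:n]
```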

IMPORTANT (2): don’t compress / modulate sounds that don’t fight with the one you want to preserve. I mean you should not lower, for example, the high frequencies of a pad, especially if they do no harm to the kickdrum sound. That’s nevertheless a mistake some producers make: they want the kick to be heard so much that they put a sidechained compressor on everything. Of course, that means nothing is more important than the kickdrum and the rest can be lowered, which is a cynical definition of successful music.

NYC / Parallel Multiband Compression

* the wrong multitrack overcompression strategy

Compression is very easy to understand but a very hard thing to master. You don’t compress a bass, a kick, or a snare the same way. The shape of a sound defines how the compressor has to behave (slow or fast attack, slow or fast release). You can of course put a compressor on each track, on each instrument. But with all those compressors you get several problems that only show up when you mix all your tracks together. First there are the compression unit delays to compensate; thanx to Renoise this is now automatic, but it consumes precious CPU power. And if you add too many compressors on your dozens of tracks, each instrument will easily take all the space, and you won’t be able to create more complex and living layered melodic structures. You’ll get the feeling that everything is saturated; the listening experience will quickly become uniform, without any dynamic range, and increase ear fatigue. I’d recommend instead that you compose your music around a compromise, with a simpler and typical mix setup called « parallel multiband compression ».

* compress frequency bands instead of tracks
Instead of compressing each different track, you compress different frequency bands. For now, the *Multiband Send meta device lets you treat 3 different frequency bands separately. I’d like to have 4 bands… like in the Ozone thing, but 3 bands is already quite good and effective.

* to preserve some dynamics, make it parallel
When you compress something, it’s bigger and warmer, BUT if some transients are cancelled, you increase the overall uniformity of the sound, with the feeling that the sound depth is always the same: in front of you, massive, in your face. When we speak about parallel compression, we’re just talking about mixing a compressed sound with the non-compressed sound. Wisely mixing the original sound AND the compressed sound preserves some interesting dynamics and lets you keep some useful depth range inside the listening field. Parallel (or NYC) multiband compression uses this kind of strategy, and finally gives you a high level of clear sound across the whole spectrum while some depth is still available; that’s the most interesting thing.

You build a NYC MB compressor easily within Renoise through an appropriate SEND TRACKS setup (see the sketch below). WARNING: mixing 2 sounds means the overall sound is naturally louder, and this technique is so powerful that I’d recommend LIMITING the sound in the master track (with a Maximizer), and even adding a simple B-W 4n filter that progressively cuts the infra-sub sounds below 50Hz, just before the limiter in the chain. If you don’t cut those infrasounds, they end up taking a lot of space in the headphones/speakers and often produce unwanted ear fatigue.
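To make the routing concrete, here’s a toy numpy/scipy model of the whole NYC MB idea; the compressor is a static gain curve (no attack/release), and the crossover points simply mirror the 0.15 / 4.5 KHz values used in the settings below:

```python
# 3-band split, per-band compression, then the dry signal KEPT in the mix --
# that final dry/wet blend is what makes the compression "parallel".
import numpy as np
from scipy.signal import butter, sosfilt

def split_bands(x, sr=44100, low=150.0, high=4500.0):
    lo = sosfilt(butter(4, low, btype="lowpass", fs=sr, output="sos"), x)
    mid = sosfilt(butter(4, [low, high], btype="bandpass", fs=sr, output="sos"), x)
    hi = sosfilt(butter(4, high, btype="highpass", fs=sr, output="sos"), x)
    return lo, mid, hi

def compress(x, thresh_db=-24.0, ratio=3.0, makeup_db=6.0):
    """Static curve: gain reduction above threshold, plus makeup."""
    level_db = 20.0 * np.log10(np.abs(x) + 1e-9)
    over = np.maximum(level_db - thresh_db, 0.0)
    gain_db = -over * (1.0 - 1.0 / ratio) + makeup_db
    return x * 10.0 ** (gain_db / 20.0)

def nyc_multiband(x, sr=44100):
    wet = sum(compress(band) for band in split_bands(x, sr))
    return 0.5 * x + 0.5 * wet     # the KEEP: dry + compressed in parallel
```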

* How to define the Maximizer’s ceiling level (manually)

Concerning the Maximizer, the Ceiling parameter can be very different between two songs… so you’ll have to determine it « manually ». Here’s how: first disable Soft Clipping, then turn Auto-Gain ON, then set the Ceiling value to zero. To increase the presence, increase the Boost value. The more you increase it, the more the sound will be saturated. However, the Auto-Gain may lower the final volume further; in the end you could lose what you’ve won. If that happens, try to lower the Ceiling value a bit, BUT be aware that lowering the Ceiling can also lower the dynamic range somewhat. Also, if you add 3dB to the Boost value, you should not remove more than 3dB from the Ceiling level.

* excite some harmonics

We’ve seen that you need to uncheck the Soft Clipping button in the master track, which avoids a loss of control, a redundant limiting mechanism, and possibly a loss of signal punch. If you re-check it later for any reason, or if your compression levels are high, you should think about adding a harmonic exciter at the end of the setup. Good thing: an exciter was introduced in version 2.8. I’d recommend something « subtle » with this beast, because if you’ve previously degraded / aliased (through a LoFi DSP) or distorted (through the Distortion unit) your sound, this tool is really able to sharpen it so much that it can become very irritating, especially in the higher frequencies. I even recommend adding another extra LP filter at the end of your mix to cut what goes beyond 21 KHz, which is very harsh when sharpened. Those sharpened sounds are often crystal-clear; you may love them at first but then have difficulties limiting the whole result with the Maximizer. So keep in mind that everything that increases / improves the overall sound implies a headroom loss as a drawback. That’s another good reason to use this beast with a lot of care, because a smaller headroom will make your final limiter (the Maximizer) less efficient in the end.

* the mono solution

Your instruments are often panned or autopanned, and reverbs leave moving signal tails in unexpected places of the stereo field. The headache comes with the low frequencies, which eat the higher ones; when you automate the panning of low frequencies away from the center of the listening field, fixing their level quickly becomes a pain if they move all the time between the center and the borders. You’ve got a simple way to sort it out for good: turn your first band (the low frequencies) into pure mono. That’s radical, but it works. Put a Stereo Expander device in the first send track of your multiband mechanism and select the mono preset: you won’t regret it.
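In code, the whole trick fits in a few scipy lines (the 150 Hz crossover is an example value):

```python
# Below the crossover both channels get the same mid signal,
# so the low end stays glued to the centre of the field.
from scipy.signal import butter, sosfilt

def mono_lows(left, right, sr=44100, crossover=150.0):
    sos_lo = butter(4, crossover, btype="lowpass", fs=sr, output="sos")
    sos_hi = butter(4, crossover, btype="highpass", fs=sr, output="sos")
    mono = 0.5 * (sosfilt(sos_lo, left) + sosfilt(sos_lo, right))
    return mono + sosfilt(sos_hi, left), mono + sosfilt(sos_hi, right)
```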

* don’t sound like me by default
Here are some « default » parameters for the NYC MB compression setup. WARNING: first of all, I’ve fucked up my ears (and recently my speakers), so the parameters are highly influenced by my musical choices, my (crappy) sample choices, and my physical and hardware limitations. And like everything defined « by default », it’s not adapted to every situation, as I already said at the beginning. If you want to sculpt YOUR sound you’ll have to tweak it a bit, otherwise you’ll sound like me and not like YOU. Globally, you’ll first tweak the « attack » parameter and the frequency bands according to the spectral response of your selected instruments. A longer attack will preserve some punch in your kicks if they sit in the first band, for example. But if most of the kick sound sits in the second band, you should define the longer attack there instead. Then you’ll play with the makeup. Just try values, but nothing higher than 12dB. You’ll notice that you need more makeup on the lower bands than on the higher ones.

Master track

*Exciter (Renoise 2.8 only)
Low : 0.15 KHz | High : 4.5 KHz | Mid-Side on each band, with nothing > 20% Sharpness max, and 100% Amount. Be careful with it: think about your Maximizer, which can have trouble with unexpected crystal transients; think about the parallel mechanism, which needs a 50% mix between the original sound and the compressed sound and may not work as well as expected with overly high parameters; think about the headroom, which needs preserving. So really, if you want to strongly improve some dull sounds, put the exciter directly on a track’s chain and play with the *Exciter there, NOT so much within the NYC MB compression scheme.

*Filter (HP)
Butterworth 4n
HP Cutoff 0.05KHz
Inertia Instant

*Filter (LP) – optional, only if your mix produces some hard/irritating HF sounds – see the *Exciter comments
Butterworth 4n
LP Cutoff between 20.0 KHz and 21.0KHz
Inertia Instant
Note: in some of my mixes I’ve put this filter AFTER the *Maximizer, which looked logical & efficient at first, BUT in other mixes it lowered the dynamic range of my track in the end (causing the AGC, automatic gain control, to react in a wrong way). So test this filter before the Maximizer first, define your Maximizer settings, then test it after; if placing it after maximization doesn’t change anything in your levels, keep it after the Maximizer.

* You MUST check the average input level of the sound in the *Maximizer, and do something to make it close to -12dB!

Arrange your mix as you want: tweak the values in your compressors’ makeup, lower thresholds, level, filter, sidechain, set the track levels on the mixer, but make sure the fucking overall sound level, taken JUST BEFORE the *Maximizer, keeps a minimum « headroom » of 10 to 12dB (some would rather choose 18 or 20dB). If you don’t know how to properly measure this minimum margin of -12dB, just look at the post-mixer VU-meter of the Master track, located at the top of Renoise: your overall cumulative levels must sit in the middle of the VU-meter, and should push toward the right side by 20% to 25% during choruses, for example. If necessary, install a free K-metering external plugin for a more accurate visualization of the output level. If you don’t follow this simple idea of preserving the dynamic range, the *Maximizer will simply saturate / distort the sound to death, and mixing and mastering the song will be like driving madly in the fog: the required auditory discrimination space will decrease dramatically, and all your efforts to create a compromise between power and fidelity will be destroyed.
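If you’d rather measure than eyeball, here’s a minimal numpy check of that pre-Maximizer margin (peak-based, so it’s only a rough proxy for what the VU-meter and a K-meter show you):

```python
# Headroom = how many dB the peak sits below full scale (0 dBFS).
import numpy as np

def headroom_db(x):
    peak = np.max(np.abs(x))
    return -20.0 * np.log10(max(peak, 1e-9))

def check(x):
    h = headroom_db(x)
    print(f"headroom: {h:.1f} dB " + ("OK" if h >= 10.0 else "-> back off the levels"))
```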

*Maximizer
Boost 0.0dB (can be raised to 1 or 2; you may have to raise it to 3 max, but don’t go further; the Boost will often be the positive counterpart of the manually defined Ceiling param)
Threshold 0.020dB
Peak Rel. 1ms
Slow Rel. 60ms
Ceiling : zero by default; manually lower it if the auto-gain reacts badly (see my previous explanations)

Send name : NycMBC

*Multiband Send
Amount 1, 2 & 3 : 0.00 dB
KEEP, Stage1
KEEP, Stage2
KEEP, Stage3
LOW 0.15KHz
HI 4.50KHz
Allpass filter mode (2.8 only) : avoid the LR2 model, which in some situations creates phase shifts & partial sound cancellations. The Steep FIR has the best quality (but warning, it eats a lot of CPU power and muffles some sounds); a good compromise is the LR8 model, which combines punch/aggressivity and clarity.

Send name : Stage1

* Stereo Expander
Preset : mono

*Bus Compressor
Thresh -24dB
Ratio 3:1 << you’ll have to change it and adapt it to your needs, style, samples, and feelings
Attack 0.10 << you’ll have to change it and adapt it to your needs, style, samples, and feelings
Release 1ms << you’ll have to change it and adapt it to your needs, style, samples, and feelings
Makeup 12dB
Knee 0.00dB

Send name : Stage2

*Bus Compressor
Thresh -12dB
Ratio : 1.5:1 << you’ll have to change it and adapt it to your needs, style, samples, and feelings
Attack 0.50 << you’ll have to change it and adapt it to your needs, style, samples, and feelings
Release 1ms << you’ll have to change it and adapt it to your needs, style, samples, and feelings
Makeup 6dB
Knee 0.00dB

Send name : Stage3

*Bus Compressor
Thresh -0.20dB
Ratio : 20:1 << you’ll have to change it and adapt it to your needs, style, samples, and feelings
Attack 0.10ms << you’ll have to change it and adapt it to your needs, style, samples, and feelings
Release 1ms << you’ll have to change it and adapt it to your needs, style, samples, and feelings
Makeup 6dB
Knee 12dB

*** Please don’t forget to KEEP the sound in your *Multiband device sends; otherwise you’ll have a simple multiband compressor, not a true parallel multiband compressor.

Exceptions

When this setup is ready, just route every individual track to the NycMBC send track and MUTE the sound in the source tracks.

When you need a lead track, or an instrument that is already very loud, to be more easily noticeable (front-line perception), you can sometimes avoid sending it to the multiband compressor track and route it directly to the master channel. A track/instrument that escapes a global treatment gets noticed by the human brain. Note that this exclusion has to be unique and exceptional.

Note editing tips

* NNAs or not NNAs:

My philosophy is to globally use one track/column for one instrument/sample, first of all because it’s easier to work on complex arrangements with that rule and keep a clear vision of the setup.

Once your music has lots of loud instruments, those loud instruments « fight » together in the spectrum. If you want all the available spectral space for your sounds, you should completely stop whatever has no interest anymore. You’re not obliged to play samples « until their end ». You’ve got to know that the human ear « sorts » sounds, concentrates / focuses on the new ones, and internally lowers or even doesn’t perceive the old ones. In fact the human brain processes sounds, and sometimes, when a sound is heard repetitively, the brain continues to « hear » it exactly the same even if its tail has been cut (it’s called remanent perception). Considering these psycho-acoustic facts, sustaining some big sounds is sometimes a futile strategy that wastes useful spectral space & headroom. So you’ll sometimes have to CUT the sustained notes of kicks, basses, keyboard hits, and whatever has a big sound impact, just when a new note happens.

To cut the sounds at the right time, you’re lucky: you’ve got a « soundtracker », which « naturally » cuts the sound when a new sample note is inserted in the column line. You just have to carefully group the samples in the same track / column where you want a sample to be cut when a new one happens.

This method gives quite good results with drumkits. Warning: if you defined specific NNAs like « Continue » or « Note off » in the instrument settings, the tracker will create a new track « internally », on the fly, so that the old note sustains when a new one happens. You should only use NNAs for things like pads, crash cymbals, and background melodic lines.

It’s up to you to define clearly, in the instrument settings, whether a sound should continue or not when a new note happens in the same column of the same track. If you don’t want your sounds to be cut in an unnatural way, you can always add a very subtle reverb with a small room size…

When you work in Realtime, Render in Realtime [registered version only]

You’ll have understood that a registered version of Renoise can render songs: i.e. export a WAV file that can be played on your system or converted into a .flac, .mp3, .wma, .ogg, … file format to be shared online in your favourite community. And the registered version has more than one rendering mode: you’ve got the offline mode & the realtime mode.

Rendering a song offline theoretically gives you more output options (more sample rates, more bit depths, more interpolation modes). It can be cool if you target an output format & quality higher than what your limited CPU or laptop can produce in realtime. However, there are very subtle differences between 44KHz and 48KHz computations for some DSPs. There are also subtle but real differences in the overall mix results, especially when you’re using IR-based VSTs (impulse-response reverbs, for example) or even some native IR-based DSPs (Cabinet Simulator, Multiband device…). You’ll quickly understand that the overall result can be as much « different » as « better », and this « difference » can be so important that all the previous mixing & mastering efforts are lost.

I highly recommend working your mix in the « target format » you expect to reach in the end. If you work in 44KHz 16bit, export in 44KHz 16bit. If you work in 48KHz 24bit, export in 48KHz 24bit. Modifying the samplerate during the rendering phase will force the renderer, some internal timers in VSTs, and some DSPs to behave slightly differently. If you want to be sure that what you initially hear is what you finally get, the only way, for now, is to choose the realtime rendering mode.

And that’s all folks

What? Nothing else? Hey, just try what I’ve recommended first; we’ll see, when you’ve finished, whether you really need anything more.

kurtz/lapiNIC