Mixing can often be as demanding as writing the song itself. For bedroom producers in particular, mixing your own tracks can lead to disappointing results because you never hear them with fresh ears. I’m going to share a trick for mixing a song better, and surprisingly, it isn’t carried out during the mixing stage at all: it happens in the very first decisions of the songwriting process.

Let’s look back a few hundred years. Music producers (or should I say composers) used a sheet of paper to produce a track. Their music would eventually be performed as an orchestral piece, and the performance was always live. That meant mixing simply wasn’t possible. So how did they get it to sound right?

The reason an orchestra has so many instruments is to cover a broad range of frequencies and timbres. Look at the strings alone and you have violins, violas, cellos and double basses – and often plenty of each. If you write a piece and want more low end, you increase the number of cellos and double basses. This sounds much better than writing only for violins and then boosting the low end with an EQ.

We can see from the music of the past that, by developing melodies with the right instrumentation, it is entirely possible to write a song that almost mixes itself. This can be very handy if your music theory skills are better than your mixing skills.

So here’s an exercise for you to try: write a classical piece of music in your DAW. Versillian Studios have released an excellent collection of free orchestral samples and SFZ instruments for you to play about with. In addition there are free plugins like DSK Virtuoso, and here’s a brief list of several more. The aim is to write a song and get a feel for how you can use the different instruments to strike a balance in the mix before you have added any effects. Use only the instruments/samples with no extra processing, and see how well you can mix the track with instrumentation, panning, and volume alone. Really plan which octaves and harmonies sit well with each instrument, using their natural ranges to your advantage. A really good article by audiorecording.me shows some typical panning positions for the instruments, as well as reverb settings if you want to continue working on the song you’ve made.
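If you’re curious what a DAW’s pan knob is actually doing when you place those orchestral sections, here’s a minimal sketch, assuming the constant-power pan law that most DAWs approximate (the `constant_power_pan` helper name is mine, not from any particular DAW):

```python
import math

def constant_power_pan(pan: float) -> tuple[float, float]:
    """Constant-power pan law (an illustrative sketch).

    pan: -1.0 = hard left, 0.0 = centre, +1.0 = hard right.
    Returns (left_gain, right_gain). Because left^2 + right^2 == 1,
    the total acoustic power stays constant as the source moves,
    so a panned instrument doesn't appear to get quieter.
    """
    theta = (pan + 1.0) * math.pi / 4.0  # map [-1, 1] onto [0, pi/2]
    return math.cos(theta), math.sin(theta)

# A centred source sits at ~0.707 (-3 dB) in each channel, not 0.5 --
# this is why centre-panned sounds don't drop in perceived loudness.
left, right = constant_power_pan(0.0)
```

This is only one common pan law; some DAWs offer -4.5 dB or -6 dB centre options instead, which change the mapping but not the idea.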

Once you’ve tried that, think about it the next time you are designing a synth patch: could the sound be achieved with more than one patch? For example, a Skrillex-style “toot” could be made with separate synth tracks for the sub and the highs. There is a good chance this will sound better than forcing low end out of a single patch with an EQ or exciter, only to find it becomes a muddy mess. Separating the sub frequencies from the main timbre of a synth can be essential if you are heavily processing a sound. In particular, reverb is great on the higher frequencies, but you really don’t want reverb on the sub.
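To see why splitting the sub from the highs loses nothing, here is a rough Python sketch of a first-order crossover (the `split_sub` helper and the 120 Hz crossover point are my own illustrative choices, not how any particular synth or DAW does it). The highs are simply whatever the low-pass removed, so the two tracks sum back to the original exactly:

```python
import math

def split_sub(signal, sample_rate, crossover_hz=120.0):
    """Split a mono signal into (sub, highs) around crossover_hz.

    Uses a one-pole RC-style low-pass for the sub; the highs are the
    complementary remainder, so sub[i] + highs[i] == signal[i] exactly
    and the split itself is completely lossless.
    """
    dt = 1.0 / sample_rate
    rc = 1.0 / (2.0 * math.pi * crossover_hz)
    alpha = dt / (rc + dt)  # one-pole low-pass coefficient

    sub, highs = [], []
    low = 0.0
    for x in signal:
        low += alpha * (x - low)   # low-pass state update
        sub.append(low)
        highs.append(x - low)      # everything the low-pass removed
    return sub, highs

# Example: a 50 Hz sub tone mixed with a quieter 2 kHz tone.
sr = 44100
sig = [math.sin(2 * math.pi * 50 * n / sr)
       + 0.5 * math.sin(2 * math.pi * 2000 * n / sr)
       for n in range(sr)]
sub, highs = split_sub(sig, sr)
# Reverb or other heavy processing can now go on `highs` only,
# leaving the sub band clean and mono-compatible.
```

A first-order crossover like this is gentle (6 dB per octave), so in practice you would reach for a steeper filter, but the principle of processing the bands separately is the same.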

Now let’s assume you have a well-composed song where things fit together, and look at adding some effects to really gel it. One thing to remember is that effects can drastically alter how we perceive sounds, and can turn natural sounds into unnatural ones. Imagine an orchestral piece where the violins had to be drastically EQ’ed to fit in the mix: it wouldn’t sound right, because they would have lost the frequency “fingerprint” we identify as a violin.

A rule of thumb I use is the following order:

1. Vocals
2. Real instruments
3. Synths

That is the order of priority in which you need to preserve the sound. The human ear and brain are very sensitive to the human voice for obvious reasons, so it is immediately noticeable when something is not quite right with a voice. That, coupled with vocals normally being the main focal point of a track, means that where possible you should try to make everything fit around the vocals rather than vice versa. One big exception is reverb, and with a little thought it becomes obvious why: reverb is a natural phenomenon we hear in every environment, so a level of ambience is always anticipated alongside the natural qualities of an isolated vocal.

Real instruments come second because we are very familiar with their sound. If you have a trumpet melody over a synth sound you created yesterday, people are more likely to notice heavy processing on the trumpet than on a synth patch they have never heard before. This is why synths come third.

Likewise, you might listen to a mix and realise you want to rewrite or re-record something. You obviously don’t want to send the singer home because you’d rather have someone else, so try changing the synth sounds first. That could mean flicking through different presets to see what works best, or changing the octaves, or even the melodies, of the synths.

If that doesn’t work, look towards the “real” instruments: would the same melody be better played by a deeper viola instead of a violin? Perhaps that piano melody clashes with the singer’s frequencies, and you need to move it up an octave. All of these decisions keep the song itself intact while changing it just enough that everything is better balanced before it enters the mixing stage.

So while nothing beats hours of practice with mixing itself, you can make your life easier early on. A song that sits well together from the start will remain more “intact” through the mixing and mastering stages, and come out as clean as you want it to be.