Music is omnipresent. Everywhere we go in our day-to-day lives it surrounds us (except, curiously, in Aldi stores; have you ever noticed that no music is played there?). I am certain that once early humans had crafted the tools essential for survival, they turned their minds to the arts, including creating sounds and melodies for their enjoyment: sounds that emulated those they heard in nature and the world around them. Music resonates with the human spirit. It can take us to places in our past, bring peace to an anxious or grieving heart, or lend confidence and encouragement to an advancing army. Like most of us, I have been listening to music all my life, but was I ‘really’ listening? Was I analysing what I was hearing, critically focused on how the sounds that made up the whole were created, and to what end? The short answer, for me, is ‘no’.
Despite being a classically trained pianist and clarinettist (where I merely scratched the surface of this proficiency), I have now chosen a career as an audio engineer, where critical listening is a skill paramount to my success and one that requires a great deal of training and practice. What a listener hears is ultimately controlled, using hardware and software, by the engineer, who acts as a gatekeeper of sorts between the performer and the recipients of the song. As Corey (2012, p. ix) states: “Most of these subjective decisions are in response to the artistic goals of a project, and engineers must determine, based on what they hear, if a technical choice is contributing to or detracting from these goals. Engineers need to know how the technical parameters of audio hardware and software devices affect perceived sonic attributes”.
The purpose of this blog entry is to provide a concise framework with which to begin this journey of ‘deconstructing’ a piece of music: to understand the techniques employed to deliver certain sounds and to learn the skills essential to becoming a successful sound engineer. To this end, I will describe the main elements of the critical listening process, which after much research I believe can be largely divided into the following areas:
- Dynamic Range
- Spectral Balance
- Spatial Characteristics
Dynamic Range:
“Dynamic range in the musical sense describes the difference between the loudest and quietest levels of an audio signal” (Corey 2012, p. 78). All instruments (including the human voice) have a dynamic range, some with a wider reach than others. Different styles of music therefore tend to have different dynamic ranges, due to the instruments prevalent in that genre or style. Dynamics are a key element of a piece of music and help express the artist’s intent to the listener, whether it be a feeling of excitement, despondency or sentimentality. The dynamics of a particular song, however, are something an engineer needs to be quite attentive to, as any extreme differences in volume need to be managed effectively in order to deliver a comfortable listening experience to the audience. Several hardware and software devices are available to an engineer to achieve this goal, including limiters, gates, faders and compressors.
Whilst using a fader (volume controller), either manually or via automation, may seem an easy fix, it is far more common for an engineer to use ‘compression’ to ensure any peaks in amplitude are reduced to an acceptable level. A compressor works by ‘squashing’ any part of the audio signal that rises above a predetermined threshold, reducing the overshoot by a set ratio. The wonderful thing about compression is that, with the peaks brought back to an acceptable level, the softer (quieter) parts of the piece can then be raised using ‘make-up gain’, which usually results in a more consistent and enjoyable listening experience.
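To cement the idea in my own head, here is a rough Python sketch of the gain computation at the heart of a compressor. The function name and the default threshold, ratio and make-up values are purely illustrative, not taken from any particular device:

```python
def compress(sample_db, threshold_db=-10.0, ratio=4.0, makeup_db=6.0):
    """Reduce level above the threshold by the given ratio, then apply make-up gain.

    sample_db: input level in dBFS (decibels relative to full scale).
    Returns the output level in dBFS.
    """
    if sample_db > threshold_db:
        # The peak is 'squashed': only 1/ratio of the overshoot
        # above the threshold passes through.
        out_db = threshold_db + (sample_db - threshold_db) / ratio
    else:
        # Below the threshold the signal passes unchanged.
        out_db = sample_db
    # Make-up gain then lifts the whole (now quieter) signal back up.
    return out_db + makeup_db
```

So a peak at −2 dBFS, with a −10 dBFS threshold and a 4:1 ratio, comes out at −8 dBFS before make-up gain; quieter material below the threshold is untouched and simply benefits from the make-up gain.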
Spectral Balance:
Spectral balance is a term used to describe the frequency range of a sound source and how powerful (in amplitude) each frequency is within that range. Each instrument occupies a specific frequency range, as can be seen in the chart below.
Knowing these frequency ranges greatly assists an engineer when using one of their most powerful tools, the EQ (equalizer). As you can see from the chart above, whilst each instrument has its own characteristic range, most instruments overlap, and when seeking to create a successful mix an engineer needs to ensure that each instrument (including vocals) can be heard and is not overwhelmed by another. The equalizer can be used to reduce frequencies in one sound source that are interfering with another you want to be more present in the overall mix. Another handy tool for the engineer is the low- or high-pass filter (example high-pass filter below, where the x axis is frequency and the y axis is level in dB). You will note that the software ‘plug-in’ filter in this example has been set to cut all frequencies from 200 Hz downwards.
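Out of curiosity, here is a minimal Python sketch of a first-order high-pass filter of the kind such a plug-in implements. This is the simplest possible design (real plug-ins offer much steeper slopes), and the function name and defaults are my own:

```python
import math

def high_pass(samples, cutoff_hz=200.0, sample_rate=44100.0):
    """First-order high-pass filter: attenuates content below cutoff_hz."""
    rc = 1.0 / (2.0 * math.pi * cutoff_hz)  # analogue RC time constant
    dt = 1.0 / sample_rate                  # time between samples
    alpha = rc / (rc + dt)                  # smoothing coefficient
    out = []
    prev_in = prev_out = 0.0
    for x in samples:
        # Each output is built from the change in the input, so steady
        # (low-frequency) content decays away while fast changes pass.
        y = alpha * (prev_out + x - prev_in)
        out.append(y)
        prev_in, prev_out = x, y
    return out
```

Feeding it a constant (0 Hz) signal, the output decays towards silence, while a rapidly alternating signal passes through almost untouched: exactly the ‘cut everything from 200 Hz downwards’ behaviour described above, just with a gentler slope.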
These filters can remove or reduce certain frequencies from a sound source that may have been causing interference in either the low or high frequency range. The goal for a budding sound engineer is of course to be able to ‘hear’ when the spectral balance is not working and, through knowledge of every instrument’s fundamental frequencies, to understand ‘why’. Corey (2012, p. 25) elaborates: “To determine the equalization or spectral balance that best suits a given recording situation, an engineer must have well-developed listening skills with regard to frequency content and its relationship to physical parameters of equalization: frequency, gain, and Q. Each recording situation calls for specific engineering choices, and there are rarely any general recommendations for equalization that are applicable across multiple situations. When approaching a recording project, an engineer should be familiar with existing recordings of a similar musical genre or have some idea of the timbral goals for a project to inform the decision process during production”. A tool that can assist in this learning, however, is a spectral analyser, which shows the frequency ranges present in a recorded signal and the power each frequency is emitting.
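To see what a spectral analyser is actually computing under the hood, here is a naive Python sketch of a magnitude spectrum via the discrete Fourier transform. A real analyser uses an FFT plus windowing and is vastly faster; the names here are my own:

```python
import math

def dft_magnitudes(samples, sample_rate):
    """Naive DFT: returns a list of (frequency_hz, magnitude) per analysis bin."""
    n = len(samples)
    bins = []
    for k in range(n // 2):  # only bins up to half the sample rate are meaningful
        re = sum(samples[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(samples[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        # Normalise by n so a full-scale sine shows a magnitude of 0.5 in its bin.
        bins.append((k * sample_rate / n, math.hypot(re, im) / n))
    return bins
```

Feed in a pure 4 Hz sine sampled at 64 Hz and the analysis shows a single strong bin at 4 Hz; feed in a full mix and you get the kind of frequency-versus-power picture the plug-in displays.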
Spatial Characteristics:
Even someone with a rudimentary understanding of how sound moves through their surroundings will know that a sound made in a small, windowless room sounds very different from the same sound in a large concrete car park. This sense of space is very important to hear and understand when critically listening to a piece of music. An engineer seeks to create this ‘panorama’ of sound depending on the emotion and/or sound stage they are trying to recreate. This may be to give the listener the impression that the music is being performed live in a small pub, a large open field or a timber-floored church. A good understanding of spatial characteristics is also used to create space, balance and clarity for each instrumental element in the overall mix. Three primary methodologies are used to create this sense of width and depth in a recording: reverb, delay and panning. Essentially, reverb is useful for building a feeling of depth or distance into the recording: the more reverb, the further away a sound appears.
The same can be said for delays, although very short delays can also make a sound seem bigger or fuller to a listener. The third effect, panning, brings width and gives the audience a sense of left and right, which is of course how our ears operate, being on either side of our head. This conveys movement, action and a surrounding excitement to the song, which can be used effectively to entice a listener.
Creating this panoramic space in a song brings life and a sense of movement or ‘location’ to the listener. It can also be effective in evoking feelings associated with that movement or location (basic panning diagram at right).
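On the panning side, a common approach is the constant-power (sin/cos) pan law, which keeps the overall loudness roughly even as a sound moves across the stereo field. A minimal Python sketch; the function name and the −1..+1 pan convention are my own choices:

```python
import math

def constant_power_pan(sample, pan):
    """Split a mono sample into left/right using a constant-power pan law.

    pan: -1.0 (hard left) .. 0.0 (centre) .. +1.0 (hard right).
    """
    # Map the pan position onto a quarter circle (0 .. pi/2) so that
    # left^2 + right^2 stays constant wherever the sound is placed.
    angle = (pan + 1.0) * math.pi / 4.0
    left = sample * math.cos(angle)
    right = sample * math.sin(angle)
    return left, right
```

At centre both channels sit at about 0.707 (−3 dB) rather than 0.5, which is exactly why a centred sound does not drop in perceived level compared to one panned hard to a side.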
In conclusion, whilst I have covered the three primary methodologies essential to critical listening, there are other elements, such as the timbre of the instrumentation, that will need to be discussed when writing a case study on a specific song. My case studies will also include objective song structure and other technical identifiers; the perceived meaning and relevance of the lyrics and style, based on the genre and historical placement of the piece; and some analytical listening, which will be more subjective. The core of the case studies, however, will consist of the Dynamic Range, Spectral Balance and Spatial Characteristics elements described above.
References
Corey, Jason. Audio Production and Critical Listening: Technical Ear Training. Burlington, US: Focal Press, 2012.
Compression image courtesy of https://www.slideshare.net/music_hayes/dynamic-range-compression retrieved 02/07/2017
Goose Bone Flute image courtesy of http://stravaganzastravaganza.blogspot.com.au/2012/06/bones-used-to-make-tools-games-jewelry.html retrieved 02/07/2017
Dynamic Range image courtesy of http://prosoundformula.com/how-to-master-a-song/ retrieved 02/07/2017
EQ Chart image courtesy of http://www.audio-issues.com/music-mixing/all-the-eq-information-youll-ever-need/ retrieved on 04/07/2017
Spectral Analyser image courtesy of https://www.izotope.com/en/products/repair-and-edit/rx/features/utilities-and-workflow.html retrieved on 04/07/2017
Panning image courtesy of http://www.uaudio.com/blog/studio-basics-mixing-stereo/ retrieved on 04/07/2017