This story was created for the Google Expeditions project by Twig World, now available on Google Arts & Culture
We’re going to visit Chem19, a modern recording studio near Glasgow, to see the techniques used to record music today.
The first audio recordings were made in the 19th century. We don’t have a record of how anything at all sounded before this – nobody knows how Henry VIII spoke or exactly what Roman music was like. But since the invention of audio recording, technology has advanced rapidly, changing music with it.
Although devices for studying sound waves came before Thomas Edison’s phonograph, his was the first machine that could record and play back sound. Using needles to record sound on tinfoil-coated cylinders, phonographs evolved into gramophones and, later, record players.
In the early 20th century, magnetic recording onto tape produced improvements in sound quality. Until digital recording became widespread in the 1980s, tape remained the dominant medium for recording sound.
Multitrack technology – recording sounds separately and playing them back together – was developed in the 1940s by US company Ampex and guitarist Les Paul. Before this, studios recorded musicians as they would play live. Afterwards, the possibilities were endless.
The gramophone or vinyl record remained the standard format for listening to music right up until the 1980s. Once thought a dead format, in recent years it has regained popularity, with sales rising dramatically.
New technology often helps the development of musical styles. Without the synthesiser, which can produce infinite artificial sounds, we wouldn’t have modern electronic music. Likewise, sampling – copying and reusing parts of recordings – was crucial to the development of hip-hop.
The CD (compact disc) was introduced in the 1980s. Boasting high-quality digital sound and supposedly near indestructible, it replaced records and cassettes as the standard music medium, before being itself overtaken by downloading in the 2000s.
With developments in technology, music software emulating hardware that once cost thousands of pounds is now a reality. As a result, people can in theory make studio-quality recordings on home computers – though studios are still specialist environments with trained engineers.
The MP3 format, which reduced audio files to sizes small enough to be downloaded, became popular in the 1990s – the first mass-market non-physical music medium. Today, we can stream from catalogues of millions of tracks via our mobile phones.
At Chem19, a band is due to record some songs. Before they arrive, Nick the recording engineer prepares the studio. He sets up the area for each musician. Different bands need different setups.
Today, Nick will be recording a band with a relatively standard line-up – they have a singer, a guitarist, a bass guitarist, a drummer and a keyboardist. He arranges microphones in order to capture the best possible recording of each performance.
Recording the Parts
Even when a band plays together in the same room, their instruments are recorded individually. This allows the producer to change the levels of specific parts. It also means parts can be rerecorded without the whole band needing to play again.
Miking the Drums
The drums are the heart of a band recording, and Nick takes special care when setting up the microphones to capture them. Each piece of the drum kit is individually miked, with a microphone overhead picking up the overall sound.
Baffles
Sound waves can reflect back off the walls, reducing the clarity of a recording. Acoustic baffles control these reflections. Baffles do different jobs – the holes here are designed to trap low-frequency sound that makes a recording sound “muddy”.
Running Through the Song
The band’s instruments are miked up separately, but they begin the recording session by playing together simultaneously. Nick hopes to hear some of the chemistry developed over many hours rehearsing and performing together.
After a quick run-through to check levels and the overall sound, Nick has the band play the song several times – each complete performance is a “take”. He’s listening for a performance that is tight but not monotonous, exciting but not sloppy.
The guitar is the harmonic focus of the song. It provides energy, noise and melodic hooks – memorable, catchy parts. Guitar playing falls into two main styles: rhythm (strummed chords) and lead (single notes played in sequence).
Bass is low-frequency sound. The bass guitar, usually 4-stringed, is played in step with the drums. Together they form what’s called the rhythm section of a band, helping to link the harmony and beat of a song.
The drum kit has several parts, mostly played with sticks (the bass drum uses a foot pedal). Most popular music uses a 4/4 time signature, which simply means that the beat of the song can be counted “1-2-3-4”.
Parts of the Drum Kit
The parts of a drum kit are: snare (a short, sharp thwack); bass or kick (a low boom); cymbals (a metallic swoosh); and hi-hat (tick-tick-tick). Bass drum on beats 1 and 3 and snare on beats 2 and 4 form the most basic drumbeat.
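The basic beat described above can be sketched as a step grid, the way a drum machine would store it. This is an illustrative sketch – the pattern below is the generic kick/snare/hi-hat beat, not any particular song.

```python
# A basic 4/4 drumbeat written as a 16-step grid (4 beats x 4 sixteenth notes).
# "x" marks a hit, "-" marks silence on that step.
pattern = {
    "hi-hat": "x-x-x-x-x-x-x-x-",   # steady ticks on every eighth note
    "snare":  "----x-------x---",   # beats 2 and 4
    "kick":   "x-------x-------",   # beats 1 and 3
}

def play_order(pattern):
    """Return, for each of the 16 steps, which drums are hit together."""
    steps = []
    for i in range(16):
        hits = [drum for drum, grid in pattern.items() if grid[i] == "x"]
        steps.append(hits)
    return steps

steps = play_order(pattern)
print(steps[0])   # step 1 (beat 1): kick and hi-hat together
print(steps[4])   # step 5 (beat 2): snare and hi-hat together
```

Reading down a column of the grid shows which parts of the kit sound at the same moment – exactly how a drummer thinks about locking the kick and snare against the hi-hat.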
The keyboard used here is an electric piano, which adds musical colour and detail. Keyboard players can add a huge variety of different tones to a band’s sound by playing synthesisers, which in theory make any imaginable noise.
Monitoring
Each of the musicians wears headphones, through which they hear the band’s performance with their own instruments turned up loudest, helping them to hear what they are playing. They control the mix with individual mixer systems.
In the Control Room
While the band plays, Nick sits in a soundproof control room. He hears only what comes through the microphones and into the mixing desk. Traditionally, the roles of recording engineer (responsible for technical operations) and music producer (the creative leader of a recording session) were separated. However, the line between them is increasingly blurred. Nick acts as both in this session.
Producing the Track
Nick guides the band through several takes of the track. His experience means he can give honest feedback about what is and isn’t working and suggest things for the band to change.
Digital Recording
Until the 1980s, most recordings were made onto tape. Now, the vast majority of recordings are digital, edited on a computer. There are many different music-editing software packages available, but the most commonly used in professional studios is called Pro Tools.
Recordings contain multiple instrument tracks. Before multitrack recording, entire performances were recorded with a single microphone – no part could be changed afterwards. Today, there is no practical limit to the number of separate tracks in a recording.
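Digitally, “playing tracks back together” is simply addition: each track is a stream of sample values, and the mix is their sum, with each track scaled by its own fader level. A minimal sketch – the tracks and gain values here are invented for illustration:

```python
# Mixing multitrack audio digitally is, at its core, sample-by-sample addition.
# Each track is a list of sample values; the mix is their weighted sum.

def mix(tracks, gains):
    """Sum several equal-length tracks, each scaled by its own gain (fader level)."""
    length = len(tracks[0])
    return [sum(g * t[i] for t, g in zip(tracks, gains)) for i in range(length)]

# Four samples per track, just to show the arithmetic (real audio has
# tens of thousands of samples per second).
guitar = [0.5, -0.5, 0.25, 0.0]
bass   = [0.2,  0.2, -0.2, -0.2]
drums  = [1.0,  0.0,  1.0,  0.0]

mixed = mix([guitar, bass, drums], gains=[0.8, 1.0, 0.6])
print(mixed)
```

Because mixing is just arithmetic on stored numbers, there is no physical limit on track count – adding a 65th track is the same sum with one more term.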
Effects Rack
The studio has a collection of effects units that alter sounds. Common effects include reverb (an echoey effect giving a sense of space), distortion (deliberate addition of noise), EQ (boosting or cutting specific frequencies of sound) and compression (evening out volume differences to add punchiness).
Overdubbing is a process that uses multitrack recording to record new parts along with material already recorded. Modern music would sound very different without overdubbing. It allows performers to play their parts independently of others until they are near perfect.
It also means that many more parts can be recorded than there are members of a band to play them. In fact, there are numerous examples of successful recordings in which no 2 parts were ever played simultaneously.
Vocals are the most important part of most popular music. They are usually recorded in a booth separate from the main recording area, to minimise the amount of sound from other instruments that “bleeds” into the mike.
Comping – short for compiling – involves singing multiple takes on separate tracks and assembling an ideal vocal track from bits of each one. Producers isolate individual lines, words and even syllables to create the illusion of a perfect performance.
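Comping is essentially a selection table: for each line of the song, note which take delivered it best, then assemble the winners. A toy sketch – the takes, quality labels and choices below are all invented:

```python
# Three takes of a four-line vocal. Each entry is a label standing in for
# the recorded audio of that line (invented for illustration).
takes = {
    1: ["line1-ok",    "line2-great", "line3-flat",  "line4-ok"],
    2: ["line1-great", "line2-ok",    "line3-great", "line4-sharp"],
    3: ["line1-ok",    "line2-ok",    "line3-ok",    "line4-great"],
}

# The producer's choices: which take supplies each of the four lines.
comp_choices = [2, 1, 2, 3]

# Assemble the comped vocal by pulling each line from its chosen take.
comped = [takes[take][i] for i, take in enumerate(comp_choices)]
print(comped)  # ['line1-great', 'line2-great', 'line3-great', 'line4-great']
```

No single take was perfect, but the comp is – which is exactly the “illusion of a perfect performance” described above.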
Microphones pick up blasts of air from “b” or “p” sounds (known as plosives), converting them into ugly popping sounds. To prevent this, singers use pop shields – mesh frames that block the blasts while allowing other sounds through.
The Mixing Desk
Mixing is when all of the individual tracks in a recording are carefully balanced with each other to create a finished song. Even though the tracks are recorded as digital files in a computer, mixing is usually carried out using a physical mixing desk, with its multitude of knobs and faders.
Although the desk can look complex and intimidating, the key to understanding it is to see that it is arranged into clear columns – one for each track/instrument.
Faders control the volume of individual tracks. They can be used to make tracks louder/quieter in certain song sections. In the past, this had to be done manually while a song played back – now it can be programmed.
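A programmed fader move is just a gain value that changes over time, multiplied into the samples. A minimal sketch of a linear fade-out – the sample values are invented:

```python
def automate_fade(samples, start_gain, end_gain):
    """Ramp the fader linearly from start_gain to end_gain across the samples,
    multiplying each sample by the gain value at its position."""
    n = len(samples)
    return [s * (start_gain + (end_gain - start_gain) * i / (n - 1))
            for i, s in enumerate(samples)]

# A constant tone fading from full volume (1.0) to silence (0.0).
faded = automate_fade([1.0, 1.0, 1.0, 1.0, 1.0], start_gain=1.0, end_gain=0.0)
print(faded)  # [1.0, 0.75, 0.5, 0.25, 0.0]
```

In the tape era an engineer performed this ride by hand on every playback; storing it as a programmed ramp makes it perfectly repeatable.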
Music is generally released in stereo – in 2 channels. Panning determines whether sounds are heard on the left, right or “middle” (both channels). Mixing involves positioning each instrument carefully so it has space to be heard – panning helps with this.
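Panning a mono track means splitting it between the left and right channels. A common scheme is equal-power panning, sketched below – the mapping from pan position to channel levels is a standard technique, but the function itself is an illustrative simplification:

```python
import math

def pan(sample, position):
    """Place a mono sample in the stereo field.
    position: -1.0 = hard left, 0.0 = centre, +1.0 = hard right.
    Equal-power panning: left^2 + right^2 is constant, so perceived
    loudness stays the same wherever the sound is placed."""
    angle = (position + 1) * math.pi / 4   # map [-1, 1] onto [0, pi/2]
    left = sample * math.cos(angle)
    right = sample * math.sin(angle)
    return left, right

l, r = pan(1.0, 0.0)    # centre: equal level (about 0.707) in both channels
hl, hr = pan(1.0, -1.0)  # hard left: full level left, silence right
```

Spreading instruments across different positions gives each one its own space in the stereo picture, rather than everything fighting for the centre.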
Effects such as reverb, EQ and compression are controlled in this part of the mixing desk. Effects might be applied in some parts of a song but not others – for example, to a chorus to make it sound more exciting.
Mastering is the last stage of the process, giving the music a final polish. Typically it involves a specialist engineer, who makes sure songs sound great in all formats – from radio and TV to cheap headphones and expensive hi-fi systems.