Unit 35 Task 1


All ‘sound’ is just our brain’s interpretation of vibrations made by external stimuli; this is the subject of acoustic theory. Vibrations created by a source, such as a knock on a door, travel outwards as sound waves. The higher the frequency of these waves, the higher in pitch the sound will be. For example, D and C in the same octave on a piano differ in that the wave we hear as D has a higher frequency than the one we hear as C.

A wave’s frequency, wavelength and amplitude are all linked: because sound travels through air at a roughly fixed speed, the higher a wave’s frequency, the shorter its wavelength, with the peaks and troughs packed closer together. A wave’s amplitude, meanwhile, translates in the ear to the loudness of the sound.
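To make this concrete, here is a minimal Python sketch of the frequency–wavelength relationship, assuming a speed of sound of roughly 343 m/s in air and standard concert tuning for the note frequencies:

```python
# Wavelength = speed of sound / frequency.
# 343 m/s is an approximation for air at around 20 °C.
SPEED_OF_SOUND = 343.0  # metres per second

for name, freq in [("C4", 261.63), ("D4", 293.66)]:  # middle C and the D above it
    wavelength = SPEED_OF_SOUND / freq
    print(f"{name}: {freq:.2f} Hz -> wavelength {wavelength:.2f} m")
```

Running this shows that the higher-pitched D has the shorter wavelength of the two.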

Acoustic theory is relevant to all sound sources, not just synthesised ones. Many instruments, such as the guitar and piano, employ strings; here, the pitch of a note is determined by how rapidly the string vibrates, since faster vibration produces a higher-frequency wave. Thicker strings vibrate more slowly, which lowers the pitch, while the length of the vibrating section of string also makes a difference (which is why you can change notes by holding strings down at different frets on an instrument like a guitar).
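As a rough illustration, the sketch below applies Mersenne’s law for an ideal string, f = (1/2L)·√(T/μ); the length, tension and linear-density figures are invented for the example, not measured from a real instrument:

```python
import math

def string_frequency(length_m: float, tension_n: float, linear_density_kg_m: float) -> float:
    """Fundamental frequency of an ideal string (Mersenne's law)."""
    return (1.0 / (2.0 * length_m)) * math.sqrt(tension_n / linear_density_kg_m)

open_string = string_frequency(0.648, 70.0, 0.0004)  # full vibrating length
fretted     = string_frequency(0.578, 70.0, 0.0004)  # same string, shortened at a fret
thicker     = string_frequency(0.648, 70.0, 0.0008)  # doubled mass per metre

print(f"open:    {open_string:.1f} Hz")
print(f"fretted: {fretted:.1f} Hz (shorter length -> higher pitch)")
print(f"thicker: {thicker:.1f} Hz (heavier string -> lower pitch)")
```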

When we create a synthetic sound for music or other purposes, we can employ additive synthesis, in which we add together multiple copies of (usually) the purest waveform, the sine wave, at different frequencies and amplitudes. This gives a sound character, so it doesn’t sound as plain as it would if it were made by just one wave. This is often exposed in the ‘voices’ section of a synthesiser, with which the music technician can decide how many partials should make up the sound they are sculpting.
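A minimal sketch of this idea in Python; the sample rate, partial count and amplitudes are illustrative choices, not taken from any particular synthesiser:

```python
import numpy as np

SAMPLE_RATE = 44100
t = np.linspace(0.0, 1.0, SAMPLE_RATE, endpoint=False)  # one second of time values

fundamental = 220.0  # A3, in hertz
partials = [(1, 1.0), (2, 0.5), (3, 0.33), (4, 0.25)]  # (harmonic number, amplitude)

# Additive synthesis: sum the sine-wave partials into one richer tone.
tone = sum(amp * np.sin(2 * np.pi * fundamental * n * t) for n, amp in partials)
tone /= np.max(np.abs(tone))  # normalise so the stacked waves don't clip
```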

Adding together sine waves at different intervals creates different kinds of sound: if the partials sit at harmonic intervals, the result will be brighter due to the subtle harmony between them, while inharmonic synthesis sounds darker due to subtle dissonance, which the human ear naturally finds somewhat unresolved and displeasing.

Partials at whole-number (integer) multiples of the lowest frequency are harmonic, while partials at non-integer multiples are inharmonic and less symmetrical in nature. Despite all these partials being added together, there is a fundamental frequency, which is the one we hear most prominently in a synthesised sound. So when you hear an A note on a flute, you’re actually hearing several other frequencies coming together to thicken the overall sound the flute makes; ‘A’ is simply the fundamental frequency.
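The sketch below contrasts the two cases; the inharmonic ratios are arbitrary non-integer multiples chosen purely for illustration:

```python
import numpy as np

SAMPLE_RATE = 44100
t = np.linspace(0.0, 1.0, SAMPLE_RATE, endpoint=False)
fundamental = 220.0

harmonic_ratios   = [1.0, 2.0, 3.0, 4.0]     # integer multiples: bright, consonant
inharmonic_ratios = [1.0, 2.13, 3.41, 4.87]  # non-integer multiples: darker, bell-like

def additive(ratios):
    """Sum partials at the given frequency ratios, quieter as they go up."""
    tone = sum(np.sin(2 * np.pi * fundamental * r * t) / (i + 1)
               for i, r in enumerate(ratios))
    return tone / np.max(np.abs(tone))

harmonic_tone = additive(harmonic_ratios)
inharmonic_tone = additive(inharmonic_ratios)
```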

Subtractive synthesis, meanwhile, is the process of filtering out certain partials of a sound in order to sculpt it. Subtractive patches are typically shaped over time with an ADSR envelope, which engineers a sound to behave more naturally; the envelope is most often applied to the sound’s volume or to a filter’s cutoff. ADSR is broken into four stages. Attack begins at the press of the key mapped to the sound and fades the sound in from silence. Decay then winds it back down to a stable level called the Sustain, which the note holds for as long as the key remains pressed. Finally, when the key is lifted, the Release stage fades the sound back down to silence. Here is an example of ADSR: https://soundcloud.com/lloy2-908796191/sets/examples This sample makes use of all the elements of ADSR.
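As a rough illustration of the four stages, here is a minimal ADSR envelope in Python applied to the amplitude of a sine wave; the stage lengths and sustain level are invented for the example:

```python
import numpy as np

SAMPLE_RATE = 44100

def adsr(attack_s, decay_s, sustain_level, release_s, hold_s):
    """Build an ADSR amplitude envelope as one array of gain values."""
    a = np.linspace(0.0, 1.0, int(SAMPLE_RATE * attack_s))            # fade in from silence
    d = np.linspace(1.0, sustain_level, int(SAMPLE_RATE * decay_s))   # fall to the sustain level
    s = np.full(int(SAMPLE_RATE * hold_s), sustain_level)             # hold while the key is down
    r = np.linspace(sustain_level, 0.0, int(SAMPLE_RATE * release_s)) # fade out on key release
    return np.concatenate([a, d, s, r])

envelope = adsr(attack_s=0.05, decay_s=0.1, sustain_level=0.7, release_s=0.3, hold_s=0.5)
t = np.arange(len(envelope)) / SAMPLE_RATE
note = envelope * np.sin(2 * np.pi * 440.0 * t)  # shape a 440 Hz sine with the envelope
```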

An envelope like this very much humanises the otherwise cold and robotic nature of a synth, and the result is comparable to an orchestral instrument played with legato articulation. If you listen to the way a violin is played, for example, you’ll notice the musician applying their own organic sort of ADSR to breathe life into the performance. You can see an example of this here: https://www.youtube.com/watch?v=fX2oi4uAPQQ

Meanwhile, the sound as a whole can be manipulated further using the filter cutoff. This technique shapes a sound so that only certain frequencies pass through. You might like the bass part of a synth sound but not the treble; one solution would be to use a low-pass filter to mute the offending high frequencies. Resonance, meanwhile, boosts a region of the frequency spectrum around the cutoff point: the closer a frequency is to the cutoff, the more it is amplified. An example can be heard here: https://soundcloud.com/lloy2-908796191/sets/examples
In this sample, the first few notes have no resonance, while the second part has the resonance turned all the way up to show the contrast.
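One way to sketch a resonant low-pass in code is the classic Chamberlin state-variable filter; the cutoff and resonance values below are illustrative, and white noise is used as input so the filtering is easy to hear:

```python
import numpy as np

SAMPLE_RATE = 44100

def resonant_lowpass(signal, cutoff_hz, resonance):
    """Chamberlin state-variable low-pass. resonance in (0, 2]; smaller
    values give a stronger peak near the cutoff frequency."""
    f = 2.0 * np.sin(np.pi * cutoff_hz / SAMPLE_RATE)
    low = band = 0.0
    out = np.empty_like(signal)
    for i, x in enumerate(signal):
        low += f * band
        high = x - low - resonance * band
        band += f * high
        out[i] = low
    return out

noise = np.random.default_rng(0).uniform(-1.0, 1.0, SAMPLE_RATE)
dull  = resonant_lowpass(noise, cutoff_hz=800.0, resonance=1.0)  # little emphasis
peaky = resonant_lowpass(noise, cutoff_hz=800.0, resonance=0.1)  # strong boost near 800 Hz
```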

Many synthesisers make use of a low-frequency oscillator (LFO). Essentially, this produces a frequency, usually below 20 hertz (the bottom of the audible range), that modulates the focal sound of the synth, giving it a rippling quality. The LFO can be mapped to different parameters: mapping it to amplitude creates a tremolo effect, where the loudness of the sound rises and falls, while mapping it to pitch creates vibrato. It can also be assigned to stereo panning, filter cutoff and so on.
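A minimal sketch of both mappings, assuming a 5 Hz LFO and a 440 Hz carrier (both values illustrative):

```python
import numpy as np

SAMPLE_RATE = 44100
t = np.linspace(0.0, 2.0, 2 * SAMPLE_RATE, endpoint=False)

lfo = np.sin(2 * np.pi * 5.0 * t)  # 5 Hz: well below the ~20 Hz audible floor

# Tremolo: LFO mapped to loudness, swinging the gain between 0.4 and 1.0.
tremolo = (0.7 + 0.3 * lfo) * np.sin(2 * np.pi * 440.0 * t)

# Vibrato: LFO mapped to pitch, wobbling the carrier by about 10 Hz.
vibrato = np.sin(2 * np.pi * 440.0 * t + 2.0 * lfo)
```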

ADSR envelopes and LFOs are both examples of automation that keeps a sound interesting even as it continues; the LFO essentially decorates the otherwise static sustain period of an envelope. Frequency modulation (FM) synthesis changes the timbre of a simple waveform, such as a square wave, by modulating its frequency with another oscillator in a similar audio range, creating a complex timbre; this is a bit like an LFO, but operating in a more audible part of the hearing range.
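A minimal FM sketch along these lines, with an illustrative carrier, modulator and modulation index:

```python
import numpy as np

SAMPLE_RATE = 44100
t = np.linspace(0.0, 1.0, SAMPLE_RATE, endpoint=False)

carrier_hz   = 440.0  # the pitch we hear
modulator_hz = 220.0  # audible-rate modulator, unlike a sub-20 Hz LFO
index        = 3.0    # modulation depth; higher values add more sidebands

# The modulator wobbles the carrier's phase at audio rate, creating
# sideband frequencies and hence a complex timbre from two sine waves.
fm_tone = np.sin(2 * np.pi * carrier_hz * t
                 + index * np.sin(2 * np.pi * modulator_hz * t))
```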

There is also granular synthesis, which is based on a similar principle to sampling but plays tiny ‘grains’ of the sampled audio, each lasting up to only around 50 milliseconds (1 ms being 1/1000 of a second). One can layer multiple grains on top of each other to create a new soundscape.
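A rough sketch of the idea, using a plain sine tone as a stand-in for a real sample; the grain size, grain count and source material are all illustrative:

```python
import numpy as np

SAMPLE_RATE = 44100
rng = np.random.default_rng(0)

# Stand-in source "sample": two seconds of a 330 Hz sine.
source = np.sin(2 * np.pi * 330.0 * np.linspace(0.0, 2.0, 2 * SAMPLE_RATE))

grain_len = int(0.040 * SAMPLE_RATE)  # 40 ms grains, under the ~50 ms ceiling
window = np.hanning(grain_len)        # fade each grain in and out to avoid clicks

output = np.zeros(2 * SAMPLE_RATE)
for _ in range(400):
    src_pos = rng.integers(0, len(source) - grain_len)  # where the grain is read from
    dst_pos = rng.integers(0, len(output) - grain_len)  # where it lands in the output
    output[dst_pos:dst_pos + grain_len] += window * source[src_pos:src_pos + grain_len]

output /= np.max(np.abs(output))  # normalise the layered grains
```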
