Music Production

As future music educators and technologists, it is imperative that we be familiar with music production. It is very likely that we will, someday soon, have to produce a professional-sounding recording of a school band, create backing tracks for students, or make voice-overs for school recordings – the possibilities are endless.

Data Types

When it comes to digital storage, music data is usually found in two different forms: MIDI or digital audio.

– MIDI

MIDI is, essentially, a series of instructions to be re-interpreted by another device that houses a soundbank. These instructions tell the device everything from which sounds in the bank to use to when those sounds should be activated and silenced. MIDI information is useful because of its universal accessibility and the myriad ways different programs can exchange and interpret the same MIDI file.
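To make the "series of instructions" idea concrete, here is a minimal Python sketch of my own (not taken from any particular MIDI library) that builds the raw three-byte note-on and note-off messages defined by the MIDI specification:

    def note_on(note, velocity, channel=0):
        """Build a 3-byte MIDI note-on message (status byte 0x90 + channel)."""
        return bytes([0x90 | channel, note & 0x7F, velocity & 0x7F])

    def note_off(note, channel=0):
        """Build a 3-byte MIDI note-off message (status byte 0x80 + channel)."""
        return bytes([0x80 | channel, note & 0x7F, 0])

    # "Play middle C (note 60) at moderate velocity, then release it."
    # The receiving device decides which sound in its bank these bytes trigger.
    print(note_on(60, 64).hex())   # 903c40
    print(note_off(60).hex())      # 803c00

Notice that no audio is stored anywhere – just six bytes of instructions, which is why MIDI files are so small and so portable.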

– Digital Audio

Digital audio refers to the recording, storage, and reproduction of an audio signal in digital form, as opposed to analog; basically, the music files on your computer or iPod instead of your cassette tapes or vinyl records. The key difference between the two is sampling: where analog audio preserves and reproduces the waveform as one continuous signal, digital audio measures (samples) the waveform at discrete points in time, with varying degrees of accuracy. My Technology for Music Educators professor, Dr. Craig Sylvern, explains digital audio in-(bit)depth in the following video.

The sample rate, or the number of samples taken per second, is measured in Hertz. Bit depth represents the vertical precision, or amplitude resolution, of the digital file; this is why digital audio can vary so widely in quality. Together, these two numbers describe the overall quality of any digital audio file. The higher the sample rate, the higher the frequency range of the recording: the highest frequency a file can represent is half its sample rate (the Nyquist limit). The higher the bit depth, the more dynamic range the file will have – each additional bit lowers the noise floor by about 6 dB. CD-quality digital audio has a sample rate of 44.1 kHz and an amplitude resolution of 16 bits, covering frequencies up to about 22 kHz, just beyond the range of human hearing.
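A quick way to see what bit depth actually does is to quantize a waveform yourself. Below is a small NumPy sketch of my own (the 440 Hz test tone and the chosen bit depths are illustrative) that rounds a sine wave to different bit depths and measures the resulting signal-to-noise ratio – each extra bit buys roughly 6 dB:

    import numpy as np

    fs = 44100                             # CD sample rate: 44,100 samples/second
    t = np.arange(fs) / fs                 # one second of sample times
    signal = np.sin(2 * np.pi * 440 * t)   # a 440 Hz sine wave (concert A)

    for bits in (8, 16, 24):
        step = 2.0 / 2 ** bits                       # size of one amplitude step
        quantized = np.round(signal / step) * step   # snap samples to the grid
        noise = signal - quantized                   # quantization error
        snr_db = 20 * np.log10(np.std(signal) / np.std(noise))
        print(f"{bits}-bit: SNR ≈ {snr_db:.0f} dB "
              f"(theory predicts about {6.02 * bits + 1.76:.0f} dB)")

    # Nyquist: the highest frequency a digital file can represent is fs / 2.
    print(f"Nyquist limit at {fs} Hz: {fs // 2} Hz")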

The most common storage format for digital audio is the ubiquitous MP3. MP3s are the most popular form of digital audio storage because of their economy: these files use lossy compression to take up less space, discarding some of the detail an uncompressed file would keep, but they do so in a very smart way. MP3 encoders omit sounds that typically aren't heard or won't be missed by most listeners, such as very high frequencies and quiet details masked by louder ones. By achieving a smaller size without destroying the integral content, MP3s have managed to become the format of choice for listeners everywhere.

The video below showcases the aural differences in digital audio quality – see if you can hear the subtleties, especially at the varying bit depths!

Processes

– Looping

Looping is a useful feature found in both high- and low-end production/recording programs. Whether you want to record your own musical loop or use one of the professional recordings included with many digital audio workstations, the possibilities are limitless; with loops, musicians and non-musicians alike can easily make quality-sounding mixes in a matter of minutes.
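Under the hood, a loop is just an audio buffer repeated end to end. Here is a toy NumPy sketch of my own (a synthesized "pluck" stands in for a recorded bar) of what a DAW does when you drag a loop out across eight bars:

    import numpy as np

    fs = 44100
    beat = np.arange(fs // 2) / fs          # half a second per beat (120 BPM)

    # A stand-in "recorded loop": a decaying 220 Hz pluck on each beat.
    pluck = np.sin(2 * np.pi * 220 * beat) * np.exp(-6 * beat)
    bar = np.concatenate([pluck] * 4)       # four beats to the bar

    looped = np.tile(bar, 8)                # repeat the bar eight times
    print(f"{looped.size / fs:.0f} seconds of music from a "
          f"{bar.size / fs:.0f}-second recording")

The same two-second performance becomes sixteen seconds of material, which is exactly why loops let non-musicians assemble convincing mixes so quickly.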

– Sequencing

Sequencing is another useful feature, not too dissimilar from looping, found in digital audio workstations. Analog sequencers, such as early drum machines, are not too popular anymore; sequencers nowadays are most often digital, either built into a DAW or sold as standalone programs. These digital sequencers let you input MIDI data and play that data back repeatedly in a sequence. Again, the creative possibilities available with sequencing are astounding, and it is imperative that we be familiar and knowledgeable about sequencing if we are to succeed as future music educators or technologists.
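Conceptually, a step sequencer is just a grid: each step holds either silence or a MIDI note, and playback walks the grid in time, over and over. Here is a minimal Python sketch of that idea (the note numbers follow the General MIDI percussion map; the pattern and tempo are made up for illustration):

    BPM = 120
    STEP = 60 / BPM / 4                  # sixteenth-note steps, in seconds

    pattern = [36, None, 42, None,       # 36 = kick, 42 = closed hi-hat
               38, None, 42, None,       # 38 = snare (General MIDI drum map)
               36, 36,   42, None,
               38, None, 42, 42]

    def render(pattern, loops=2):
        """Turn the grid into timed (seconds, note) events, repeating like a sequencer."""
        events = []
        for loop in range(loops):
            for i, note in enumerate(pattern):
                if note is not None:
                    events.append(((loop * len(pattern) + i) * STEP, note))
        return events

    for when, note in render(pattern)[:6]:
        print(f"t = {when:5.3f} s  ->  note {note}")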

– Signal Processing

One of the most important parts of music production is audio signal processing. Without signal processing, recordings can, and often do, sound flat or boring. With the power of signal processing, we are able to do all of the following and more:

  • monitor and change equalization levels (perhaps your school’s choir is a little heavy on the low end?)
  • add an echo, reverb, phasing, or flanger effect (maybe your high school's rock band recording could use some cool effects? Or perhaps that same choir didn't get a chance to perform in the otherwise reverb-heavy multi-purpose room today… add a reverb effect to their recording, and impress your students!)
  • add a filter effect (to block out certain frequencies, e.g. air conditioners or other auditory pollution in an otherwise controlled recording environment – see the sketch after this list)
  • add a pitch shift effect (even the best of us have less-than-favorable performances at times)
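As a concrete example of the filter effect above, here is a short sketch using SciPy's standard Butterworth filter design (the 60 Hz hum and the 120 Hz cutoff are illustrative values I chose; any DAW's high-pass filter does the same job with a knob instead of code):

    import numpy as np
    from scipy.signal import butter, lfilter

    fs = 44100
    t = np.arange(fs) / fs

    # Simulated recording: a 440 Hz "performance" plus 60 Hz air-conditioner hum.
    recording = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 60 * t)

    # 4th-order Butterworth high-pass at 120 Hz: passes the music, blocks the hum.
    b, a = butter(4, 120, btype="highpass", fs=fs)
    cleaned = lfilter(b, a, recording)

    # The hum's energy drops away, leaving the performance nearly untouched.
    print(f"RMS before: {np.std(recording):.3f}  after: {np.std(cleaned):.3f}")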

– Sound Design

While many universities offer multiple degrees in sound design, and the field gets more and more complex every day, it is still useful for us as music educators and technologists to know a little about, and feel relatively comfortable with, the topic. Sound design is a broad term, covering everything from ambience-centered composition to waveform structure and manipulation. I've included a couple of videos below by Moog and Dr. Joseph Akins of Middle Tennessee State University on the basics of sound design through the use of synthesizers. I found this series quite informative and useful when I first started learning about electronic music and sound design; check it out if you're curious about the process!
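The videos cover this far better than text can, but the two most basic ingredients of synthesizer-based sound design – an oscillator and an amplitude envelope – fit in a few lines of code. Here is a rough sketch of my own (a naive sawtooth plus a linear ADSR envelope; every parameter value is illustrative):

    import numpy as np

    fs = 44100

    def sawtooth(freq, seconds):
        """Naive sawtooth oscillator: the classic bright synth starting point."""
        t = np.arange(int(fs * seconds)) / fs
        return 2 * (t * freq % 1) - 1

    def adsr(n, attack=0.01, decay=0.1, sustain=0.6, release=0.2):
        """Attack-decay-sustain-release envelope: shapes loudness over time."""
        a, d, r = int(attack * fs), int(decay * fs), int(release * fs)
        s = max(n - a - d - r, 0)
        return np.concatenate([
            np.linspace(0, 1, a),          # attack: fade in
            np.linspace(1, sustain, d),    # decay: fall to the sustain level
            np.full(s, sustain),           # sustain: hold while the key is down
            np.linspace(sustain, 0, r),    # release: fade out
        ])[:n]

    tone = sawtooth(220, 1.0)
    note = tone * adsr(tone.size)          # the envelope shapes the raw tone
    print(f"peak amplitude {note.max():.2f}, ends at {abs(note[-1]):.4f}")

Swap the waveform, reshape the envelope, add a filter like the one above, and you have the beginnings of subtractive synthesis – the approach those Moog videos walk through.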
