
Audio Fundamentals Part 1 - Christie University

Objectives

Upon completion of Audio Fundamentals Part 1, you will be able to:

- Explain what audio is
- Describe acoustic sound waves, as well as rarefaction and compression
- Define frequency, amplitude and dB
- Compare and contrast analog and digital audio

What is Audio

Simply speaking, audio is a very fast-moving pressure wave, changing in height (amplitude, or volume) and width (wavelength, or frequency). This wave is generated by the vibration of an object; for example, the speakers in a sound system or simply a pencil dropping on the floor. Your eardrum "catches" the sound wave and vibrates because of the pressure waves.

The height of the sound wave is called amplitude, represented by the letter A in the image. Amplitude determines the volume. The further the wave moves away from the zero line, represented by the horizontal line in the image, the louder the sound will be.



The width of the sound wave is called wavelength, represented by the letter B in the image. Wavelength determines the frequency.

The audible range of human hearing is 20 to 20,000 Hz, or 20 kHz. This graphic shows a frequency of 1 Hz, or one cycle per second. The longer the wavelength, the lower the frequency; the shorter the wavelength, the higher the frequency.
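As a quick illustration of these relationships, here is a minimal Python sketch (not part of the original course material). It assumes sound travelling in air at roughly 343 m/s; the function names are purely illustrative.

```python
import math

SPEED_OF_SOUND = 343.0  # metres per second, in air at roughly 20 degrees C (assumed)

def wavelength_m(frequency_hz):
    """Wavelength in metres: speed of sound divided by frequency."""
    return SPEED_OF_SOUND / frequency_hz

def pressure(amplitude, frequency_hz, t):
    """Instantaneous value of an ideal sine pressure wave at time t (seconds)."""
    return amplitude * math.sin(2 * math.pi * frequency_hz * t)

# The longer the wavelength, the lower the frequency, and vice versa.
print(wavelength_m(20))      # ~17.2 m   (20 Hz, deep sub-bass)
print(wavelength_m(1000))    # ~0.34 m   (1 kHz, around the human voice)
print(wavelength_m(20000))   # ~0.017 m  (20 kHz, upper limit of hearing)
```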

Audio is two things:

1. A physical phenomenon, and
2. A perceptual phenomenon.

Physical Phenomenon

Audio has an acoustic element: sound waves moving through a physical medium, such as a solid, liquid, gas or plasma.

Audio also has an electronic or optical component: electrons and photons moving through a physical medium such as copper, fiber or air. Electronic and optical audio signals can be transmitted in a range of formats, such as analog or digital.

Perceptual Phenomenon

The psycho-acoustic effects of sound waves reaching a specific listener are defined by that listener's auditory system. This is experiential and highly subjective; the perception of sound waves can vary greatly by listener. For example, put on a piece of music that you like and see how your grandparents or great-grandparents feel about it.


The ability of listeners to perceive sound waves is both amplitude and bandwidth limited. You can hear only a limited range of frequencies and only a limited range of levels, and this varies with many factors, such as age.

Acoustic Sound Waves

Vibration from a physical source causes sound waves to propagate through a medium. Think, for example, of dropping a pebble into a pond and watching the ripples in the water. If you were to take a cross section of the ripples, they would look similar to the graph, which is a sine wave decreasing in amplitude.

In the first part of the course we mentioned that sound is created by a vibrating object. The vibrations of the object set particles in the medium into vibrational motion. For a sound wave traveling through air, the vibrations of the particles are best described as longitudinal. Longitudinal waves are waves in which the displacement of the medium is in the same direction as, or the opposite direction to, the direction of travel of the wave. A longitudinal wave can be created in a slinky if the slinky is stretched out horizontally and its first coils are vibrated horizontally. In such a case, each individual coil of the medium is set into vibrational motion in directions parallel to the direction that the energy is transported.


A vibrating tuning fork is also capable of creating a longitudinal wave. As the tines of the fork vibrate back and forth, they push on neighboring air particles. The forward motion of a tine pushes air molecules horizontally to the right, and the backward retraction of the tine creates a low-pressure area, allowing the air particles to move back to the left.

Because of the longitudinal motion of the air particles, there are regions in the air where the air particles are compressed together and other regions where the air particles are spread apart. These regions are known as compressions and rarefactions, respectively. The compressions are regions of high air pressure, while the rarefactions are regions of low air pressure. The diagram depicts a sound wave created by a tuning fork and propagated through the air in an open tube; the compressions and rarefactions are labeled.

To put this into the perspective of a speaker, this image shows the rarefaction and compression of a sound wave from a speaker.


In other words, sound waves propagating through a medium create alternating bands of high and low pressure, known as rarefaction and compression, at a given frequency and amplitude.

Frequency

The frequency of waves is measured in Hz (hertz), which equals cycles per second.

- The typically accepted bandwidth of the human auditory system is about 20 Hz - 20 kHz.
- 20 Hz is a very low, deep sub-bass.
- 20 kHz is a very high frequency that many people can't hear and that audio systems have a hard time reproducing.
- A high-quality audio system can reproduce the harmonics found in the 20 kHz range. This can induce an alpha-wave state in the brain; many people have an emotional response to that and can recognize whether a song is, for example, recorded or live.
- Frequencies below 20 Hz are known as infrasonic; frequencies above this band are known as ultrasonic.
- Lower frequencies have longer wavelengths, represented by the red wave in this image. Higher frequencies have shorter wavelengths, represented by the other colors.


Amplitude of Acoustic Sound

The amplitude of acoustic sound waves is measured in dB SPL (decibels Sound Pressure Level).

One of the most misunderstood concepts in audio is the decibel. A lot of people will say something like "that concert was 130 decibels," "that jackhammer down the street is 120 decibels," or "calibrate your cinema audio system at 85 decibels."

A dB (decibel) is a logarithmic ratio between two values. It is not an absolute value unless it is tied to a reference level.

dB SPL is tied to a specific reference level for sound pressure, measured in Pa (pascals). The typically accepted value for the threshold of human hearing is 0 dB SPL @ 1 kHz = 20 µPa. This is the benchmark where we agree there is a standard, just like we agree on what a lumen is.

Here's the logarithmic formula for Sound Pressure Level (SPL):

SPL (dB) = 20 × log10 (p / p_ref), where p_ref = 20 µPa
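As a sanity check on this formula, here is a minimal Python sketch (illustrative only; the function names are not from the course) that converts between pressure in pascals and dB SPL:

```python
import math

P_REF = 20e-6  # reference pressure: 20 micropascals = 0 dB SPL @ 1 kHz

def pascals_to_db_spl(pressure_pa):
    """SPL in dB = 20 * log10(p / p_ref)."""
    return 20 * math.log10(pressure_pa / P_REF)

def db_spl_to_pascals(db_spl):
    """Inverse: p = p_ref * 10^(dB SPL / 20)."""
    return P_REF * 10 ** (db_spl / 20)

print(pascals_to_db_spl(20e-6))    # 0.0    -> threshold of hearing
print(pascals_to_db_spl(1.0))      # ~94.0  -> 1 pascal
print(pascals_to_db_spl(101325))   # ~194.1 -> one atmosphere, the undistorted maximum
```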

This graph shows SPL on the vertical axis and frequency across the bottom, from 10 Hz, which is infrasonic, up to 100 kHz, which is ultrasonic.


The curve varies a lot. A very low frequency sound has to be at a pretty high level for you to even know that it's there. Our ears are very sensitive to the frequencies of the human voice, at or around 1 kHz; through evolution we have become very sensitive to other people's voices as a way of communicating. This is why the center channel is the most important channel in cinema: if dialogue doesn't sound correct, people will notice.

- The threshold of hearing varies by frequency and individual perception.
- The typically accepted value for the threshold of pain is 120 - 130 dB SPL.
- The maximum undistorted pressure at 1 atmosphere is 194 dB SPL; you cannot generate anything louder than this. The exceptions are events such as a nuclear explosion or a sonic boom.
- Every increase of 3 dB SPL equals twice the acoustic energy. For example, if you turn up the volume by 9 dB, that is 2 x 2 x 2 = 8 times the acoustic energy, as shown in the sketch below.
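The sketch below (Python, illustrative only) converts a dB increase into the corresponding acoustic power ratio:

```python
def db_increase_to_power_ratio(delta_db):
    """Acoustic power ratio for an increase of delta_db in dB SPL: 10^(dB/10)."""
    return 10 ** (delta_db / 10)

print(db_increase_to_power_ratio(3))   # ~2.0 -> 3 dB doubles the acoustic energy
print(db_increase_to_power_ratio(9))   # ~7.9 -> 9 dB is roughly 2 x 2 x 2 = 8 times
print(db_increase_to_power_ratio(10))  # 10.0 -> 10 dB is ten times the acoustic energy
```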

Amplitude Voltage

The amplitude of electronic audio signals is measured in dB (decibels) tied to a reference voltage:

- dBV (used in a lot of consumer equipment): voltage relative to 1 volt, without consideration of impedance.
- dBu (used in professional applications; the "u" stands for unloaded): AC voltage relative to 0.775 V, at any impedance.
- dBm: power relative to 1 milliwatt, typically at 600 ohm impedance.
- The standard signal reference level for analog pro audio is +4 dBu = 1.23 V.
- Every 6 dB increase in dBV or dBu equals twice the signal voltage; dBm is referenced to power, where a 3 dB increase equals twice the power. These conversions are shown in the sketch below.
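Here is a minimal Python sketch (illustrative only) that converts dBu and dBV values to volts, confirming the +4 dBu reference level mentioned above:

```python
def dbu_to_volts(dbu):
    """dBu is referenced to 0.775 V RMS: V = 0.775 * 10^(dBu/20)."""
    return 0.775 * 10 ** (dbu / 20)

def dbv_to_volts(dbv):
    """dBV is referenced to 1 V RMS: V = 10^(dBV/20)."""
    return 10 ** (dbv / 20)

print(dbu_to_volts(4))    # ~1.23 V  -> standard pro audio reference level (+4 dBu)
print(dbv_to_volts(-10))  # ~0.32 V  -> typical consumer line level (-10 dBV)
print(dbu_to_volts(-40))  # ~0.008 V -> roughly the level of a microphone signal
```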

Electronic/Optical Audio Signals

Acoustic sound waves can be transformed into electronic or optical audio signals, and these signals can in turn be transformed back into acoustic waves, by using a transducer such as a microphone or a speaker.

- Microphones convert sound waves into electronic or optical audio signals.
- Speakers convert audio signals from amplifiers into sound waves.


Analog Audio | Digital Audio

Analog Audio

Analog audio is an electronic audio signal analogous to the patterns of acoustic sound waves. There are five primary signal formats in widespread use:

- Speaker: up to 120 V, unbalanced, very low impedance
- Professional: +4 dBu, balanced, low impedance
- Consumer: -10 dBV, unbalanced, high impedance
- Instrument: ~-20 dBV, typically unbalanced, very high impedance
- Microphone: ~-40 dBu to -60 dBu, can be balanced or unbalanced, low or high impedance

Digital Audio

Digital audio is an electronic audio signal that uses a series of digital samples, typically with 16 to 24 bit resolution and sample rates between 44.1 kHz (the same as a CD) and 96 kHz, to very closely represent the original waveform of an analog signal. It requires A to D (analog to digital) conversion for storage and/or transmission, and D to A conversion for reception and/or playback.
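To illustrate what "a series of digital samples" means, here is a minimal Python sketch of A-to-D style sampling and quantization. The 1 kHz test tone, 48 kHz sample rate and 16-bit depth are assumptions chosen for the example, not requirements from the course.

```python
import math

SAMPLE_RATE = 48000  # samples per second (48 kHz, a common cinema rate)
BIT_DEPTH = 16       # bits of resolution per sample
FREQUENCY = 1000.0   # 1 kHz test tone
MAX_CODE = 2 ** (BIT_DEPTH - 1) - 1  # 32767 for signed 16-bit samples

def sample_and_quantize(duration_s):
    """Sample an ideal sine wave and quantize each sample to a signed integer."""
    num_samples = int(SAMPLE_RATE * duration_s)
    samples = []
    for n in range(num_samples):
        t = n / SAMPLE_RATE                             # time of this sample, in seconds
        analog = math.sin(2 * math.pi * FREQUENCY * t)  # continuous value in [-1, 1]
        samples.append(round(analog * MAX_CODE))        # quantized digital sample
    return samples

print(sample_and_quantize(0.001)[:8])  # first 8 of the 48 samples in one millisecond
```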

Digital audio may or may not use audio and/or data compression schemes.

Some common signal formats currently in widespread use:

- Atmos | Pulse Code Modulation (PCM), up to 128 channels (64 is the base number of channels), 24 bit audio, 48 or 96 kHz
- DCI-AES | PCM, up to 16 channels, 20 or 24 bit, 48 or 96 kHz
- AES3 | PCM, 2 channels in a pair, 16 - 24 bit, 32 - 96 kHz
- SPDIF | PCM or Dolby Digital or DTS, bandwidth varies
- HDMI | PCM or Dolby Digital or DTS, bandwidth varies

Every effort has been made to ensure the information in this document is accurate and reliable; however, in some cases changes in the specifications may not be reflected in this document. Christie reserves the right to make changes to specifications at any time without notice.

training.christiedigital.com Audio Fundamentals Part1 v1.0 Sept14