2018 Autumn Me 5241 Engineering Acoustics Course Notes

Download as pdf or txt
Download as pdf or txt
You are on page 1of 171

RL Harne ME 5241, Eng. Acoust.

2018 The Ohio State University

Course Notes for OSU ME 5241


Engineering Acoustics
Prof. Ryan L. Harne*

Department of Mechanical and Aerospace Engineering, The Ohio State University, Columbus, OH 43210, USA
*Email: [email protected]
Last modified: 2018-09-13 15:33

1
RL Harne ME 5241, Eng. Acoust. 2018 The Ohio State University

Table of contents

1 Course introduction 6
1.1 Scope of acoustics and answers to "why should we study acoustics?" 6
1.2 Fundamental sound wave propagation phenomena 8
1.3 Waves and sounds 8
1.4 Wave propagation phenomena 9
1.4.1 Wavefronts 9
1.4.2 Interference 9
1.4.3 Reflection 10
1.4.4 Scattering 10
1.4.5 Diffraction 11
1.4.6 Refraction 12
1.4.7 Doppler effect 13
1.5 Further resources for an acoustics introduction 14
2 Introduction to human hearing and influences of noise 15
2.1 Outer ear 15
2.2 Middle ear 15
2.3 Inner ear 17
2.4 Noise and its influence on hearing 18
2.4.1 Influences of ultrasound on human hearing 19
3 Mathematics background survey and review 20
3.1 Mathematical notation 20
3.2 The harmonic oscillator 20
3.3 Initial conditions 21
3.4 Energy of vibration 22
3.5 Complex exponential method of solution to ODEs 23
3.6 Damped oscillations 27
3.7 Harmonically forced oscillations 29

2
RL Harne ME 5241, Eng. Acoust. 2018 The Ohio State University

3.8 Mechanical power 33


3.9 Transfer functions 34
3.10 Linear combinations of simple harmonic oscillations 35
3.11 Further Examples 39
4 Wave equation, propagation, and metrics 41
4.1 One-dimensional wave equation 41
4.1.1 General solution to the one-dimensional wave equation 43
4.2 Harmonic waves 45
4.2.1 Harmonic waves in the complex representation 47
4.3 One-dimensional acoustic wave equation 48
4.3.1 Consolidating the components to derive the one-dimensional acoustic wave equation 52
4.4 Harmonic, plane acoustic waves 55
4.5 Acoustic intensity 57
4.6 Harmonic, spherical acoustic waves 58
4.6.1 Spherical wave acoustic intensity and acoustic power 60
4.7 Comparison between plane and spherical waves 61
4.8 Decibels and sound levels 62
4.8.1 Combining sound pressure levels 65
5 Elementary acoustic sources and sound propagation characteristics 66
5.1 Monopole and point acoustic sources 66
5.2 Sound fields generated by combinations of point sources 69
5.2.1 Directional wave propagation in geometric and acoustic far field acoustic wave radiation72
5.3 Source characteristics 76
5.4 Dipole acoustic sources 76
5.5 Reflection: method of images 78
5.6 Sound power evaluation and measurement 84
5.7 Outdoor sound propagation 89
5.7.1 Attenuation by the atmosphere 89
5.7.2 Attenuation by barriers 90
5.7.3 Total sound attenuation outdoors 90

3
RL Harne ME 5241, Eng. Acoust. 2018 The Ohio State University

6 Acoustics instrumentation, measurement, and evaluation 92


6.1 Microphones 92
6.1.1 Characteristics of microphones 94
6.1.2 Selecting a microphone for the measurement 100
6.2 Sound level meters 101
6.3 Frequency bands 102
6.3.1 Using octave and one-third octave bands in acoustic measurements 103
6.4 Weighting networks 104
6.5 Locations to accurately measure sounds 108
7 Acoustics in and between rooms 111
7.1 The transient sound field in a room 111
7.2 Absorption of acoustic energy in a room 114
7.2.1 Diffuse field sound pressure level 117
7.2.2 Dissipation by fluid losses 118
7.3 Contribution of acoustic energy from the direct and reverberant acoustic fields 119
7.4 Sound transmission through partitions 120
7.4.1 Practical material compositions for sound absorption and blocking 125
7.5 Sound transmission through flexible partitions, panels 126
7.5.1 Influence of mass on transmission loss 129
7.5.2 Coincidence 129
7.6 Sound transmission class, STC 132
7.6.1 Methods to enhance STC 134
7.7 Impact insulation class, IIC 135
7.7.1 Methods to enhance IIC 136
7.8 Flanking 137
8 Applications of acoustics: noise control and psychoacoustics 140
8.1 Engineering noise control 140
8.1.1 Source-path-receiver methodology to engineering noise control 140
8.1.2 Noise exposure 141
8.1.3 Development for and enforcement of noise control criteria 142

4
RL Harne ME 5241, Eng. Acoust. 2018 The Ohio State University

8.1.4 Vehicle noise 143


8.1.5 Speech interference 145
8.1.6 Noise criterion for rooms 147
8.1.7 Community reaction to noise 150
8.1.8 NIHL and occupational noise 151
8.1.9 Source-path-receiver methodology for noise control engineering 153
8.2 Psychoacoustics 154
8.2.1 Binaural hearing 154
8.2.2 Masking 163
8.2.3 The cocktail party effect 164

5
RL Harne ME 5241, Eng. Acoust. 2018 The Ohio State University

1 Course introduction

Acoustics is "a science [dealing with] the generation, transmission, and reception of energy as vibrational
waves in matter" [1]. Thus, waves may propagate through gas, liquids, and solids. In this course, we give
attention to waves that propagate through gases and liquids (collectively termed fluids). These waves are
often termed sounds.

Sounds are pressure changes in a fluid medium transmitted from a source, through the medium, to a receiver.

Many sounds result from the vibrations of structures or materials. A few exceptions may be identified, such
as those sounds resulting from implosions, like cavitation, or explosions, like the ignition of combustibles.

In general, for an introductory treatment of acoustics such as will be undertaken in this course only a few
mathematical preliminaries, likely encountered before in a dynamics-related course, are required before
embarking on one's first study in acoustics. We will complete this review in Sec. 3.

1.1 Scope of acoustics and answers to "why should we study acoustics?"

Lindsay's wheel of acoustics is shown in Figure 1. It features the scope of acoustics circa 1964. This scope
and relation among the technical areas is still relevant today. The outer, four fields (Arts, Engineering, Earth
Sciences, Life Sciences) are related to technical subject areas (outer ring) and technical disciplines (inner
ring), which all share underlying physics and fundamental principles (core). The distinction from 1964 to
today is that one may identify new technical disciplines for the inner ring, such as those that are multi-
disciplinary due to emergent trends and fabrication capabilities. Representative additional technical
disciplines may include thermo- and aeroacoustics such as those encountered in various vehicle systems
contexts on land and in air [2] [3], and may include subjects pertaining to microscale acoustics associated
with microfluidics and micro-manipulation practices [4] [5].

The Acoustical Society of America (ASA) has largely emulated the structure of Lindsay's wheel towards
the formation of the Technical Committees that help facilitate society activities and engagements,
http://asa.aip.org/committees.html. In this course, we focus on how these fields of Engineering, the Arts,
Earth Sciences, and Life Sciences utilize acoustical principles. Greater emphasis will be placed on problems
pertaining to Engineering although many of our topics are multi-disciplinary at the core. Our studies will
include subjects of hearing, noise, room acoustics, electroacoustics, sonic and ultrasonic engineering, and
psychoacoustics. To exemplify the importance for these diverse contexts of acoustics, a few examples are
worthwhile.

6
RL Harne ME 5241, Eng. Acoust. 2018 The Ohio State University

Figure 1. Lindsay's wheel of acoustics, adapted from R. B. Lindsay, J. Acoust. Soc. Am. 36, 2242 (1964).

Example: Effectively engineering the acoustic qualities of rooms is essential to promote effective speech
intelligibility [6]. In rooms with too little acoustically absorptive surfaces, the reverberation time of speech
will result in adverse reflections for the listener, making it challenging to understand the message. This
phenomenon is exacerbated in the presence of background noise. This course will describe the design and
implementation methods needed to correctly tailor the acoustic qualities of rooms, such as classrooms,
auditoriums, office spaces, concert halls, household spaces, and so forth, in the ways that promote speech
intelligibility and other relevant ergonomic factors.

Example: Humans have binaural hearing, meaning that two ears are used to hear which gives rise to an
ability to more effectively locate sources of sound. Interaural-time and -level differences, ITD and ILD,
respectively, are the principal factors that govern the ability to locate sound sources [7]. The field of
electroacoustics takes advantage of these factors to create virtual sound fields using a minimal number of
acoustic transducers, such as "surround sound" audio playback in movies. This course will describe the
fundamental principles that result in "steered" sound, which is a basis for acoustic signal processing
methods used in advanced electroacoustic systems. This course will also introduce the concepts of human
binaural hearing and the intriguing nuances that found our hearing sense.

A wealth of other examples of applications is available at http://acousticalsociety.org/education_outreach

7
RL Harne ME 5241, Eng. Acoust. 2018 The Ohio State University

1.2 Fundamental sound wave propagation phenomena

Before diving in, it is valuable to learn about the basic phenomena of sound wave propagation at a high
level [8]. We will return to certain of these concepts in detail and in mathematical ways later in the course.

1.3 Waves and sounds

Waves in fluids result from the oscillation of fluid particles due to pressure differences on either sides of
the particles. In a significant proportion of applications, these particle oscillations are so insignificantly
small with respect to the length of the acoustic wave (wavelength) that we classify these oscillations as
linear. Thus, linearized analyses of the sound wave propagation are justified. Consequently, studies of many
wave propagation phenomena are greatly eased.

Many sound waves originate from interaction with vibration structural surfaces. The pressure differences
in the fluid may result from the vibration of a structural surface that pushes against the adjacent fluid. For
instance, one can imagine a machine being turned on that "makes noise". The surfaces of the machine that
interface with the fluid are moving, which gives rise to fluid-structure interactions at the surface. The fluid
particles adjacent to the surface will oscillate at the same frequency as the frequency of the machine surface
vibration, according to a continuity of fluid-structure displacement at the interface. Consequently, pressure
differences are created in the fluid adjacent to the machine that radiate away from the machine as acoustic
waves. This fundamental wave behavior is illustrated in Figure 2.

Figure 2. Sound waves created from vibrating surfaces. The colored fluid particles are the same from one time to the next.

The study of vibrations is distinguished from waves by the example shown in Figure 2. Namely, vibrations
are associated with the oscillation or general dynamics of a body, for instance the structural surface of the
machine or an individual fluid particle. On the other hand, waves are associated with both the oscillations
of the body and the resulting spatial transmission of energy in the form of a wave. In Figure 2, the colored
fluid particles are the same from t1 to t2 . These particles, like the neighbors, merely oscillate back and
forth. Yet, due to phase relationships of the oscillations, a wave is generated that transmits energy
(sometimes termed information) through space.

8
RL Harne ME 5241, Eng. Acoust. 2018 The Ohio State University

These are distinct phenomena that do not have counterparts in a study only of vibrations. Vibrations of a
body may be described by oscillations in time, whereas wave propagation involves oscillations of many
bodies through space and time.

1.4 Wave propagation phenomena

1.4.1 Wavefronts
When waves propagate, a wavefront is generated. The wavefront denotes a location of constant phase of
the wave that moves through the fluid. Oftentimes the wavefront is considered as a local maxima of the
wave.

In an open field or volume of fluid, a sudden point source of acoustic pressure will generate a travelling
wavefront that diminishes in amplitude in time due to spherical wave spreading, Figure 3 at left.

For a continuously generated, single frequency wave, a wavelength is identified. This is analogous to the
time length from peak-to-peak of a sinusoidal oscillation signal, but now one considers the spatial length
from peak-to-peak of the wave to define the wavelength. These features are shown at middle and right in
Figure 3.

Figure 3. Propagating wavefronts.

1.4.2 Interference
When wavefronts are created from multiple locations in a volume of fluid, the wavefronts interact when
arriving at the same place at the same time. Depending on the phases that occur at the locations of wave
interaction, different types of interference phenomena will be observed.

Constructive interference occurs when the peaks of the waves combine at the same location.

Destructive interference occurs when the trough from one wave combines with the peak of another wave.

Examples of constructive and destructive interference are shown in https://youtu.be/fjaPGkOX-wo, Figure


4. Two sources of waves are present in the fluid. The total wavefronts generated after a long time has
elapsed are shown. The locations shaded with lightest green or darkest black are where constructive

9
RL Harne ME 5241, Eng. Acoust. 2018 The Ohio State University

interference has occurred. The locations of mid-green shading are where destructive interference has
occurred to effectively blur and greatly diminish the observation of waves.

Figure 4. Snapshot of time series from video https://youtu.be/fjaPGkOX-wo.

1.4.3 Reflection
When a wavefront arrives at a discontinuity in the fluid medium, a portion of the wave is reflected and a
portion of the wave is transmitted. When the discontinuity is a rigid boundary (also termed barrier) all of
the wave energy is reflected. One example of a nearly-rigid boundary is a concrete wall adjacent to an air
volume. Such near-total reflection is shown in the video https://youtu.be/8LrrWvfyqLo around time 3:25.

When the discontinuity is a soft boundary adjacent to a wavefront originally in air, such as sound in air
arriving at a grassy plain surface, the reflected wavefront does not have the same magnitude as the incident
wavefront due to partial transmission of the wave into the adjacent media. In the example of air and a grassy
plain, the adjacent media is the soil which acts a lot like a quasi-fluid due to the granular composition of
soil.

Reflected waves have a phase that depends on the difference in composition from the incident fluid medium
to the second fluid medium of wave impingement. When the second fluid medium is "harder" than the
incident fluid medium, the reflected wave will be in phase with the incident wave. When the second fluid
medium is "softer", the reflected wave will be out of phase.

1.4.4 Scattering
Wave reflections create effective new sources of waves at the reflection interface. For spherical waves
impinging upon a number of fluid discontinuities, the large number of reflections are collectively termed
the scattered waves. Examples of scattering may include reflections of road noise from a sequence of fence
posts or underwater sonar impinging on a school of fish. The total wavefield will be complex due to the
combinations of reflections and the directly radiated sound from source to receiver. Occasionally, sounds
combine with frequency-selective characteristics such that an original incident sound will be spectrally
filtered once the reflection is heard with the directly radiated waves.

10
RL Harne ME 5241, Eng. Acoust. 2018 The Ohio State University

1.4.5 Diffraction
When a wavefront arrives at an aperture or a partial rigid barrier, the wavefront will continue to propagate
in a manner that depends on the aperture/barrier dimension with respect to the wavelength.

When the aperture has dimension similar to or smaller than the wavelength, a bending of the waves occurs
at the edges of the aperture. This wave bending phenomena is termed diffraction. When the aperture has a
dimension much greater than the wavelength, the waves are mostly blocked by the aperture perimeter
whereas waves not directly incident on the impediment pass mostly unaffected. These characteristics are
seen in the video https://youtu.be/IoyV2dljw18.

These characteristics have analogs when the waves impinges on a rigid barrier. These scenarios are often
encountered in outdoor acoustics contexts. For barriers, waves with low frequencies and long wavelengths
respecting the barrier thickness easily bend around the barrier. High frequency waves that have wavelengths
less than the barrier thickness are mostly blocked by the barrier whereas partial bending of waves occurs at
the barrier edges.

Consider Figure 5 which provides a schematic of the situation. The rigid barrier creates an acoustic shadow
in some regions opposing an incident plane wave, but diffraction occurs at the edge. The theory of
diffraction is associated with Fresnel and Huygens [9].

Figure 5. Illustration of diffraction at high (at left) and low (at right) frequencies.

As illustrated in Figure 5, the influence of diffraction depends on the wavelength λ of sound with respect
to the barrier thickness. When the wavelength is large with respect to the barrier thickness, there will be
significant diffraction. In the limit of extremely large wavelength with respect to the barrier thickness, it is
as if the barrier is not even there (acoustically) because the pressure is nearly uniform at the front and back
of the barrier. Thus, low frequency sound energy easily transmits around large, rigid barriers because the
wavelength is much greater than the barrier thickness.

When the acoustic wavelength is small with respect to the barrier thickness, then there will be little
diffraction and a significant acoustic shadow will occur behind the barrier. This leads to a blocking effect
behind the barrier. In general, roadside concrete barriers are made as thick as possible since road noise
(often dominated by tire-road interaction) is predominantly at frequencies around 300 Hz and less.

11
RL Harne ME 5241, Eng. Acoust. 2018 The Ohio State University

Diffraction is generally observed in all wave physics. The approximate ratio of wavelength λ to source
dimension L at which the trends identified above change is λ / L ≈3/2. Thus, for larger ratios, there is greater
diffraction, and for smaller ratios the shadowing effect may be more influential and diffraction plays a lesser
role.

Interesting illustrations of diffraction in light and water waves, can be found on many websites including
https://en.wikipedia.org/wiki/Diffraction. A close search on Google maps will reveal signs of diffraction,
Figure 6.

Figure 6. Diffraction in ocean waves. Images from Google Maps.

1.4.6 Refraction
When a wavefront arrives at an interface from one fluid medium to another fluid, the transmitted wave into
the adjacent fluid will propagate in a direction that may be shifted from the original path. This alteration of
direction is due to the difference between fluid media properties. This is well known to us in optics because
objects in water appear to be slightly shifted respecting the precise physical position when we view them

12
RL Harne ME 5241, Eng. Acoust. 2018 The Ohio State University

from outside the water in air. The same phenomena happens with sound waves and its relevance is critical
in imaging practices that rely on information from the reflected wave sequences to reconstruct the domain
into which the waves propagated. Thus, refraction is particularly important in practices of medical
ultrasonics and underwater sonar.

1.4.7 Doppler effect


The speed at which acoustic waves travel is much less than the speed of light. This is true whether the fluid
medium is air (sound speed around 343 [m/s]) or water (sound speed around 1500 [m/s]). Thus, when a
sound source generates periodic sounds and moves through the fluid at a rate that is a meaningful proportion
of the sound speed, the received sound signals will be distorted in frequency and amplitude to a listener.

Consider the situation shown in the schematic of Figure 7. The relation for the perceived sound frequency
f (in [cycles/second] = ]Hz]) of a periodic pressure wave occurring with a frequency f 0 is

( c + Vr ) f0 / ( c + V ) where
f= Vr is the relative velocity of the receiver (person in the Figure 7) to the

medium, c is the sound speed of the acoustic fluid, and V is the source relative velocity to the medium.
These relative velocities are given as

Vr , positive if the receiver moves toward the source (and negative if the receiver moves away from the
source)

V , positive if the source moves away from the receiver (and negative if the source moves toward the
receiver)

From this relation, for a fixed receiver location, the frequencies of incoming and outgoing sound sources
directly at the receiver location are respectively heard at higher and lower frequencies by the receiver, with
the new incoming/outgoing frequencies computed by the relation in the paragraph above. But, we do not
ordinarily experience this effect of two distinct frequencies for incoming and outgoing sound sources.
Instead, our experience with the Doppler effect is often a smooth variation in frequency from a low to high
frequency as the sound source passes by, such as a siren from an emergency vehicle. This is because sound
sources do not pass immediately through us as the waves transition from incoming to outgoing; instead
sound sources pass by at a distance. Therefore, the effective source velocity to use in the relation is
Vs ,e = Vs cos θ where θ is the angle between the sound source forward velocity and the line of sight from
the source to the receiver.

It must be emphasized that mean fluid flow does not cause Doppler shift. This is because both source and
receiver exhibit the same deviation of velocity with respect to the fluid medium by virtue of a mean fluid
flow. Thus, wind does not cause Doppler shift.

13
RL Harne ME 5241, Eng. Acoust. 2018 The Ohio State University

Figure 7. Illustration of Doppler effect in time and space domain, re-drawn from that shown in [8].

1.5 Further resources for an acoustics introduction


There is a bibliography of key textbooks and sources at the end of these notes. Other web-based acoustics-
relevant resources are provided in the list below.

• The Journal of the Acoustical Society of America http://scitation.aip.org/content/asa/journal/jasa


o JASA is the home of the most eclectic variety of acoustics investigations including animal
bioacoustics, biomedical acoustics, noise, signal processing, structural acoustics and
vibration, to name only a few fields. The bi-annual ASA conferences are always a gathering
of a diverse group of scientists and researchers, which creates a stimulating conference
environment rarely found in other more focused scientific disciplines.
• IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control
http://ieeexplore.ieee.org/xpl/RecentIssue.jsp?punumber=58 and IEEE Transactions on
Biomedical Engineering http://ieeexplore.ieee.org /xpl/RecentIssue.jsp?punumber=10
o These journals are the premier outlets in the IEEE scientific communities pertaining to
acoustics, which are significantly focused in the area of ultrasonic engineering for
biomedical practices.
• Applied Acoustics http://www.journals.elsevier.com/applied-acoustics/
• ASME Journal of Vibration and Acoustics
http://vibrationacoustics.asmedigitalcollection.asme.org/journal.aspx
• Journal of Sound and Vibration http://www.journals.elsevier.com/journal-of-sound-and-vibration
• Noise Control Engineering Journal http://ince.publisher.ingentaconnect.com/content/ince/ncej
• Sound and Vision Magazine http://www.soundandvision.com/
• Audioholics Magazine https://www.audioholics.com/

14
RL Harne ME 5241, Eng. Acoust. 2018 The Ohio State University

2 Introduction to human hearing and influences of noise

Humans hear with ears, Figure 8. Ears are intricate neuro-electro-mechanical systems. For those with
greater interest in the topic, a good read on the subject is Ref. [10].

The anatomy of the ear can be classified as components existing in the outer ear, the middle ear, and the
inner ear. In Figure 8, these regions of the ear are shown by the differently colored outlines. The following
Secs. 2.1 to 2.3 provide greater detail on the human hearing sense. In brief, the different contributions of
each part of the human ear are

• Outer ear: sound collection


• Middle ear: impedance matching
• Inner ear: decoding the mechanics and waves to "hearing"

2.1 Outer ear

The first role of the outer ear is to act in a similar, but reversed, manner as megaphones. Namely, the pinnae
(labeled auricles in Figure 8) collect the sound and guide it into the auditory canal. The auditory canal is
around 7 [mm] in diameter and 27 [mm] in length. The shaping of pinnae enhances our high frequency
perception of sound due to more effective reflections of higher frequency sounds into the auditory canal
(see also Sec. 8.2.1.2). Due to resonances associated with the geometry of the "cylindrical waveguide", the
auditory canal provides additional gain on the order of 10 to 15 [dB] in the frequency range of 2 to 6 [kHz].

Ultimately, there is low efficiency of energy transfer from the outside air all the way to the back of the
auditory canal, especially for low frequency sounds.

At the opposite end of the auditory canal as the pinna is the tympanic membrane, often referred to as the
eardrum. It is a literal membrane approximately 50 to 90 [mm2] in area, which is about the size of a pencil
eraser area. Due to acoustic pressure waves that travel through the auditory canal and interact with the
structure of the tympanic membrane, the eardrum vibrates in a way similar to conventional
electromechanical microphones.

2.2 Middle ear

The middle ears consists of several linkage and muscle components within a cavity of volume on the order
of 2 [cm3]. In one role, the middle ear is a transformer of work, similar to electrical transformers. The
ossicles of the middle ear include the malleus, incus, and stapes. These ossicles mostly undergo rigid-body
translations and rotations to transfer the mechanical vibrations of the eardrum to the oval window of the
cochlea in the inner ear. The ossicles act like a gearbox or transmission from the auditory canal side to the
cochlea side.

The cochlear fluid is water-like in impedance, while the acoustic "fluid" on the other side of the tympanic
membrane is often air which has an impedance significantly less than water. There is consequently a large
impedance mismatch between the air in the auditory canal and fluid of the inner ear, which inhibits energy
transfer. Thus to make the transfer of the acoustic energy more efficient into the cochlea, the kinematics of

15
RL Harne ME 5241, Eng. Acoust. 2018 The Ohio State University

the ossicles provide motion amplification mechanisms. Because the middle ear exists in a cavity, an
enclosure-type resonance occurs that for humans is in the range of 4 [kHz]. The consequence is that the
transformer functionalities are uniformly amplified for frequencies.

Yet, the middle ear is not entirely a kinematic mechanism. Muscles governing the motions of the malleus
at the eardrum and stapes at the oval window provide several active control functions. Namely, the muscles
of the middle ear contract in a reflexive action due to sudden, loud sounds as well as due to vocalization
which prevents our ears from being overwhelmed by our own voices when we speak. Also, low frequency
vibrations of the eardrum are amplified by the muscles so as to increase the transformer effect into the
cochlea.

In these ways, the ossicles and muscles of the middle ear help to better match the impedance from the
auditory canal side to the cochlea side of the middle ear via both active and passive mechanisms.

Finally, the middle ear contains the Eustachian tube (labeled auditory tube in Figure 8). This "valve"
compensates for static changes in acoustic pressure on the auditory canal side of the tympanic membrane
by providing a direct passage to the nasal cavity. Thus, the "ear-popping" effect experienced when
undergoing changes in elevation that correspond to change in the static, equilibrium pressure is due to the
Eustachian tube facilitating a pressure equalization between the middle ear and outside environment (which
is the same static pressure in the auditory canal).

Figure 8. Brief anatomy of the human ear [11].

16
RL Harne ME 5241, Eng. Acoust. 2018 The Ohio State University

2.3 Inner ear

The inner ear is the region of the ear where neurology couples with the mechanical and acoustic physics,
thus empowering the hearing sense. The stapes in the middle ear interfaces to the oval window of the
cochlea, which is the direct line of conversion between mechanical energy in the middle ear to the cochlear
fluid pressure fluctuations in the middle ear.

The cochlea is a coiled organ of approximately 34 [mm] length that contains the basilar membrane along
the length dimension. Semicircular canals are also a part of the cochlea. These structures have nothing to
do with the hearing sense and are instead a means for humans to detect accelerations of the head so as to
assist in the process of physical balance and related muscular coordination.

The basilar membrane is effectively shown as the central region of the cochlea in Figure 9. The basilar
membrane is widest at the end termed the helicotrema than at the end connected to the oval window, which
conflicts with the scale of the schematic shown in Figure 9. The motions of the oval window, caused by the
stapes motion, induce cochlear fluid pressure fluctuations and subsequently traveling waves in the basilar
membrane. The distance that such waves travel along the basilar membrane is directly related to the
frequency of the sound: higher frequencies propagate shorter distances while low frequencies travel further
along the coiled membrane.

It is believed that hearing loss at higher sonic frequencies, e.g. >10 [kHz], commonly encountered with
increasing age, is due to greater fatigue of the membrane near to the oval window where all sonic
frequencies cause basilar membrane wave motions [10].

Figure 9. Schematic of the uncoiled cochlea [10].

As the waves travel along the basilar membrane, a location of maximum activation of the membrane is
observed. The location of heightened activation corresponds to the sensed frequency of acoustic pressure
stimulation. This can be conceived of as a local "resonance" along the basilar membrane. Similar to the
wave travel principle along the membrane, higher frequencies more highly stimulate the side of the basilar
membrane nearer to the oval window, while low frequency pressure waves more highly stimulate the side
of the basilar membrane nearer to the helicotrema.

Strictly speaking with the terms of structural dynamics, the basilar membrane is a membrane supported on
an elastic foundation. In this case, the elastic foundation is termed the organ of Corti. This organ directly
interfaces hair cells (visually similar to hairs) between the basilar membrane and tectorial membrane. The

17
RL Harne ME 5241, Eng. Acoust. 2018 The Ohio State University

relative motions of the basilar and tectorial membranes shear the hair cells. Each hair cell is connected to
nerve fibers that fire electrical impulses to the brain in consequence to the hair cell deformation.

Beyond this process, many unknowns still exist regarding the neurological underpinnings of hearing [10].
Nevertheless, sufficient important knowledge is known regarding the inner ear actions and the hearing
sense.

Namely, the basilar membrane is most greatly activated in the frequency range of 20 to 20k [Hz]. This is
the range of frequencies that humans can hear and is termed the sonic range of frequencies to denote that
humans recognize such pressure fluctuations as sound. Acoustic pressure stimuli at frequencies below this
bandwidth are insufficiently amplified in the middle ear or are observable more like DC components
respecting the equalization provided by the auditory tube. Frequencies greater than this range are similarly
filtered by the mechanics of the middle ear and lack of resonance-like regions on the basilar membrane that
may be excited to sufficient extent to "hear" frequencies greater than 20 [kHz]. Also, there are direct
correspondences to heightened resonance-like action of the basilar membrane, to increased shear stress on
local hair cells, to the frequency (termed "pitch" when discussing audiology) that is sensed. Thus, as an
example, hearing low frequencies amounts to low frequency acoustic stimuli that, following passage
through the outer and middle ears, cause traveling waves towards the end of the basilar membrane near the
helicotrema where the membrane is more greatly set to motion at the local level, which gives rise to large
stresses on the local hair cells, which send off neurological impulses to the brain.

From the physics of acoustics, to the structural dynamics of membranes, to the kinematics of linkages, to
the neuro-electro processes associated with biological matter: the human ear is a remarkable system!

2.4 Noise and its influence on hearing

Some sounds are considered to be noise. Noise is sound that is unwanted, either because of its effect on
humans, its effect to fatigue or malfunction of physical equipment, or its interference with the perception
or detection of other sounds [12]. Noise has physiological, psychological, financial, and structural
implications (there may be others!) [13]. To name a very few examples of its importance and impact, noise

• Causes tinnitus (hearing loss)


• Results in increased mental stress and reduced ability for human recovery from illness and injury
• Inhibits communication
• Influences purchasing decisions
• Contributes to structural degradation, wear, fatigue, and failure

The influence of noise on human hearing cannot be overstated. The hair cells that facilitate the mechanical-
neurological energy transfer may be damaged when overstrained to a large extent by large amplitude
oscillation of the organ of Corti. Such large amplitude oscillations occur due to large amplitude acoustic
pressures outside of the human ear. The hair cells are not regenerative, meaning that damage is permanent
and hair cell "death" via significant damage is not able to be naturally remedied by the human body.

18
RL Harne ME 5241, Eng. Acoust. 2018 The Ohio State University

Noise-induced hearing loss (NIHL) is thus due to overstimulation of the hair cells and can occur on time
scales ranging from many years to instantaneously, according to the amplitude of the acoustic pressure
fluctuation outside of the ear. For long-time scale hearing loss associated with day-to-day hearing over the
duration of a couple decades, natural hearing loss affects the very high frequency range of human hearing,
such as 14 to 20 [kHz]. Many young adults by mid-20s cannot greatly hear frequencies around 16 [kHz]
and up. The more pervasive NIHL due to explicit noise exposure is in the spoken frequency range of 200
to 4k [Hz], so that communication is specifically inhibited by an inability to hear [14].

These facts have motivated many decades of efforts to develop systems and practices that involve sounds
in the ways that least affect the human hearing sense in negative ways. For instance, creating building
environments that are not unnecessarily reverberant that may cause annoyance, designing reciprocating
motors that radiate the least noise under load, and even design alert sounds that are the most recognizable
to humans as sounds of danger without relying strictly on excessively loud tones [15].

2.4.1 Influences of ultrasound on human hearing


Humans hear pressure waves that fluctuate with frequencies from 20 to 20k [Hz]. Above this bandwidth,
the basilar membrane in the ear does not respond in a resonant-like way as it does for the sonic frequency
range. Thus, frequencies of sound greater than 20 [kHz] are termed ultrasonic.

Thus, it may be asked whether or not human hearing may be damaged from exposure to ultrasound. This
question is especially meaningful because human hearing damage is simply a matter of fatigue to the hair
cells, and because human hearing is lost in the higher frequency ranges at earlier times due to greater fatigue
of the inlet side of the cochlea near the oval window that is excited by all frequencies of sound.

In fact, this is a subject needing greater research attention due to the lack of knowledge on the topic [16] as
well as growing number of commercial, medical, and industrial applications that involve ultrasound. One
aspect to consider is that ultrasound waves are more easily attenuated by in-air travel as well as when
propagating through other media, including the body. This means that hearing damage by ultrasound is
more relevant due to direct line-of-sight between a transducer emitting ultrasonic frequency waves and the
human ear, in contrast to, for instance, an application of ultrasound for medical imaging purposes on other
parts of the human body.

High amplitude ultrasound is reported to cause human discomfort, nausea, and other subjective responses.
This suggests that humans are indeed influenced by the pressure fluctuations despite a lack of neurological
activation that would be called 'hearing' [16].

19
RL Harne ME 5241, Eng. Acoust. 2018 The Ohio State University

3 Mathematics background survey and review

3.1 Mathematical notation

In the course notes, the following mathematical notations will be used consistently.

We will always use j to denote the imaginary number, j= −1 .

Bold mathematics denote complex numbers, for example =


k 8.0 + j 2.3 , d = De jωt . If written out by hand,
we use an underline to denote a complex number, for example =
k 8.0 + j 2.3 , d = De jωt .

,u
Mathematics with overbar denote vectors, for example v = 2.1i + 1.2 j + 2.3k = u x ex + u y ey .

Bold and barred mathematics denote complex vectors, for example=u (u i + u j ) e


x y
jωt
.

3.2 The harmonic oscillator

We review the mathematical foundations of mechanical vibrations analysis that you previously encountered
in a "System Dynamics" course or similar.

Consider Figure 10 which shows the schematic of a mass-spring oscillator having one dimension of motion,
x (SI units [m]), with respect to the fixed ground. There are no gravitational influences to account for in
this context.

The spring exerts a force on the mass m (SI units [kg]) according to the deformation of the spring with
respect to the ground. Assuming that the spring has no undeformed length, the spring force is therefore
f = − sx where the spring stiffness s has units [N/m]. Applying Newton's second law of motion, we
determine the governing equation of motion for the mass

d2x
m = f = − sx
dt 2 (3.2.1)

Rearranging terms and using the notation that that overdot indicates the d / dt operator, we have that

mx + sx =
0 (3.2.2)

Based on the fact that stability conditions require m and s to be >0, we can define a new term ω02 = s / m
such that the governing equation (3.2.2) becomes

x + ω02 x =
 0 (3.2.3)

20
RL Harne ME 5241, Eng. Acoust. 2018 The Ohio State University

Figure 10. Mass-spring oscillator.

Solution to the second-order ordinary differential equation (ODE) (3.2.3) determines the displacement x of
the mass for all times. Solving ODEs is commonly accomplished by assuming trial solutions and verifying
the correctness of the assumed solutions. By engineering intuition, such as one's visual conception of a
mass at the end of a Slinky and how that object may move, we hypothesize that a suitable trial solution to
(3.2.3) is

x = A1 cos γ t (3.2.4)

By substitution, we find that (3.2.4) is a solution to (3.2.3) when γ = ω0 . Likewise, we also discover that an
alternative trial solution to (3.2.3) is

x = A2 sin ω0t (3.2.5)

When an ODE has multiple solutions, the total solution is the superposition of the individual solutions.
Thus, the general solution to (3.2.3) is

x ( t ) A1 cos ω0t + A2 sin ω0t


= (3.2.6)

The term ω0 is called the natural angular frequency and it has units [rad/s]. Thus, the mass will exhibit
oscillatory motion at ω0 . There are 2π radians in a cycle, which gives that the frequency in cycles per
second is ω0 / 2π . We refer to this as the natural frequency f 0 = ω0 / 2π which has units [Hz = cycles/s].
In practice, we would physically observe the mass oscillation over a duration of time. The period T of
one oscillation is therefore T = 1 / f 0 [s].

3.3 Initial conditions

To determine the unknown constants A1 and A2 , we need to leverage further knowledge of the situation.
For instance, two initial conditions are required to solve a second-order ODE. Thus, if the mass

displacement at an initial time t = 0 is x (=t 0=) x0 and the initial mass velocity is x (=t 0=) x=
0 u0 we
can determine the constants by substitution.

x0 = A1 cos ω0 0 + A2 sin ω0 0 = A1 (3.3.1)

−ω0 A1 sin ω0 0 + ω0 A2 cos ω0 0 =


u0 = ω0 A2 (3.3.2)

Using this knowledge, the general solution to (3.2.3), and the mass displacement described for all time, is

21
RL Harne ME 5241, Eng. Acoust. 2018 The Ohio State University

u0 (3.3.3)
x ( t ) x0 cos ω0 t +
= sin ω0 t
ω0

Alternatively, we may express these two sinusoidal functions using an amplitude and phase

x ( t ) A cos [ω0t + φ ]
= (3.3.4)

1/2
A  x02 + ( u0 / ω0 ) 
2
where = and tan φ = −u0 / ω0 x0 .
 
The time derivative of displacement is velocity

x ( t ) = −ω0 A sin [ω0t + φ ] =


u (t ) = −U sin [ω0t + φ ] (3.3.5)

where we defined the speed amplitude U = ω0 A . Likewise, the acceleration becomes

−ω0U cos [ω0t + φ ]


a (t ) = (3.3.6)

From these results, we see that the velocity leads the displacement by 90°, and that the acceleration is 180°
out-of-phase with the displacement, Figure 11. This response is fully dependent upon the initial conditions
of displacement and velocity. Such response is referred to as the free response or free vibration.
50
10x displacement [m]
40
velocity [m/s]
30 acceleration [m/s 2]
20

10
response

-10

-20

-30

-40

-50
0 0.2 0.4 0.6 0.8 1 1.2 1.4 1.6 1.8 2
time [s]

Figure 11. Free vibration response of mass-spring oscillator.

3.4 Energy of vibration

The potential energy associated with deforming a spring is the integration of the spring force across the
path of deformation.
x 1 2
=Ep ∫=
0
sxdx
2
sx (3.4.1)

Here, we assume our integration constant is zero, which means that our minimum of potential energy Ep
is zero. If we substitute (3.3.4) into (3.4.1), we find that

22
RL Harne ME 5241, Eng. Acoust. 2018 The Ohio State University

1 2
=Ep sA cos 2 [ω0t + φ ] (3.4.2)
2

By definition, the kinetic energy of the mass is

1 2 1 2
Ek
= =mx mu (3.4.3)
2 2

Likewise, by substitution of (3.3.5) into (3.4.3) we have

1
=Ek mU 2 sin 2 [ω0t + φ ] (3.4.4)
2

The total energy of this dynamic system is the summation of potential and kinetic energies

1 1 1
E = E p + Ek = mω02 A2 = mU 2 = sA2 (3.4.5)
2 2 2

where we have used the definition ω02 = s / m and the identity sin 2 α + cos 2 α =
1.

The total energy (3.4.5) is independent of time, which is evidence of the conservation of energy. The total
energy is equal to both the peak elastic potential energy (when kinetic energy is zero) and the peak kinetic
energy (when potential energy is zero). Figure 12 illustrates the instantaneous exchange between potential
and kinetic energies over the course of the mass-spring system oscillation.
30
total energy
25
potential energy
20 kinetic energy
energy [J]

15

10

0
0 0.2 0.4 0.6 0.8 1 1.2 1.4 1.6 1.8 2
time [s]

Figure 12. Exchange of energy between potential and kinetic forms as mass-spring oscillates.

3.5 Complex exponential method of solution to ODEs

A useful technique in harmonic analysis of engineering problems is the complex exponential form of

solution. In this course, we will use the engineering notation e jωt where the imaginary number is j= −1
. This notation is in contrast to the notation used in physics and mathematics e − jωt . When confronted with
a derivation performed according to the physics notation, to convert to the engineering notation requires
taking the complex conjugate. In other words, one replaces the j by − j everywhere to recover the
engineering notation. Table 1 consolidates the primary acoustics textbook references which use the complex
exponential notations. One observes a similar number of books in the engineering "camp" as in the

23
RL Harne ME 5241, Eng. Acoust. 2018 The Ohio State University

"physics/math" camp. Note that the underlying principles of acoustics are derived from physics, Figure 1,
so this result should not be surprising.
Table 1. Primary acoustics textbook usage of the complex exponential notation.

engineering e jωt notation [1] [17] [9] [18] [19]

physics/math e − jωt notation [20] [21] [22] [23] [24]

Recalling the equation (3.2.6), the general solution method assumes

x ( t ) = Ae γt (3.5.1)

where we use the boldface type to represent complex numbers, with the exception of the imaginary number

j= −1 . By substituting (3.5.1) into (3.2.3), one finds that γ 2 = −ω02 and thus γ = ± jω0 . Since two
solutions are obtained, the general solution to (3.2.3) by the complex exponential notation is the
superposition of both terms associated with γ = ± jω0

x ( t ) A1e jω0t + A 2 e− jω0t


= (3.5.2)

Recall the initial conditions, x ( 0 ) = x0 and x ( 0=) x=


0 u0 . By substitution we find

x=
0 A1 + A 2 (3.5.3)

=u0 jω0 A1 − jω0 A 2 (3.5.4)

from which we find

1 u0  1 u0 
A1
= A2
 x0 − j  , and=  x0 + j  (3.5.5)
2 ω0  2 ω0 

We see that the unknown constants A1 and A 2 are complex conjugates. Substituting (3.5.5) into (3.5.2)
e ± jθ cos θ ± j sin θ , we find
and using Euler's identity =

1  u  1  u  u
x =   x0 − j 0   e jω0t +   x0 + j 0   e − jω0t = x0 cos ω0t + 0 sin ω0t (3.5.6)
2
  ω 
0  2
  ω 
0  ω 0

The result in (3.5.6) is the same as (3.3.3). Thus, although we used a complex number representation of the
assumed solution (3.5.1), satisfying the initial conditions which are both real entailed that the complex
components of the response were eliminated. In general, there is no need to perform this elimination
process, because the real part of the complex solution is itself the complete general solution of the real
differential equation. For example, we could have alternatively assumed only

24
RL Harne ME 5241, Eng. Acoust. 2018 The Ohio State University

x ( t ) = Ae jω0t (3.5.7)

Given that, in general, A= a + jb , we have that

Re [ x ] a cos ω0t − b sin ω0t


= (3.5.8)

Then we can satisfy the initial conditions

x0 = a0 (3.5.9)

u0 = −ω0 b (3.5.10)

Thus, a = x0 and b = −u0 / ω0 . Then by substitution of a and b into (3.5.8), we again arrive at (3.5.6)
confirming the conclusion that the real part of the complex solution is the complete general solution.

We will use similar assumed solution forms as (3.5.2) throughout this course. According to the relation
between displacement, velocity, and acceleration, we summarize

=u ω0 Ae jω0t
j= jω0 x (3.5.11)

−ω02 Ae jω0t =
a= −ω02 x (3.5.12)

The term e jω t is a unit phasor that rotates in the complex plane, while A is a complex function that
0

modifies the amplitude of the rotating amplitude and shifts it in phase according to the complex component

of A . Thus, =
A a 2 + b 2 and tan φ = b / a such that we have A cos [ω0t + φ ] =
Re  A e (
j ω t +φ )
0  . Figure
 
13 illustrates how the complex exponential form representation may be considered in the complex plane.
The magnitude-scaled phasor rotates in the plane with angular rate ω0 as time changes. The Real
contribution of the phasor oscillates between positive and negative values. By considering (3.5.11), the
velocity phasor leads the phasor rotation of displacement by 90° while the acceleration phasor (3.5.12) is
out-of-phase with the displacement phasor by 180°

Figure 13. Physical representation of a phasor.

25
RL Harne ME 5241, Eng. Acoust. 2018 The Ohio State University

Example: Consider the harmonic response shown in Figure 11. Determine the sinusoidal x ( t ) and complex

phasor x ( t ) representations of the oscillation displacement. Plot the complex phasor representation on a
real and imaginary coordinate plane for t = 1 / 8 [s]. Confirm that the real part of the complex phasor is the
same as the sinusoidal representation.

Answer: The amplitude of the displacement is approximately 1.2 (note the 10x scaling for visualization

purposes), thus, A = 1.2 [m]. The frequency of the oscillation is 1 cycle per second, thus ω0 = 2π (1)
[rad/s]. The initial velocity u0 is approximately -2 [m/s]. Using (3.5.9) and (3.5.10), we find that a = 1.2

and b =− ( −2 ) / 2π =1/ π . Therefore, we=


find that φ tan
= −1
[1/1.2π ] 14.8 ° or 0.259 [rad]. With these
components, we can express the two representations of the sinusoid

=x ( t ) 1.2cos [ 2π t + 0.259] (3.5.13)

x ( t ) = 1.2e [
j 2π t + 0.259]
(3.5.14)

A plot of the phasor representation of this harmonic oscillation at the time t = 1 / 8 [s] is shown in Figure
14.

The complex phasor representation is expanded by Euler's identity to yield

( t ) 1.2e [ = 1.2cos [ 2π t + 0.259] + j1.2sin [ 2π t + 0.259]


j 2π t + 0.259]
x= (3.5.15)

Re  x ( t )  1.2cos [ 2π t + 0.259] which is the same as (3.5.14), thus verifying


Thus, the real part of this is =
the equivalence.
1
x(t)=1.2*exp[j*(2*pi*1/8+0.259)]
0.9

0.8

0.7
imaginary axis

0.6

0.5

0.4

0.3

0.2

0.1

0
-0.2 0 0.2 0.4 0.6 0.8
real axis

Figure 14. Phasor representation.

Table 2. MATLAB code to generate Figure 14.

A=1.2;
phi=14.8*pi/180;
omega_0=2*pi;

26
RL Harne ME 5241, Eng. Acoust. 2018 The Ohio State University

t=1/8;
x=1.2*exp(j*(2*pi*t+phi));
figure(1);
clf;
plot([0 abs(x)*cos(2*pi*t+phi)],[0 abs(x)*sin(2*pi*t+phi)],'-r');
xlabel('real axis');
ylabel('imaginary axis')
axis equal
legend('x(t)=1.2*exp[j*(2*pi*1/8+0.259)]','location','northwest');

3.6 Damped oscillations

All real systems are subjected to phenomena that dissipate kinetic energy as time elapses. In general, these
damping phenomena decrease the amplitude of free oscillations as time increases. Unlike forces
characterized with potential energy and inertial forces associated with kinetic energy, damping forces are
often identified empirically, meaning that sequences of tests are conducted to study the rate at which energy
decays in the system according to changes in the damping element's parameters. A common form of
damping observed is viscous damping, which exerts a force proportional to the velocity of the mass away
from its equilibrium, Figure 15.

dx
f r = − Rm (3.6.1)
dt

where the damping constant Rm has a positive value and SI unit [N.s/m]. We refer to this damping constant
as the mechanical resistance. Typical dampers use turbulent phenomena of fluids or gasses being forced
through orifices and channels to induce dissipative effects proportional to the damper extensional velocity.

Figure 15. Damped harmonic oscillator schematic.

Considering the damped mass-spring oscillator schematic of Figure 15, we again apply Newton's laws to
yield

mx
 + Rm x + sx =
0 (3.6.2)

By incorporating our definition of the natural angular frequency, we express

Rm
x+
 x + ω02 x =
0 (3.6.3)
m

To solve this equation, we resort to the complex exponential solution method assuming that

x ( t ) = Ae γt (3.6.4)

By substituting (3.6.4) into (3.6.3), we find

27
RL Harne ME 5241, Eng. Acoust. 2018 The Ohio State University

 2 Rm 2
(3.6.5)
 γ + m γ + ω0  Ae =
jγt
0
 

The only non-trivial solution to (3.6.5) requires that the terms in brackets be equal to zero. Thus

γ =− β ± ( β 2 − ω02 )
1/ 2
(3.6.6)

where we have introduced β = Rm / 2m which has units of [1/s]. Often, the damping is small such that
ω0 >> β . We can then consider rearranging the radicand by defining the damped natural angular
frequency

ωd
= ω02 − β 2 (3.6.7)

such that the term γ is given by

γ =− β ± jωd (3.6.8)

Considering the other common conventions of denoting damping, we recognize that

Rm
= ζω0 (3.6.9)
2m

ω d ω0 1 − ζ 2 .
where ζ is termed the damping ratio. From this, we recognize that β = ζω0 , and=

Substituting (3.6.8) into (3.6.4), the general solution to the damping mass-spring oscillator free vibration
problem is

=x e − β t  A1e jωd t + A 2 e − jωd t  (3.6.10)

As described and demonstrated previously, only the real part of (3.6.10) is the complete, general solution
to (3.6.3). Thus, we can express (3.6.10) as

=x ( t ) Ae− β t cos [ωd t + φ ] (3.6.11)

As before, the constants A and φ are determined by applying initial conditions to (3.6.11) and its first
derivative.

Unlike the undamped harmonic oscillator considered in 3.2, the vibrations of the damped oscillator decay
in amplitude in time by virtue of the term e − β t . To assess the rate at which this decay occurs, we recognize
that the term β has units of [1/s]. Thus, we can therefore define a relaxation time τ = 1 / β that
characterizes the decay of the oscillation amplitude as time increases.

28
RL Harne ME 5241, Eng. Acoust. 2018 The Ohio State University

1.5

1
displacement [m]

0.5

-0.5

-1
0 1 2 3 4 5 6 7 8
time [s]

Figure 16. Damped free oscillation. β / ω0 = 0.1 , and the same additional parameters as those to generate Figure 11.

3.7 Harmonically forced oscillations

In contrast with free vibration, forced oscillations are those induced by externally applied forces f (t ) ,
Figure 17. This modifies the governing equation (3.6.2) to yield

mx + Rm x + sx =f (t ) (3.7.1)

For linear response by the harmonic oscillator, the total response is the summation of the individual
responses. Thus, by recalling Fourier's theorem that any periodic function may be described using an infinite
series of sinusoids (even when the period extends for an infinitely great duration), we consider that the force
f ( t ) is composed according to f ( t ) = ∑ f i ( t ) where each fi ( t ) accounts for one of the harmonic,
i

sinusoidal components of force. Based on this, by linear superposition we only need to solve for the
response of the oscillator when subjected to harmonic forces occurring at single frequencies. Once we solve

for the individual displacement responses xi ( t ) due to the respective harmonic forces fi ( t ) , we then obtain
the total response by the superposition of the individual displacement responses.

Figure 17. Schematic of forced damped harmonic oscillator.

Therefore, in this course we will often consider harmonic driving inputs at single frequencies, such as
f ( t ) = F cos ωt . When excited by such a force from a state accounting for unique initial conditions of
displacement and velocity, the oscillator will undergo two responses: transient response associated with the
initial conditions and a steady-state response associated with the periodic forcing function.

The steady-state response is the solution to the ODE (3.7.1) and it considers the initial conditions to be
zero-valued during the solution process.

29
RL Harne ME 5241, Eng. Acoust. 2018 The Ohio State University

The transient response is the solution to the ODE (3.6.2) accounting for the initial conditions and zero-
valued forcing function. For systems with even a small amount of damping, the transient response decays
as time increases. At times sufficiently greater than the relaxation time τ = 1 / β , the transient response is
insignificantly small when compared to the steady-state response induced by the forcing function.

In many engineering contexts, and often in acoustical engineering, we will be concerned with the steady-
state response. In these cases, the complex exponential solution approach to the ODE (3.7.1) will be

beneficial to ease the mathematical operations. Consider that the real driving force f ( t ) = F cos ωt is

replaced with the complex driving force f ( t ) = Fe jωt . Then, the equation (3.7.1) becomes

m ẍ + Rm ẋ + s x = F e^(jωt) (3.7.2)

Since the real part of the driving force represents the actual driving force, similarly, the real part of the complex displacement Re[x] that solves the equation (3.7.2) represents the actual displacement resulting from the force.

To proceed with the solution, we assume x = A e^(jωt) and by substitution into (3.7.2) we have

[−ω^2 m + jω Rm + s] A e^(jωt) = F e^(jωt) (3.7.3)

In general, the application of the above approach assumes time-harmonic response. Such an assumption does not rely upon forced excitation, as will be seen in the analysis of acoustic systems. The use of the complex exponential solution form is simply one way to satisfy the ODE.

We then solve for the complex displacement coefficient A and substitute that back into the assumed
solution form to determine the complex displacement

x = F e^(jωt) / [(s − ω^2 m) + jω Rm] = (1/jω) F e^(jωt) / [Rm + j(ωm − s/ω)] (3.7.4)

Similar to the case of free vibration for (3.5.11), the complex steady-state velocity of the mass is u = jω x

u = F e^(jωt) / [Rm + j(ωm − s/ω)] (3.7.5)

We now introduce the complex mechanical input impedance Z m

Zm = Rm + jXm (3.7.6)

where we define the mechanical reactance Xm = ωm − s/ω. The magnitude of the mechanical impedance Zm = |Zm| e^(jΘ) is

|Zm| = [Rm^2 + (ωm − s/ω)^2]^(1/2) (3.7.7)


while the corresponding phase angle is found from

tan Θ = Xm / Rm = (ωm − s/ω) / Rm (3.7.8)

Considering (3.7.6), the dimensions of the mechanical impedance are the same as the mechanical resistance,
[N.s/m].

Considering (3.7.5), we see that the mechanical impedance is equal to the ratio of the driving force to the
harmonic response velocity

Zm = f / u (3.7.9)

From (3.7.9), we see that the complex mechanical impedance Z m is the ratio of the complex driving force
f to the complex velocity u of the system. The interpretation of (3.7.9) is important.
• First, note that (3.7.9) is a transfer function between f and u in the frequency domain.
• Second, for large mechanical input impedance magnitudes, (3.7.9) indicates that large force is
required to achieve a given system velocity, all other factors remaining the same. Whereas in
contrast, for small mechanical impedance magnitudes it is relatively easy to apply the harmonic
driving force to obtain considerable system velocity in oscillation.
• It is also important to recognize that the mechanical input impedance can be measured by collocated force and velocity transducers on the mechanical system. Because the transfer function of mechanical input impedance is in the frequency domain, it can be computed by taking the ratio of the Fast Fourier transforms of the force and velocity measurements; a minimal sketch of this computation follows this list.
• The real part of the mechanical input impedance, the resistance, is associated with damping or dissipative effects. For waves, the resistance is better regarded as a measure of energy transfer, since energy is steadily moved from one location to another, which acts similarly to a damping phenomenon near a sound source.
• The imaginary part of the mechanical input impedance, the reactance, is associated with reciprocal
energy exchange due to its close connection to mass and stiffness, and hence kinetic and potential
energies, respectively.
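
As a minimal sketch of that FFT-based computation (the force and velocity records here are simulated single-frequency signals with assumed amplitude and phase, standing in for transducer measurements):

fs=1024; t=0:1/fs:4-1/fs; % [Hz] sampling rate; [s] time record
f_drive=3; % [Hz] assumed drive frequency
force=1.0*cos(2*pi*f_drive*t); % [N] simulated collocated force measurement
vel=0.2*cos(2*pi*f_drive*t-pi/4); % [m/s] simulated collocated velocity measurement
F=fft(force); U=fft(vel); % Fast Fourier transforms of the records
[~,idx]=max(abs(F)); % frequency bin of the drive frequency
Z_m=F(idx)/U(idx); % [N.s/m] complex mechanical input impedance at that frequency
disp([abs(Z_m) angle(Z_m)]); % magnitude (5 [N.s/m]) and phase angle Theta (pi/4 [rad])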

Thus knowing the mechanical input impedance, one may compute the complex velocity for a different
harmonic forcing function according to u = f / Z m . By virtue of the assumed solution form x = Ae jωt , the
complex displacement becomes x = f / jω Z m . In other words, determining Z m is analogous to solving the
differential equation of motion for the linear system.

As a result, the actual displacement is the real part

Re[x] = Re[f / jωZm] = x = (F / ω|Zm|) sin[ωt − Θ] (3.7.10)

whereas the actual speed is


Re[u] = u = (F / |Zm|) cos[ωt − Θ] (3.7.11)

From (3.7.11), for a constant amplitude of the harmonic force, it is seen that the speed of the mass is
maximized when the impedance magnitude is minimized. This occurs when

ωm − s/ω = 0 → ω = (s/m)^(1/2) = ω0 (3.7.12)

In other words, when the harmonic excitation frequency corresponds to the natural frequency, the speed of
the mass oscillation will be maximized. This phenomenon is called resonance. Thus, at resonance, the
impedance is minimized and purely real, such that the speed is

u ≈ (F/Rm) e^(jωt) and u = Re[u] = (F/Rm) cos ωt (3.7.13)

As seen in (3.7.13), for a given harmonic force amplitude, when driven at resonance the system response is
damping- or resistance-controlled. In other words, the damping is the principal determinant for the
amplitude of the system velocity at frequencies close to resonance.

When the excitation frequency is significantly less than the natural angular frequency ω << ω0 , we write

ω Zm / m = ω Rm / m + j[ω^2 − ω0^2]
ω Zm / m ≈ ω Rm / m + j[−ω0^2] (3.7.14)
Zm ≈ Rm + j[−s/ω] → Zm ≈ −js/ω

Thus, when the excitation frequency is much less than the natural frequency, the mass velocity is

u ≈ (jωF/s) e^(jωt) (3.7.15)

from which we see that the complex displacement is

x ≈ (F/s) e^(jωt) (3.7.16)

yielding that the actual mass displacement is

x = Re[x] = (F/s) cos ωt (3.7.17)

This shows that at excitation frequencies considerably less than the natural frequency, the system response
is stiffness-controlled.

Similarly, for high frequencies, ω >> ω0 , this routine will yield that the complex acceleration is

a ≈ (F/m) e^(jωt) (3.7.18)

giving a real acceleration response of


a = Re[a] = (F/m) cos ωt (3.7.19)

which shows that at high excitation frequency the system response is mass-controlled.

Summarizing the findings from the above derivations:

• Around resonance ω ≈ ω0, the system is damping- or resistance-controlled and the velocity is independent of frequency, although the band of frequencies around which this occurs is typically narrow for lightly damped structures in many engineering contexts
• For harmonic excitation frequencies significantly below the natural angular frequency ω << ω0, the system is stiffness-controlled and the displacement is independent of the excitation frequency
• For excitation frequencies much greater than the natural angular frequency ω >> ω0, the system is mass-controlled and the acceleration is independent of the excitation frequency

These results are summarized by the example plot shown in Figure 18.
[Plot: amplitude versus frequency [Hz] on log-log axes, showing the velocity [m/s], displacement [m], and acceleration [m/s^2] amplitudes.]

Figure 18. Harmonic force excitation of damped mass-spring oscillator. Rm =0.377 [N.s/m], m =1 [kg], s =39.5 [N/m], F
=1 [N].
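
A minimal MATLAB sketch to reproduce a plot like Figure 18, using the caption's parameter values and the impedance magnitude (3.7.7):

R_m=0.377; m=1; s=39.5; F=1; % parameter values from the Figure 18 caption
omega=2*pi*logspace(-2,2,901); % [rad/s] excitation frequencies
Z_m=sqrt(R_m^2+(m*omega-s./omega).^2); % [N.s/m] impedance magnitude (3.7.7)
U=F./Z_m; % [m/s] velocity amplitude
X=U./omega; % [m] displacement amplitude
A=U.*omega; % [m/s^2] acceleration amplitude
loglog(omega/2/pi,U,omega/2/pi,X,omega/2/pi,A);
xlabel('frequency [Hz]'); ylabel('amplitude');
legend('velocity [m/s]','displacement [m]','acceleration [m/s^2]');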

3.8 Mechanical power


In acoustical engineering applications, the power and energy associated with sound sources and radiation
are critical factors to assess in the determination of acoustic performance and quality. To first introduce
these concepts, we consider the power relations for the mechanical oscillator.

The instantaneous mechanical power Π i delivered to the harmonic oscillator is determined by the product
of the instantaneous driving force and the corresponding instantaneous speed

Πi = Re[f] Re[u] = (F^2 / |Zm|) cos ωt cos[ωt − Θ] (3.8.1)

In general, the average power Π delivered to the system is the more relevant engineering quantity to
consider. The average power is the instantaneous power averaged over one cycle of oscillation


Π = (1/T) ∫0^T Πi dt = (ω F^2 / 2π|Zm|) ∫0^(2π/ω) {cos ωt cos[ωt − Θ]} dt

= (ω F^2 / 2π|Zm|) ∫0^(2π/ω) {cos^2 ωt cos Θ + cos ωt sin ωt sin Θ} dt (3.8.2)

= (F^2 / 2|Zm|) cos Θ

Recalling (3.7.8),

tan Θ = Xm / Rm = (ωm − s/ω) / Rm = sin Θ / cos Θ (3.8.3)

which shows that cos Θ = Rm / |Zm| and leads to the result that the average power delivered to the mechanical oscillator is

Π = Rm F^2 / 2|Zm|^2 (3.8.4)

The units of power, whether instantaneous or average, are Watts [W]. When the oscillator is driven at resonance ω = ω0, the average power becomes Π = F^2 / 2Rm, which shows that the peak average power delivered to the oscillator is also damping-controlled.

3.9 Transfer functions

The mechanical input impedance is a transfer function between the complex driving force and the steady-
state complex velocity of the oscillator. Similar transfer functions will be encountered throughout this
course on acoustics.

In the general case of complex numbers, such as x1 = A1 e^(jφ1) and x2 = A2 e^(jφ2), the transfer function computation involves the multiplication or division of the complex numbers.

Example: Determine and plot the transfer function x1 / x 2 for the oscillations shown in Figure 19 that
vibrate with the same frequency ω.
Answer: The amplitude of x1 is A1 ≈ 1.6 [m], while the amplitude of x2 is A2 ≈ 0.94 [m]. Consider that the phase of x1 is φ1 = 0. In this light, x2 leads x1 by about 0.2 [s], which is one-fifth of the oscillation period T = 2π/ω [s], where the oscillation frequency is approximately f = 1 [Hz], yielding the angular oscillation frequency ω = 2π [rad/s]. To recognize that x2 leads x1, as opposed to the opposite case, consider that the peak of x2 occurs at time 0.8 [s] while another 0.2 [s] elapses until the peak of x1 occurs. Thus, the phase of x2 relative to x1 is determined with respect to one period of the oscillation.

φ2 = (1/5) 2π = 2π/5 [rad]



Thus, x1 = 1.6 e^(jφ1) e^(jωt) = 1.6 e^(jωt) and x2 = 0.94 e^(j(φ1+φ2)) e^(jωt) = 0.94 e^(j2π/5) e^(jωt). Therefore, the transfer function x1 / x2 is

x1 / x2 = 1.6 / [0.94 e^(j2π/5)] = 1.7 e^(−j2π/5)

Putting this into real and imaginary components, x1 / x2 = 1.7 cos[2π/5] − j1.7 sin[2π/5]. This transfer function result is plotted below. Note that the individual phasors x1 and x2 rotate in the complex plane as time elapses via e^(jωt), while their ratio remains fixed.

Figure 19. Harmonic oscillations.


Figure 20. Transfer function of the two harmonic oscillations.
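
The example computation is quickly verified in MATLAB (a minimal sketch using the amplitudes and phase read from Figure 19):

x1=1.6*exp(1j*0); % [m] complex amplitude of x1 with phi_1=0
x2=0.94*exp(1j*2*pi/5); % [m] complex amplitude of x2, leading x1 by 2*pi/5 [rad]
H=x1/x2; % transfer function x1/x2
disp([abs(H) angle(H)]); % magnitude ~1.7 and phase ~-2*pi/5 [rad]
disp([real(H) imag(H)]); % real and imaginary components, cf. Figure 20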

3.10 Linear combinations of simple harmonic oscillations

In numerous vibration and acoustics contexts, it is often necessary to determine the amplitude of response associated with a combination of individual responses. We consider only linear systems here, so the principle of linear superposition applies.

Consider two harmonic oscillations at the same angular frequency ω . These two harmonic oscillations are
termed coherent or correlated due to the sharing of frequency.

x1 = A1 e^(j(ωt+φ1)) and x2 = A2 e^(j(ωt+φ2)) (3.10.1)

The linear combination x= x1 + x 2 is therefore

A e^(j(ωt+φ)) = (A1 e^(jφ1) + A2 e^(jφ2)) e^(jωt) (3.10.2)


Determining the magnitude A and the total phase φ is accomplished trigonometrically from considering
representative phasors in the complex number coordinate plane, Figure 21
A = [(A1 cos φ1 + A2 cos φ2)^2 + (A1 sin φ1 + A2 sin φ2)^2]^(1/2) (3.10.3)

tan φ = (A1 sin φ1 + A2 sin φ2) / (A1 cos φ1 + A2 cos φ2) (3.10.4)

Figure 21. Phasor combination of x = x1 + x2.

The real response displacement is

x = Re [ x1 ] + Re [ x 2 ] = A cos [ωt + φ ] (3.10.5)

As will be seen throughout later portions of this course, many response metrics of interest in acoustics
involve mean-square and root-mean-square quantities. The mean-square is computed by
⟨x1^2⟩ = (ω/2π) ∫0^(2π/ω) (A1 cos[ωt + φ1])^2 dt = (ω/2π) ∫0^(2π/ω) A1^2 cos^2[ωt + φ1] dt (3.10.6)

Keep in mind that the name "mean-square" indicates the operations are applied to the expression from right to left: thus, square the expression first and then take the mean. Using trigonometric identities on (3.10.6) yields
⟨x1^2⟩ = (ω/2π) ∫0^(2π/ω) A1^2 {1/2 + (1/2) cos[2(ωt + φ1)]} dt = A1^2 / 2 (3.10.7)

The root-mean-square (RMS) is the square root of the mean-square quantity, and is typically denoted by
xrms . Likewise, operate on the expression from the right-to-left: square the expression, take the mean, and
then take the square root of the mean result. Thus, continuing the derivation from (3.10.7), it is seen that
the RMS oscillation displacement is


x1,rms = A1 / √2 (3.10.8)

so as to relate the response amplitude A1 to the RMS value of the harmonic component x1,rms via √2 x1,rms = A1.

From the perspective of complex exponential representations (3.10.1), the RMS of the oscillation is likewise
the amplitude of the response divided by the square-root of 2.

For the summation of two harmonic oscillations having the same frequency (3.10.5), the mean-square is found by steps. First, the square Re[x]^2 = x^2 is

x^2 = A1^2 cos^2[ωt + φ1] + A2^2 cos^2[ωt + φ2] + 2 A1 A2 cos[ωt + φ1] cos[ωt + φ2]
= (1/2) A1^2 (1 + cos[2ωt + 2φ1]) + (1/2) A2^2 (1 + cos[2ωt + 2φ2]) + A1 A2 (cos[2ωt + φ1 + φ2] + cos[φ1 − φ2]) (3.10.9)
where trigonometric identities are used to transition from the first to second line of (3.10.9). Then, we
compute the mean-square x 2 and find

⟨x^2⟩ = (1/2) A1^2 + (1/2) A2^2 + A1 A2 cos[φ1 − φ2]
= ⟨x1^2⟩ + ⟨x2^2⟩ + 2 x1,rms x2,rms cos[φ1 − φ2] (3.10.10)

Thus, for harmonic oscillations occurring at the same frequency ω, a significant variation in the total response may occur as relates to the mean-square output. For instance, if A0 = A1 = A2 and if φ1 = φ2 = 0, then the mean-square displacement is ⟨x^2⟩ = (1/2) A0^2 + (1/2) A0^2 + A0^2 = 2 A0^2, so as to quadruple the mean-square result with respect to an individual harmonic oscillation: (1/2) A0^2. On the contrary, if A0 = A1 = A2 and φ1 = 0 while φ2 = φ1 + π = π, then we find ⟨x^2⟩ = (1/2) A0^2 + (1/2) A0^2 + A0^2 cos[−π] = A0^2 − A0^2 = 0, which shows that out-of-phase oscillations destructively interfere so as to eliminate the mean-square measure. Of course, this occurs for direct oscillation summation as well, but this also confirms that the mean-square quantity likewise is eliminated.

Again adopting the perspective of complex exponential representations (3.10.1), the RMS can be computed
from comparable steps. The results of (3.10.1) and (3.10.2) have indeed already yielded the bulk of the
derivation:

x = (A1 e^(jφ1) + A2 e^(jφ2)) e^(jωt) = A1 e^(j(ωt+φ1)) + A2 e^(j(ωt+φ2)) = A e^(j(ωt+φ)) (3.10.11)


where A and φ are defined in (3.10.3) and (3.10.4), respectively. Considering what was shown in (3.10.6)
- (3.10.8), the RMS of x is

xrms = A / √2 (3.10.12)

One may perceive a contradiction between (3.10.12) and the example cases above that demonstrated that
the RMS of a summation of harmonic oscillations at the same frequency can potentially constructively or
destructively interfere. Yet, recalling (3.10.1), the amplitude in (3.10.12) varies according to such phase
differences between the oscillations. Therefore, for instance in the event of perfect destructive interference
by waves of the same amplitude but out-of-phase, we find A → 0 which was shown above.

Note that in general, x = A e^(jωt) where the contribution of the phase is included within the complex amplitude A. Thus, xrms^2 = |A|^2 / 2.

Two oscillations that occur at different frequencies are termed incoherent or uncorrelated. When the
oscillations do not occur at the same frequency, there is no further simplification to adopt for the linear
combination.

x = x1 + x2 = A1 cos [ω1t + φ1 ] + A2 cos [ω2t + φ2 ] (3.10.13)

For the mean-square quantity, the computation shows

⟨x^2⟩ = ⟨x1^2⟩ + ⟨x2^2⟩ (3.10.14)

Also, the squared RMS quantities are the summation of the individual terms

xrms^2 = x1,rms^2 + x2,rms^2 (3.10.15)

The same results for the mean-square and RMS quantities would be obtained by applying the complex
exponential form of the response.

Finally, by linear superposition, the procedures outlined above for two harmonic oscillations extend to any
number of oscillations. In particular, for coherent or correlated sinusoid summation, it is found that
A = [(Σ An cos φn)^2 + (Σ An sin φn)^2]^(1/2) (3.10.16)

tan φ = (Σ An sin φn) / (Σ An cos φn) (3.10.17)

Thereafter, following the computation of (3.10.16), the mean-square and RMS values follow naturally from
computations of (3.10.7) and (3.10.12), respectively.


For incoherent or uncorrelated sinusoid summation, there are no general simplifications available to
employ for the resulting oscillation itself, although the mean-square and the RMS quantities are respectively
computed from

⟨x^2⟩ = Σ ⟨xn^2⟩ (3.10.18)

xrms^2 = Σ xn,rms^2 (3.10.19)
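
These coherent and incoherent combination rules are readily checked numerically; the following minimal MATLAB sketch (with assumed unit amplitudes and an arbitrarily chosen second frequency for the incoherent case) estimates the mean-squares by time averaging:

omega=2*pi; t=linspace(0,100,2^16); % [rad/s]; a long [s] record for averaging
A1=1; A2=1; % assumed amplitudes
x_co =A1*cos(omega*t)+A2*cos(omega*t); % coherent, in phase
x_out=A1*cos(omega*t)+A2*cos(omega*t+pi); % coherent, out of phase
x_in =A1*cos(omega*t)+A2*cos(3.1*omega*t); % incoherent: different frequencies
disp(mean(x_co.^2)); % ~2.0, quadruple of A1^2/2, per (3.10.10)
disp(mean(x_out.^2)); % ~0, destructive interference
disp(mean(x_in.^2)); % ~1.0 = A1^2/2 + A2^2/2, per (3.10.18)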

3.11 Further Examples

Problem: Plot the average mechanical power in [W] associated with a harmonically driven oscillator where the complex force is f(ω) = F(ω) e^(jωt) = (F0/ω^2) e^(jωt) with F0 = 1.4 [N.s^2], and the oscillator characteristic parameters are m = 150 [g], s = 400 [N/m], and Rm = (0.03, 3) [N.s/m]. Between the two cases of mechanical resistance (damping), remark upon a comparison of this result to the case in which the amplitude of the complex force is independent of the harmonic excitation frequency, i.e. f(ω) = F0 e^(jωt). Use F0 = 1.4 [N].

Answer: As evident by the derivation of (3.8.2), the relation (3.8.4) holds whether the amplitude of the complex force is independent of or dependent on the excitation frequency, since the average is taken over time and not frequency. Thus, in (3.8.4), for the frequency-dependent force amplitude, one replaces F → F/ω^2. Using the code in Table 3 below, the plots below are generated. It is observed in general that the rate of power delivery to the oscillator above the natural frequency f0 = ω0/2π ≈ 8.2 [Hz] is substantially less for the case of the excitation force with frequency-dependent characteristics than for the force with amplitude that is frequency-independent. The frequency-independent excitation force is most efficient at delivering power to the oscillator at resonance, while this is not necessarily the case when the force is frequency-dependent. For instance, for the frequency-dependent excitation force, the significance of the damping determines whether or not the power delivery around resonance is substantially greater than the power provided to the oscillator at frequencies near resonance (such as within one order-of-magnitude).


[Plots: average power [W] versus frequency [Hz] on log-log axes, one panel for Rm = 0.03 [N.s/m] and one for Rm = 3 [N.s/m], each comparing the frequency-dependent and frequency-independent complex force amplitudes.]

Table 3. MATLAB code used to generate the plots above.

F_0=1.4; % [N] complex force magnitude constant


m=0.15; % [kg] mass of oscillator
s=400; % [N/m] stiffness of oscillator
R_m=0.03; % [N.s/m] mechanical resistance (damping) of oscillator
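% (per the problem statement, rerun with R_m=3 [N.s/m] to generate the second plot)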
omega_0=sqrt(s/m); % [rad/s] natural angular frequency
omega=omega_0*logspace(-1,1,901); % [rad/s] range of excitation frequency
F1=F_0./omega.^2; % [N] amplitude of complex force, frequency dependent
F2=F_0; % [N] amplitude of complex force, frequency independent
Z_m=(R_m^2+(m*omega-s./omega).^2).^(1/2); % [N.s/m] mechanical input impedance
power1=R_m/2*F1.^2./Z_m.^2; %[W] power with frequency dependent force
power2=R_m/2*F2.^2./Z_m.^2; %[W] power with frequency independent force
figure(1);
clf;
loglog(omega/2/pi,power1,'-k',omega/2/pi,power2,'--r');
xlim([min(omega) max(omega)]/2/pi);
xlabel('frequency [Hz]');
ylabel('power [W]');
legend('frequency dependent complex force amplitude','frequency independent complex force amplitude','location','south');
title(['R_m=' num2str(R_m) ' [N.s/m]']);


4 Wave equation, propagation, and metrics

4.1 One-dimensional wave equation

Low-frequency, one-dimensional wave propagation through many media -- soil, metals, human tissue, and so on -- when the medium is long in one dimension relative to the wavelength, is studied to a first approximation by considering the medium to be a one-dimensional rod.

Waves in matter are distinct from vibrations of lumped-parameter systems. The wave equation characterizes the propagation of oscillation energy through space without net mass transport. In mechanical vibrations, the energy is associated with the displacements and velocities of a lumped mass: thus, "where it goes" is directly related to the energy quantity. Waves transmit energy in a way that does not require the mass to "go somewhere". Such unique and simple physics is what sets acoustics apart from vibrations.

We first derive the wave equation for the one-dimensional, longitudinal deformations of a rod (also called
"bar") because it is a valuable step to establish methods required to derive the acoustic wave equation.

Consider the undeformed and deformed rod differential element dx in Figure 22(a). We assume that the
rod in which the element belongs has an infinite length (i.e. reflections at ends are neglected). The rod has
constant properties including cross-sectional area S [m2], Young's modulus Y [N/m2], and density per unit
volume ρ [kg/m3]. The rod is subjected to longitudinal forces that produce longitudinal displacements

ξ ( x, t ) . For narrow rods with respect to the rod length, these displacements are effectively the same
throughout a given cross-section; in other words, we assume that the deformations ξ ( x, t ) are uniform over
a given cross-sectional area of the rod. The location of the differential element at time t =0 is shown in the
top of Figure 22(a) while the bottom of Figure 22(a) illustrates how the element is deformed following the
elapse of time t .

Figure 22. (a) Deformed one-dimensional rod. (b) Free-body diagram of the differential element. (c) Schematic of the
approximation of the deformation.


The free-body diagram of the differential element is shown in Figure 22(b). The forces on the faces of the cross-sectional areas are determined via Hooke's law. Considering the "left" face of the differential element first, we find

F ( x , t ) = Sσ ( x , t ) (4.1.1)

where σ ( x, t ) is the stress on a given face, and we have adopted the convention that the stress is positive
in compression and negative in tension. Stress is related to the material strain via the constitutive relation

σ ( x, t ) = −Y ε ( x, t ) (4.1.2)

In (4.1.2), the positive compressive stress is a result of a negative strain; because the Young's modulus Y is positive, the right-hand side of (4.1.2) is likewise a positive value. Although this convention is the reverse of that used by materials scientists, it is the convention in the context of studying acoustics because positive pressure changes correspond to decreases in the volume of a fluid.

Finally, we use the strain-displacement relation for this compressive stress

ε(x, t) = ∂ξ(x, t)/∂x (4.1.3)

All together, the force on the differential element face is found to be

F(x, t) = −SY ∂ξ(x, t)/∂x (4.1.4)

Equation (4.1.4) is Hooke's law for the rod. The force on the "right" face of the differential element is

F(x + dx, t) = −SY ∂ξ(x + dx, t)/∂x (4.1.5)

A Taylor series approximation for the displacement is taken to yield

ξ(x + dx, t) ≈ ξ(x, t) + (∂ξ(x, t)/∂x) dx (4.1.6)

This approximation is illustrated in Figure 22(c). Of course, for small deformations from the original
element, the approximation is reasonably accurate. For all purposes in this course, we will consider such
small amplitude perturbations from an equilibrium which form the underpinnings of a significant proportion
of topics in acoustics. By using (4.1.6), we find that (4.1.5) is written

F(x + dx, t) ≈ −SY (∂/∂x)[ξ(x, t) + (∂ξ(x, t)/∂x) dx] = −SY ∂ξ(x, t)/∂x − SY (∂^2ξ(x, t)/∂x^2) dx (4.1.7)

The average inertial force of the differential element is shown in Figure 22(c) and is


ρS (∂^2ξ(x, t)/∂t^2) dx (4.1.8)

Using Newton's second law, we obtain the governing equation of motion for the rod

F(x, t) − F(x + dx, t) = ρS (∂^2ξ(x, t)/∂t^2) dx (4.1.9)

−SY ∂ξ(x, t)/∂x − [−SY ∂ξ(x, t)/∂x − SY (∂^2ξ(x, t)/∂x^2) dx] = ρS (∂^2ξ(x, t)/∂t^2) dx (4.1.10)

SY ∂^2ξ(x, t)/∂x^2 = ρS ∂^2ξ(x, t)/∂t^2 (4.1.11)

Equation (4.1.11) is then put into a new form

∂^2ξ(x, t)/∂x^2 = (1/c^2) ∂^2ξ(x, t)/∂t^2 (4.1.12)

Equation (4.1.12) is the one-dimensional wave equation. The term c is the phase speed and, as will be
shown below, it is the speed at which the wave propagates in the rod,

c = (Y/ρ)^(1/2) (4.1.13)

In SI units, the phase speed is [m/s]. The wave equation (4.1.12) is a partial differential equation (PDE)
unlike the ODEs dealt with in mechanical vibrations. As a result, and intuitively, the "oscillations"
associated with the rod deformation vary in time and in space (along the rod length).

4.1.1 General solution to the one-dimensional wave equation


The general solution to the wave equation (4.1.12) was derived by mathematician Jean le Rond d'Alembert
in 1747. The proof of this solution is outside the scope of this course (interested individuals should see
[25]), so we will only focus on its consequences in realizing wave phenomena according to its satisfaction
of (4.1.12). Consider a solution to (4.1.12) of the general form

ξ(x, t) = ξ1(ct − x) + ξ2(ct + x) (4.1.1.1)

where the terms in the parentheses are the arguments of functions ξ1 and ξ 2 (like θ is the argument of
sin θ ), as opposed to multipliers. These two functions can be arbitrary [25]. We investigate this general
solution (4.1.1.1) by considering the result of a single function ξ1 at different times, t1 and t2 , Figure 23.

Because the wave shape remains constant, we find that ξ1 ( x1 , t1 ) = ξ1 ( x2 , t2 ) . As a result, the arguments of
the functions must result in the same outcome. Consider that the wave at x1 at time t1 is the same as the
wave response at x2 at time t2 . Thus, the wave function arguments between these two times must yield the
same result


ct1 − x1 = ct2 − x2 → c = (x2 − x1)/(t2 − t1) (4.1.1.2)

As a result, it is clear that the parameter c is the speed at which a wave propagates. It is termed the phase
speed because this is the rate at which a point of the wave having constant phase travels. The latter will
become more apparent when we consider harmonic waves. Occasionally, for longitudinal vibrations, c is
referred to as a wave speed.

If we repeat the analysis for ξ 2 ( ct + x ) , we find that in increasing time the wave travels "to the left". Thus,
comparing the arguments ct − x and ct + x , we see that waves either travel to the right ( ct − x ) with
increasing values of x having constant phase, or waves travel to the left ( ct + x ) with decreasing values of
x having constant phase. As shown in (4.1.1.1), the general solution to the wave equation (4.1.12) is the
summation of both of these individual functions.

Figure 23. Wave function propagating in time.
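
The traveling-wave character of ξ1(ct − x) can be visualized with a few MATLAB lines (a minimal sketch; the Gaussian pulse shape and the steel phase speed are assumed only for illustration):

c=5064; x=linspace(0,10,1001); % [m/s] assumed phase speed (steel); [m] spatial grid
xi1=@(arg) exp(-(arg/0.5).^2); % assumed pulse shape standing in for xi_1
for t=[0 5e-4 1e-3] % [s] three snapshot times
plot(x,xi1(c*t-x)); hold on; % the pulse translates rightward at speed c
end
hold off; xlabel('x [m]'); ylabel('\xi');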

Example: Determine the phase speeds for aluminum, steel, and titanium rods. Use the material properties in Table 4. For the steel rod, what would the density need to be for the phase speed to equal the speed of light?
Describe an example of why achieving such a property may be advantageous for an engineering system.
Table 4. Material properties

            aluminum    steel       titanium
ρ [kg/m3]   2800        7800        4500
Y [N/m2]    69 × 10^9   200 × 10^9  116 × 10^9

Answer: The phase speeds are found to be

Aluminum: c = (69 × 10^9 / 2800)^(1/2) = 4964 [m/s]

Steel: c = (200 × 10^9 / 7800)^(1/2) = 5064 [m/s]

Titanium: c = (116 × 10^9 / 4500)^(1/2) = 5077 [m/s]

For the steel rod to possess a phase speed (for wave propagation) equal to the speed of light ~ 300 [Mm/s], the density would need to be 200 × 10^9 / (300 × 10^6)^2 = 2.22 × 10^−6 [kg/m3], i.e. about 2.22 [mg/m3]. One conceivable application is an engineering material that could transmit elastic longitudinal waves at the speed of light, which is the "rate" at which electronics work. Thus, one could make a mechanical breadboard with such a material.
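
The tabulated phase speeds are computed in one line of MATLAB (a minimal sketch using the Table 4 properties):

rho=[2800 7800 4500]; % [kg/m^3] aluminum, steel, titanium (Table 4)
Y=[69e9 200e9 116e9]; % [N/m^2] Young's moduli (Table 4)
c=sqrt(Y./rho) % [m/s] phase speeds (4.1.13): ~4964, 5064, 5077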

4.2 Harmonic waves

To explore harmonic waves, we consider a semi-infinite rod subjected to a harmonic force with frequency ω and amplitude F. Figure 24 shows a schematic of the system. Because the rod is semi-infinite, x = [0, ∞), and because the harmonic force f(t) = F cos ωt is applied at x = 0, there are no means for negative-x traveling waves to occur. Thus, in the problem only positive-x traveling waves are considered.

Recall the one-dimensional wave equation (4.1.12) expressed again here for convenience

∂^2ξ(x, t)/∂x^2 = (1/c^2) ∂^2ξ(x, t)/∂t^2 (4.2.1)

At the boundary x = 0 , the harmonic force is applied such that the force on the rod is

f(t) = −SY ∂ξ(0, t)/∂x (4.2.2)

Based on the excitation force, we assume a space- and time-dependent solution of the form

ξ(x, t) = A sin[k(ct − x)] (4.2.3)

Figure 24. Semi-infinite rod with harmonic force at one end.

The assumed solution (4.2.3) has the argument ct − x and thus should satisfy the wave equation. Here, the
unknowns are A and k . By substitution of (4.2.3) into (4.2.1), we indeed find

−k^2 c^2 A sin[k(ct − x)] = −k^2 c^2 A sin[k(ct − x)] (4.2.4)


Thus, (4.2.3) satisfies (4.2.1). However, to determine the unknowns A and k , we must apply the boundary
condition at the end of the rod with the applied force. Likewise, by substituting (4.2.3) into (4.2.2), we find

F cos ωt = SYAk cos [ kct ] (4.2.5)

For (4.2.5) to be satisfied at all times, two properties must be established. First, the amplitude is found to
be

A = F / SYk (4.2.6)

Secondly, the cosine arguments must be the same, yielding

k = ω / c (4.2.7)

where k is termed the wavenumber and has units [1/m]. As a result, we express the solution to (4.2.1) using
(4.2.3) with the constants (4.2.6) and (4.2.7) substituted, to give

ξ(x, t) = (F/SYk) sin[ωt − kx] (4.2.8)

We now consider the solution (4.2.8) to the wave equation (4.2.1) in detail. For example, the sine argument
repeats every 2π . Thus, for a fixed location along the rod

ξ(0, t) = A sin[ωt] = A sin[ωt + 2π] = A sin[ωt + ωT] (4.2.9)

where we have introduced the term T to indicate the period of oscillation. From (4.2.9), it is apparent that
T = 2π / ω which gives that the period has units [s].
Similarly, for a fixed time, we have

ξ(x, 0) = A sin[kx] = A sin[kx + 2π] = A sin[kx + kλ] (4.2.10)

where we have introduced the term λ which is the wavelength. The wavelength is the spatial duration
between successive repeating phases, alternatively considered to be successive peaks of the deformation
amplitude when considering sinusoidal deformations. From (4.2.10), we find that λ = 2π / k , and thus the
wavelength has units [m]. Figure 25 shows a plot of the longitudinal deformation waves passing through a
rod over the duration of 20 micro-seconds [μs].


Figure 25. Snapshots of the wave shape of longitudinal rod vibrations across three distinct time increments.

In acoustical engineering contexts, the wavenumber and wavelength are repeatedly used in the design and
assessment of acoustic performance and functionality. It is important to emphasize the relations and
meanings of wavelength and wavenumber.

The wavelength λ is the spatial duration of a repeating harmonic wave shape, and for harmonic
deformations is easily identified as the distance between peaks.

The wavenumber k is the amount of phase change that occurs per unit distance.

The equation relations are summarized as

λ = 2π / k ; k = ω =
/ c ; λ 2=
πc /ω c / f (4.2.11)

As evident from the summary of (4.2.11), long wavelengths λ (large wavelength values) correspond to low
frequencies f (small frequency values) for the same phase speed c , which is only a function of material
properties (4.1.13). Small wavenumbers k also correspond to low frequencies. It is important to keep these
basic trending behaviors in mind in many acoustical engineering contexts.
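
For a quick sense of scale, the following MATLAB lines (a minimal sketch assuming air at room temperature) evaluate (4.2.11) across the audio range:

c=343; % [m/s] sound speed in air at room temperature
f=[20 200 2000 20000]; % [Hz] frequencies spanning the audio range
lambda=c./f % [m] wavelengths: ~17.2, 1.72, 0.17, 0.017
k=2*pi./lambda % [1/m] wavenumbers: small k at low frequency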

4.2.1 Harmonic waves in the complex representation


Consider now that the harmonic force is given by

f ( t ) = Fe jωt (4.2.1.1)

We then assume a time-harmonic response solution form to the wave equation (4.2.1) of

ξ ( x, t ) = Ae j (ωt − kx ) (4.2.1.2)

Specifically, (4.2.1.2) is time- and space-harmonic since the wavenumber/wavelength relationships


correspond to periodic spatial change of wave characteristics, such as here the longitudinal rod vibrations.

By substitution of (4.2.1.1) and (4.2.1.2) into (4.2.2), we find

Fe jωt = SYAjke jωt (4.2.1.3)

such that


A = F / jSYk = −j F / SYk (4.2.1.4)

Upon substitution of (4.2.1.4) into (4.2.1.2), we determine

ξ(x, t) = −j (F/SYk) e^(j(ωt − kx)) (4.2.1.5)

Given that the complex response (4.2.1.5) is excited by the real part of the input force (4.2.1.1), F cos ωt
, which is the actual input force, then the real part of (4.2.1.5) that corresponds to the actual deformation is
found from

ξ(x, t) = −j (F/SYk) {cos[ωt − kx] + j sin[ωt − kx]} = (F/SYk) sin[ωt − kx] − j (F/SYk) cos[ωt − kx] (4.2.1.6)

Re[ξ(x, t)] = ξ(x, t) = (F/SYk) sin[ωt − kx] (4.2.1.7)

where it is clear that (4.2.1.7) agrees with the result obtained via assuming the sinusoidal form of response
(4.2.8).

Example: Find the complex mechanical input impedance, Z(t) = f(t) / ξ̇(0, t), for the rod subjected to the complex force given in (4.2.1.1).

Answer: Using the result (4.2.1.5) and the input force (4.2.1.1), we find

Z(t) = F e^(jωt) / [jω (−j F/SYk) e^(jωt)] = SYk / ω

which can be simplified using k = ω/c and Y = ρc^2:

Z(t) = SYk / ω = SY (1/c) = S ρ c^2 (1/c) = S ρc

We observe that the mechanical input impedance for the harmonically driven longitudinal deformations of
the rod is purely resistive meaning that energy is only removed from the rod. This is intuitive because the
rod is infinitely long and input energy at x = 0 will only propagate as waves away from the driving force.

4.3 One-dimensional acoustic wave equation

Towards deriving the acoustic wave equation and its subsequent evaluation to characterize acoustical
performance and qualities of numerous engineering systems and applications, we must first define several
variables and constants. In the following, we assume that the fluid is lossless and inviscid meaning that
viscous forces are negligible, and assume that the fluid undergoes small (linear), relative displacements
between adjacent particles prior to a pressure change.


These fluid particles imply an infinitesimal volume of fluid large enough to contain millions of fluid
molecules so that within a given small fluid element the element may be considered as a uniform,
continuous medium having constant acoustic variables (defined below) throughout the element.

Of course, at smaller and smaller scales, the individual fluid molecules that make up the fluid element are
in constant random motion at velocities far in excess of the particle velocities associated with acoustic wave
propagation. Thus within any given fluid element, these molecules may in fact leave the element in an
infinitesimal duration of time. Yet, such molecules leaving are replaced by other molecules entering the
element. The consequence is that evaluating the relatively slow motions associated with wave propagation
does not need to account for the very small time-scale dynamics of the individual fluid molecules.

Because we consider only linear relative displacements between the fluid particles, our theoretical development is limited to accurate treatment of linear acoustic phenomena. The same applies to the consideration of pressure/density fluctuations, which are very small with respect to the ambient pressure/density values. While focused on linear contexts, these linear acoustic phenomena are the core of a significant proportion of all acoustic events in air-borne and water-borne acoustic applications. Thus, omitting nonlinear acoustic wave propagation characteristics in our analysis only inhibits evaluation of a small number of applications, including shock wave propagation and extremely high intensity acoustic waves.

Acoustic pressure p(x, t) is the pressure fluctuation around the equilibrium (atmospheric) pressure P0: p(x, t) = P(x, t) − P0. The SI units of pressure are Pascals [Pa = N/m2]. The atmospheric pressure is P0 ≈ 100 [kPa] near sea level. Here, x is the spatial dimension, limited currently to one dimension. The fluid particle displacement is ξ(x, t) while the fluid particle velocity is u(x, t) = ∂ξ(x, t)/∂t = ξ̇(x, t). The equilibrium (atmospheric) fluid density is ρ0, while the instantaneous fluid density is ρ(x, t).

Figure 26. Schematic of one-dimensional pipe with pressure variation due to piston motion at x=0.

To derive the acoustic wave equation, three important components are required:

• the equation of state
• the equation of continuity
• Euler's equation

Equation of state. The fluid must be compressible to yield changes in pressure. Therefore, the
thermodynamic behavior of the fluid must be considered. Assuming an ideal gas is our fluid, the pressure
is a function of density and for small pressure changes this can be expressed via a Taylor series

P = P0 + (∂P/∂ρ)|ρ0 (ρ − ρ0) + (1/2)(∂^2P/∂ρ^2)|ρ0 (ρ − ρ0)^2 + ... (4.3.1)

Because the pressure and density fluctuations within the fluid element associated with a significant
proportion of acoustic sound pressure levels are so small, the terms in (4.3.1) of order 2 and greater are
insignificantly small, thus

P ≈ P0 + (∂P/∂ρ)|ρ0 (ρ − ρ0) (4.3.2)

which is the linear approximation of the pressure change from the ambient condition. From (4.3.2) we rearrange to yield

P − P0 ≈ {ρ0 (∂P/∂ρ)|ρ0} [(ρ − ρ0)/ρ0] (4.3.3)

The term on the right-hand side of (4.3.3) in brackets [ ] is called the condensation s = (ρ − ρ0)/ρ0. The condensation is the relative deviation of fluid element density from a reference value. The term on the right-hand side in the curly brackets { } is the adiabatic bulk modulus B. Finally, as already defined, the term on the left-hand side is the acoustic pressure.

p = Bs (4.3.4)

Equation (4.3.4) is the equation of state. It may be thought of as "Hooke's law for fluids", since it relates a
fluid "stress" (pressure) to the fluid "strain" (condensation) through the fluid "stiffness" (bulk modulus). A
particular mathematical difference from the solid mechanics analogy is that (4.3.4) is a scalar equation since
pressure has no directionality within a differential element.

Equation of continuity. Within an unchanging, defined volume of space, pressure changes cause mass to
flow in and out of the volume. By conservation of mass, the net rate with which mass flows into the volume
through the enclosing surfaces of the volume must be equal to the rate with which the mass increases within
the volume. Figure 26(d) illustrates the scenario. A constant volume element of a duct is enclosed by two
constant cross-sectional areas S spanning a differential length dx . The mass flow rate in the left-most
volume surface is

S ρu (4.3.5)


while the mass leaving the volume on the right-most volume surface is approximated by a Taylor series expansion to be

S [ρu + (∂(ρu)/∂x) dx] (4.3.6)

The net rate of mass influx is therefore

S ρu − S [ρu + (∂(ρu)/∂x) dx] = −S (∂(ρu)/∂x) dx (4.3.7)

The rate at which the mass in the control volume changes is equal to

S (∂ρ/∂t) dx (4.3.8)

Thus, by conservation of mass, we obtain the equation of continuity (4.3.10)

−S (∂(ρu)/∂x) dx = S (∂ρ/∂t) dx (4.3.9)

∂ρ/∂t + ∂(ρu)/∂x = 0 (4.3.10)

Thus, by (4.3.9), the time rate of change of mass in the control volume is equal to the net influx of mass in
the same time duration. Extended to three dimensions, the equation of continuity becomes

∂ρ/∂t + ∇·(ρu) = 0 (4.3.11)

where ∇ ⋅ is the divergence operator (see Appendix A7 of [1] for a review of how differential operators
act on scalars and vectors) and the overbar on the particle velocity represents the fact that it is a vector
quantity in three dimensions: u = u x x + u y y + u z z where x , y , and z are the unit vectors, respectively.

For small (linear) changes of acoustic pressure, the linearized equation of continuity (4.3.10) is

∂s/∂t + ∂u/∂x = 0 (4.3.12)

Equation (4.3.12) is the linear continuity equation in one dimension. Recall that the continuity equation is
developed from a conservation of mass within a defined volume of space. In three dimensions the equation
is

∂s/∂t + ∇·u = 0 (4.3.13)

The condensation s is the relative difference between the instantaneous fluid density and the equilibrium
density


s = (ρ − ρ0) / ρ0 (4.3.14)

Euler's equation. Unlike for the continuity equation, we now focus on a fluid element that is deformed in
consequence to the pressure differences between the left-most and right-most faces, see Figure 26(a,b,c).

The pressure on the left-most face is P(x, t) while, by a Taylor series approximation, the pressure on the right-most face is P(x, t) + (∂P(x, t)/∂x) dx. Thus, the force difference between left and right faces is

P(x, t) − [P(x, t) + (∂P(x, t)/∂x) dx] = −(∂P(x, t)/∂x) dx (4.3.15)

This force accelerates the fluid within the changing volume, such that an application of Newton's second
law (a force balance) yields

−S (∂P(x, t)/∂x) dx = S ρ0 (∂^2ξ(x, t)/∂t^2) dx (4.3.16)

−∂P/∂x = ρ0 ∂u/∂t (4.3.17)

where we have simplified the notation in (4.3.17) by using the particle velocity rather than the particle displacement and have dropped the repeated arguments (x, t) for conciseness. In addition, the acceleration of the fluid is linearized in (4.3.16) and (4.3.17) [1], which is the relevant range of fluid dynamics considered in many applications of acoustics. Noting that P = P0 + p and that ∂P0/∂x = 0, equation (4.3.17) may be simplified

−∂p/∂x = ρ0 ∂u/∂t (4.3.18)

Equation (4.3.18) is the linear Euler's equation. In three dimensions, the derivation yields

−∇p = ρ0 ∂u/∂t (4.3.19)

4.3.1 Consolidating the components to derive the one-dimensional acoustic wave equation
To summarize for one-dimensional acoustic waves

equation of state: p = Bs (4.3.1.1)

equation of continuity: ∂s/∂t + ∂u/∂x = 0 (4.3.1.2)

Euler's equation: −∂p/∂x = ρ0 ∂u/∂t (4.3.1.3)


First, we take the spatial derivative of (4.3.1.3)

−∂^2p/∂x^2 = ρ0 ∂^2u/∂x∂t (4.3.1.4)

Second, we take the time derivative of (4.3.1.2) and multiply by the equilibrium density

ρ0 ∂^2s/∂t^2 = −ρ0 ∂^2u/∂x∂t (4.3.1.5)

Then, we add (4.3.1.4) and (4.3.1.5) to yield

∂^2p/∂x^2 = ρ0 ∂^2s/∂t^2 (4.3.1.6)

Using (4.3.1.1), we have s = p / B and by substitution into (4.3.1.6), we obtain the one-dimensional
acoustic wave equation (4.3.1.8) for plane waves in the x-axis

∂^2p/∂x^2 = (ρ0/B) ∂^2p/∂t^2 (4.3.1.7)

∂^2p/∂x^2 = (1/c^2) ∂^2p/∂t^2 (4.3.1.8)

where the sound speed is c = (B/ρ0)^(1/2). Plane waves indicate that the wave travels strictly in one direction
and there are no pressure variations through a cross-section that is perpendicular to the axis of wave
propagation. In three dimensions, the acoustic wave equation is

∇^2 p = (1/c^2) ∂^2p/∂t^2 (4.3.1.9)

The operator ∇ 2 is the Laplacian operator and is unique with respect to the coordinate system under
consideration, see Appendix A7 of [1]. Figure 27 illustrates the differences between plane wave and
cylindrical wave propagation. To summarize, plane waves are one-dimensional waves with no mean change
in acoustic pressure through the cross-section normal to the wave propagation axis, and these waves are
governed by (4.3.1.8). Cylindrical waves and spherical waves exhibit a spreading effect away from the
sound source and are governed by (4.3.1.9) according to the appropriate Laplacian operator. The equation
relevant to plane waves, (4.3.1.8), is merely a limiting case of (4.3.1.9) by simplification of the Laplacian
operator in a Cartesian coordinate system.

As shown in Figure 27, although the fluid particles individually oscillate with a particle velocity (a few
examples are shown as red dots), the transmission of the wave is not subject to the uniform transmission of
fluid particles. This is the significant difference between wave propagation in acoustics and the vibrations
of mechanical systems.


Figure 27. Differences between plane wave propagation (top) and cylindrical wave propagation (bottom) shown for
different times ti . Made using Mathematica notebook http://library.wolfram.com/infocenter/MathSource/780/. Also see
http://www.acs.psu.edu/drussell/Demos/waves/wavemotion.html

To summarize the developments and assumptions, the equations (4.3.1.8) and (4.3.1.9) here derived are the
linear (small pressure changes), lossless (inviscid) acoustic wave equation in one and three dimensions,
respectively.

For ideal gases undergoing adiabatic compression, the bulk modulus is

B = ρ0 (∂P/∂ρ)|ρ0 = γ P0 (4.3.1.10)

where γ is the ratio of specific heats which is specific to a given gas. For air, γ = 1.40 [dim]. In general
P0 / ρ 0 is almost independent of pressure so that the sound speed is primarily a function of temperature.
Thus, an alternative derivation of the sound speed for air yields

c = (γ r TK)^(1/2) = c0 (1 + TC/273)^(1/2) (4.3.1.11)

where r is the specific gas constant (for air, r = 287 [J/kg.K]), TK is the absolute temperature in [K], and TC is the temperature in degrees [°C]. c0 is the sound speed in air at 0 [°C], which is approximately c0 = 331.5 [m/s] at 1 [atm] pressure (sea level). Appendix A10 of [1] provides a table of material properties for common gases. In particular,
the sound speed in air at sea level and at room temperature is approximately 343 [m/s].


4.4 Harmonic, plane acoustic waves

The one-dimensional acoustic wave equation for plane waves propagating in the x-axis (4.3.1.8) is similar
to the one-dimensional wave equation determined for longitudinal harmonic deformations of the rod (4.2.1).
Plane waves imply that no change of acoustic pressure occurs in a cross-section perpendicular to the
direction of wave propagation. Indeed, for plane waves, all of the acoustic variables -- the acoustic pressure,
particle velocity, and condensation -- are constant in the cross-section of the domain that is perpendicular
to the direction of wave propagation.

As for waves of longitudinal harmonic deformation in the rod, we can anticipate that the complex
exponential form of harmonic solution that satisfied (4.2.1) will also satisfy (4.3.1.8).

Therefore, we assume

p(x, t) = A e^(j(ωt − kx)) + B e^(j(ωt + kx)) (4.4.1)

The particle velocity is determined from Euler's equation (4.3.1.3)

−∂p/∂x = ρ0 ∂u/∂t → u(x, t) = −(1/ρ0) ∫ (∂p/∂x) dt = −(1/ρ0) ∫ [−jkA e^(j(ωt − kx)) + jkB e^(j(ωt + kx))] dt

u(x, t) = −(jk/jωρ0) [−A e^(j(ωt − kx)) + B e^(j(ωt + kx))] = (1/ρ0c) [A e^(j(ωt − kx)) − B e^(j(ωt + kx))] (4.4.2)

The second line of (4.4.2) exemplifies the time-harmonic form of Euler's equation for plane waves, which is written −∂p/∂x = jωρ0 u = jk ρ0 c u.

Using that k = ω / c and expressing p + = Ae j (ωt − kx ) and p − = Be j (ωt + kx ) , the particle velocity components
in the positive + and negative - directions are found to be

u ± = ±p ± / ρ 0 c (4.4.3)

Breaking (4.4.3) up, we see u + = p + / ρ0 c while u − = −p − / ρ0 c .

The specific acoustic impedance is the ratio of acoustic pressure to fluid particle velocity and, like for
mechanical vibrations, is a measure of resistance and reactance of the fluid media to inhibit or assist the
propagation of waves:

z=p/u (4.4.4)

Therefore, for plane waves,

z = p± / u± = p± / (±p± / ρ0c) = ±ρ0c (4.4.5)

the specific acoustic impedance is purely real and dependent upon the direction of wave travel. The
interpretation of a purely real impedance for traveling waves is that energy is transferred without return
from a previous source or origin. Impedances with imaginary components indicate reciprocating energy.


This description is in agreement with the damped harmonic oscillator impedance in Sec. 3.7 where
imaginary impedance components, the reactance, are associated with the inertial and spring forces, whereas
the real impedance components, the resistance, is associated only with the damping forces.

For air at 20 [°C], the specific acoustic impedance is ρ 0 c =415 [Pa.s/m]. These units are sometimes referred
to as rayls in honor of Lord Rayleigh, formerly John William Strutt. Rayleigh's "The Theory of Sound"
from 1894 [26] is a remarkable treatise on acoustics. The book is in the public domain since the copyright
has expired, so it is a worthy download https://en.wikipedia.org/wiki/John_William_Strutt,_3rd_Baron_Rayleigh

Because the product of atmospheric density and sound speed equals the impedance, the combination is
typically more important in acoustics applications than the individual values alone.

Let's extend the plane wave analysis to forced excitations of air in a duct. Consider a duct of infinite length,
filled with air, and harmonically excited by a rigid piston on one end, Figure 28. The piston moves with a

displacement defined by d ( t ) = De jωt where D is the harmonic displacement amplitude. Because the duct
has infinite length, plane waves only propagate in the +x direction. Thus

p ( x, t ) = Ae j (ωt − kx ) (4.4.6)

We are interested to determine how "loud" the pressure waves will be (i.e. pressure amplitude) in
consequence to a known displacement amplitude D of the piston.

To determine this, we must apply the boundary condition that indicates that air particle velocity at the piston
surface must be equal to the piston velocity (presuming that no cavitation and shocks occur in the duct).

We therefore use Euler's equation (4.3.1.3), knowing that u ( 0, t ) = jω De jωt ,

−∂p(0, t)/∂x = ρ0 ∂u(0, t)/∂t (4.4.7)

−(−jk) A e^(jωt) = ρ0 (−ω^2) D e^(jωt) → A = j (ρ0 ω^2 / k) D = jωρ0cD (4.4.8)

Therefore, the acoustic pressure in the duct is

Re p ( x, t )  = Re  jωρ 0 cDe j (ωt − kx )  = −ωρ 0 cD sin [ωt − kx ] (4.4.9)


Figure 28. Plane waves in a duct excited by a piston.

Example: What is the amplitude of the acoustic pressure in a duct with an end-firing piston harmonically
oscillating with amplitude of displacement D =20 [μm] and at a frequency f =120 [Hz]? Assume the air
temperature is 20 [°C].

Answer: The amplitude of the acoustic pressure is |p(x, t)| = ωρ0cD. Using the provided values, and seeing above that the specific acoustic impedance ρ0c of air at 20 [°C] is 415 [Pa.s/m], we find that

(2π × 120)(415)(20 × 10^−6) = 6.26 [Pa].
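
The arithmetic is verified in MATLAB (a minimal sketch with the given example values):

D=20e-6; f=120; rho0_c=415; % [m], [Hz], [Pa.s/m] given values
P=2*pi*f*rho0_c*D % [Pa] acoustic pressure amplitude, ~6.26 [Pa]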

4.5 Acoustic intensity

As observed in (4.4.5) by the lack of imaginary terms, acoustic pressure and the particle velocity are in
phase for plane waves. As a result, acoustic power is transmitted.

The instantaneous intensity I ( t ) = pu of an acoustic wave is the rate per unit area at which work is done
by one element of the acoustic fluid on an adjacent fluid element. Note that in general, the particle velocity

is a vector u and thus the intensity is also vector quantity. The units of I ( t ) are [W/m2]. The average

acoustic intensity I is the time average of I(t)

I = ⟨I(t)⟩_T = ⟨pu⟩_T = (1/T) ∫0^T pu dt = (1/T) ∫0^T Re[p] Re[u] dt (4.5.1)

where, for harmonic waves with angular frequency ω, the period is T = 2π / ω , and the integration
considers the real components of pressure and particle velocity.

In general throughout this course, unless otherwise specified, use of the term acoustic intensity refers to the
time-averaged version (4.5.1) and not the instantaneous measure of intensity.


For plane waves, the intensity is one-dimensional so that we drop the overbar, thus recognizing that "travel"
in the opposing direction is indicated by a change of sign:

I = ⟨p^2⟩/ρ0c = ρ0c ⟨u^2⟩ (4.5.2)

Using the relations for mean-square, RMS, and amplitude, the intensity may be written as

I = prms urms = prms^2/ρ0c = P^2/2ρ0c (4.5.3)

Then, for a specific problem under consideration, the acoustic pressure amplitude P is found by applying
a boundary condition, for instance respecting an acoustic source in a duct (see above).

In general, when using the complex exponential representation of acoustic variables, thus considering time-
harmonic oscillations, the time-harmonic [average] intensity is computed from

I = (1/2) Re[p u*] (4.5.4)

where the asterisk denotes the complex conjugate, the vector notation on particle velocity is retained, and
the 1/2 multiple results from the time-harmonic integration. As such, the computation of (4.5.4) may omit
time in the final expression of intensity. Also, this time-harmonic intensity is only a vector if u spans more
than one dimension.

Note that for a complex function x, (1/2) Re[x x*] = xrms^2 = (1/2)|x|^2.
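
Continuing the duct example of Sec. 4.4 as an illustration (a minimal sketch; the 6.26 [Pa] amplitude is carried over from that example), the plane wave intensity follows from (4.5.3) or equivalently (4.5.4):

P=6.26; rho0_c=415; % [Pa] amplitude from the duct example; [Pa.s/m] air at 20 [C]
I=P^2/(2*rho0_c) % [W/m^2] time-averaged intensity (4.5.3), ~0.047 [W/m^2]
p=P; u=P/rho0_c; % complex amplitudes; in phase for plane waves
I_check=0.5*real(p*conj(u)) % same result via (4.5.4)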

4.6 Harmonic, spherical acoustic waves

For spherically symmetric sound fields, use of the appropriate Laplacian operator in (4.3.1.9) and use of
the chain rule of differentiation allows us to express the wave equation as

∂^2(rp)/∂r^2 = (1/c^2) ∂^2(rp)/∂t^2 (4.6.1)

It is apparent that this equation is similar to (4.3.1.8) with the variable mapping p ↔ rp . Thus, we can use
the same, general solution to the wave equation via

rp = f1(ct − r) + f2(ct + r) (4.6.2)

Then, we express the acoustic pressure, without loss of generality as

p(r, t) = (1/r) f1(ct − r) + (1/r) f2(ct + r) (4.6.3)

which is valid for all r > 0 . For the first term of (4.6.3), the waves are outgoing, from the origin r = 0 , and
spread in greater and greater areas from the origin. For the second term of (4.6.3), the waves are incoming


and increase in intensity approaching an origin. In this second case, there are few examples in linear
acoustics that pertain to such self-focusing of sound energy by a wavefield converging to an origin. So we
hereafter neglect the second term of (4.6.3).

By and large, the most important spherically-spreading acoustic waves are harmonic, in which case we may
assume a solution to (4.6.1) of the form

p(r, t) = (A/r) e^(j(ωt − kr)) (4.6.4)

Using Euler's equation (4.3.19) applicable to all coordinate systems, and knowing (Appendix A7 [1]) that ∇p = r̂ ∂p/∂r + θ̂ (1/r) ∂p/∂θ + φ̂ (1/(r sin θ)) ∂p/∂φ, the particle velocity is derived as

u(r, t) = [1 − j/(kr)] p(r, t)/ρ0c (4.6.5)

where the time-harmonic form of Euler's equation, −∂p/∂r = jωρ0 u = jk ρ0 c u, may also be used to derive the same result.

Using (4.6.4) and (4.6.5), the specific acoustic impedance for spherical waves is found to be

z = p/u = ρ0c jkr/(1 + jkr) = ρ0c [kr/(1 + (kr)^2)^(1/2)] e^(jθ) = ρ0c cos θ e^(jθ) (4.6.6)

cot θ = kr (4.6.7)

Importantly, unlike for plane waves that exhibit a specific acoustic impedance of z = ρ 0 c , the specific
acoustic impedance of spherical waves is a complex quantity, suggesting both energy transfer (real part)
and energy exchange (imaginary part). As discussed in Sec. 4.7, the relative balance of these two quantities
is primarily dependent upon the non-dimensional term kr .

The magnitude of the impedance is

|z| = P/U = ρ0c cos θ (4.6.8)

such that the relation between the pressure and particle velocity amplitudes is

P = ρ 0 cU cos θ (4.6.9)

Considering an alternative expression for the spherical wave pressure (4.6.4) by making the amplitude A
real, we have

p(r, t) = (A/r) e^(j(ωt − kr)) (4.6.10)


such that the real component of the pressure, which is the actual acoustic pressure, is

\[ p(r,t) = \frac{A}{r}\cos(\omega t - kr) \tag{4.6.11} \]

Using (4.6.8), the particle velocity amplitude is

\[ U = \frac{1}{\rho_{0}c}\,\frac{A}{r}\,\frac{1}{\cos\theta} \tag{4.6.12} \]

while, by (4.6.6), the particle velocity is shifted in phase with respect to the acoustic pressure by an amount
θ . Thus, the real component of the particle velocity is
\[ u(r,t) = \frac{1}{\rho_{0}c}\,\frac{A}{r}\,\frac{1}{\cos\theta}\cos(\omega t - kr - \theta) \tag{4.6.13} \]

4.6.1 Spherical wave acoustic intensity and acoustic power


The acoustic intensity for spherical waves is computed in the same manner as for plane waves: it is the time average of the work per unit area that fluid particles exert on neighboring particles:
\[ I = \frac{1}{T}\int_{0}^{T} P\cos(\omega t - kr)\,U\cos(\omega t - kr - \theta)\,dt \tag{4.6.1.1} \]

Carrying out the integration (4.6.1.1) shows that

\[ I = \frac{P^{2}}{2\rho_{0}c} \tag{4.6.1.2} \]

which is the same as for plane waves. Indeed, (4.6.1.2) holds exactly for both plane and spherical waves.

On the other hand, what the amplitude P constitutes is not the same between plane and spherical waves. As shown above, the pressure amplitude for plane waves is not a function of distance from the acoustic source plane. In contrast, for spherical waves the pressure amplitude is inversely proportional to the radial distance from the acoustic source origin (4.6.11). As a result, we express (4.6.1.2) as \( I = A^{2}/\left(2\rho_{0}cr^{2}\right) \), which shows that the acoustic intensity for spherical waves varies in proportion to \( 1/r^{2} \). This means that the energy radiated to a field point a distance r from the spherical wave origin (source) is reduced in proportion to the squared distance between source and point, and explains why sounds decay as the waves travel away from the origin point[s].

In general, the acoustic power (sometimes also termed sound power) is the sound energy per time radiated
by an acoustic source, and is defined according to the intensity via

\[ \Pi = \int_{S}\vec{I}\cdot\vec{n}\,dS \tag{4.6.1.3} \]

where S is a surface that encloses the sound source and n is the unit normal to the surface. Thus the dot
product in (4.6.1.3) refers to the component of the intensity vector that is normal to the enclosing surface


under consideration. For a given radial distance from a source of sound, if the intensity has the same
magnitude, then the acoustic power is simply \( \Pi = IS \).

For plane and spherical waves, only one direction of wave propagation is considered, either a single axis of
one-dimensional motion or a radial direction spreading, respectively. For spherical waves, the area through
which the sound spreads is 4π r 2 . Thus, the acoustic power is
\[ \Pi = 4\pi r^{2}I = 4\pi r^{2}\,\frac{p_{rms}^{2}}{\rho_{0}c} \tag{4.6.1.4} \]
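As a minimal numerical sketch (the amplitude constant below is assumed for illustration), the 1/r² decay of intensity and the distance independence of the acoustic power follow directly from (4.6.1.2) and (4.6.1.4):

rho0c=415; % [Pa.s/m] specific acoustic impedance of air at 20 C
A=1; % [Pa.m] assumed spherical wave amplitude constant
r=[1 2 4 8]; % [m] radial distances
P=A./r; % [Pa] pressure amplitude per (4.6.11)
I=P.^2/(2*rho0c); % [W/m^2] intensity per (4.6.1.2); decays as 1/r^2
Pi=4*pi*r.^2.*I % [W] acoustic power per (4.6.1.4); identical at every r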

4.7 Comparison between plane and spherical waves

A comparison of the plane wave acoustic pressure and particle velocity with those corresponding values
for spherical waves shows significant differences. It is instructive to consider the term kr which can be

expressed, using (4.2.11), as \( (2\pi/\lambda)r \). Thus, when kr < 1, the observation radial position of the acoustic pressure and velocity variables is approximately within one acoustic wavelength from the sound origin.

Thus, when kr < 1, the imaginary component of the spherical wave particle velocity (4.6.5) is as important as or more significant than the real component. This means there is a phase difference between increases/decreases in the pressure and particle velocity for spherical waves near the origin of the sound source from which the waves propagate. As a result, the specific acoustic impedance of spherical waves near the sound source contains real and imaginary components, Figure 29 and (4.6.6).

For radial distances far from the acoustic origin, kr >> 1, the imaginary component of the particle velocity (4.6.5) is insignificantly small, and the spherical wave particle velocity is in phase with the acoustic pressure.
Figure 29 shows this trend that the imaginary contribution of the specific acoustic impedance of spherical
waves converges to zero for kr >> 1 while the real component approaches that of plane waves.
Figure 29. Specific acoustic impedance normalized by the plane wave specific acoustic impedance ρ0c, plotted versus kr (normalized distance from acoustic source origin): plane wave (constant at 1), real part of the spherical wave impedance, and imaginary part of the spherical wave impedance.
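A short MATLAB sketch (not the original course script) that reproduces the trends of Figure 29 directly from (4.6.6):

kr=linspace(0.01,10,500); % normalized distance from acoustic source origin
z=1j*kr./(1+1j*kr); % spherical wave specific acoustic impedance / (rho_0*c), per (4.6.6)
plot(kr,real(z),'b',kr,imag(z),'r',kr,ones(size(kr)),'k--');
xlabel('kr, normalized distance from acoustic source origin');
ylabel('specific acoustic impedance normalized by \rho_0c');
legend('real(spherical wave)','imag(spherical wave)','plane wave');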

Comparing plane and spherical acoustic waves, we refer to Table 5.


Table 5. Comparison between plane and spherical acoustic waves

plane waves | spherical waves
radiate in direction x | radiate in direction r
pressure amplitude is constant as distance x changes | pressure amplitude decreases as ∝ 1/r
acoustic intensity is proportional to p²rms | acoustic intensity is proportional to p²rms and decreases as ∝ 1/r²
pressure and particle velocity are in phase | pressure and particle velocity are only in phase for kr >> 1, and in general are not in phase
the specific acoustic impedance is real | the specific acoustic impedance is real for kr >> 1, and in general is complex

Thus, for point acoustic sources (infinitesimally small radiating spheres), we refer to the acoustic far field as the location in space where spherical wave characteristics approach those of plane waves, where kr >> 1. In the acoustic far field, spherical waves have the following characteristics:

• the acoustic pressure and the particle velocity are in phase
• the acoustic intensity is in the radial direction r
• the specific acoustic impedance is purely real and equal to ρ0c

In general, the acoustic far field refers to a distance between the source and the receiving point of acoustic pressure waves that is long with respect to the acoustic wavelength. This is evident by \( kr = 2\pi r/\lambda = 2\pi(r/\lambda) \gg 1 \).

4.8 Decibels and sound levels

On a typical day, you may directly encounter many orders of magnitude in acoustic pressure changes. The human ear can detect acoustic pressures as little as 20 [μPa] and as great as 60 [Pa] (at the point of feeling actual pain) across the frequency bandwidth of 20 [Hz] to 20 [kHz], which is known as the acoustic frequency bandwidth [13].

Table 6 lists a typical range of acoustic pressures for various acoustic sources. This large range of pressures
is difficult to characterize using linear scales, particularly because energy-based acoustic quantities are
proportional to the square of pressure. Thus, in acoustics, we very often use logarithmic scales.

The sound pressure level is defined as
\[ \mathrm{SPL} = 10\log_{10}\frac{p_{rms}^{2}}{p_{ref}^{2}} \tag{4.8.1} \]

where pref is the reference sound pressure and is often taken to be pref =20 μPa for sound in air. The units
of SPL are decibels [dB], in honor of Alexander Graham Bell, the inventor of the telephone and other


novelties. It is common to refer to this unit phonetically: saying "dee bee". Equation (4.8.1) may also be written as
\[ \mathrm{SPL} = 20\log_{10}\frac{p_{rms}}{p_{ref}} \tag{4.8.2} \]

Finally, recall that \( p_{rms} = P/\sqrt{2} \), where P is the amplitude of the acoustic pressure. The sound pressure levels for the acoustic sources in Table 6 are given in the right-most column, corresponding to the pressures in [Pa] in the adjacent column. The RMS pressure in (4.8.2) may be associated with a single frequency or with many frequencies via averaging (more on this matter in the next section), in which case we use (3.10.19) to determine the total RMS pressure.
Table 6. Range of acoustic pressures and sound pressure levels (SPL) of various sound sources and distances from them.
Data compiled from https://en.wikipedia.org/wiki/Sound_pressure

Source of sound | Distance | Acoustic pressure [Pa] | Sound pressure level [dB]
.30-06 rifle being fired | 1 m to side | 7,265 | 171
Jet engine | 1 m | 632 | 150
Threshold of pain | At ear | 63–200 | 130–140
Trumpet | 0.5 m | 63.2 | 130
Risk of instantaneous noise-induced hearing loss | At ear | 20 | 120
Jet engine | 100 m | 6.32–200 | 110–140
Jack hammer | 1 m | 2 | 100
Traffic on a busy roadway | 10 m | 0.2–0.632 | 80–90
Hearing damage (over long-term exposure, need not be continuous) | At ear | 0.356 | 85
Passenger car | 10 m | (2–20)×10⁻² | 60–80
EPA-identified maximum to protect against hearing loss and other disruptive effects from noise, such as sleep disturbance, stress, learning detriment, etc. | Ambient | 6.32×10⁻² | 70
TV (set at home level) | 1 m | 2×10⁻² | 60
Normal conversation | 1 m | (2–20)×10⁻³ | 40–60
Very calm room | Ambient | (2–6.32)×10⁻⁴ | 20–30
Light leaf rustling, calm breathing | Ambient | 6.32×10⁻⁵ | 10
Auditory threshold at 1 kHz | At ear | 2×10⁻⁵ | 0
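The pressure-to-SPL correspondences in Table 6 follow from (4.8.2); a brief MATLAB check on a few rows (a sketch using the tabulated values):

p_ref=20e-6; % [Pa] reference sound pressure
p=[7265 632 63.2 2 0.2 2e-5]; % [Pa] selected pressures from Table 6
SPL=20*log10(p/p_ref) % [dB] yields approximately 171, 150, 130, 100, 80, 0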


In addition to sound pressure level, we are also often interested in the sound power level
\[ L_{\Pi} = 10\log_{10}\frac{\Pi}{\Pi_{ref}} \tag{4.8.3} \]
where \( \Pi_{ref} = 10^{-12} \) [W]. A less commonly used sound level is the intensity level
\[ \mathrm{IL} = 10\log_{10}\frac{I}{I_{ref}} \tag{4.8.4} \]
where \( I_{ref} = 10^{-12} \) [W/m²].

Considering the relations between acoustic pressure, intensity, and power, equations (4.6.9), (4.6.1.2), and (4.6.1.4), the sound levels for spherical waves may be collectively related through the following expressions
\[ L_{\Pi} = 10\log_{10}\frac{I}{I_{ref}} + 10\log_{10}4\pi r^{2} \tag{4.8.5} \]
\[ L_{\Pi} = 20\log_{10}\frac{p_{rms}}{p_{ref}} + 10\log_{10}\frac{4\pi r^{2}p_{ref}^{2}}{\Pi_{ref}\,\rho_{0}c} = \mathrm{SPL} + 20\log_{10}r + 11 \tag{4.8.6} \]

Example: What is the SPL of the acoustic pressure in a duct with an end-firing piston harmonically
oscillating with amplitude of displacement D =20 [μm], at a frequency f =120 [Hz]? Assume the air
temperature is 20 [°C].

Answer: The amplitude of the acoustic pressure is \( P = \omega\rho_{0}cD \). Using the provided information, and seeing above that the specific acoustic impedance \( \rho_{0}c \) of air at 20 [°C] is 415 [Pa·s/m], we find that \( P = (2\pi\,120)(415)(20\times10^{-6}) = 6.258 \) [Pa]. Thus, the RMS pressure is 4.425 [Pa]. The SPL is found to be 107 [dB]. Note that there is no spreading of acoustic energy for these plane waves, and thus the sound pressure level does not diminish with distance along the duct. The sound power level \( L_{\Pi} \) would be computed by a few steps. First, the intensity (4.6.1.2) is \( I = P^{2}/2\rho_{0}c \). The sound power is then the intensity multiplied by the area through which the sound propagates, which is the duct area, say S. Then the sound power level \( L_{\Pi} \) would be computed from (4.8.3).
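A few-line MATLAB sketch of this duct example computation (values repeated from the example statement):

rho0c=415; % [Pa.s/m] specific acoustic impedance of air at 20 C
D=20e-6; % [m] piston displacement amplitude
f=120; % [Hz] oscillation frequency
P=2*pi*f*rho0c*D % [Pa] pressure amplitude, approx. 6.258
p_rms=P/sqrt(2) % [Pa] approx. 4.425
SPL=20*log10(p_rms/20e-6) % [dB] approx. 107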

Example: For a spherical acoustic wave, at 1 [m] the SPL is 80 [dB]. What is the SPL at 2 [m]? At 4 [m]?

Answer: At 1 [m], the pressure amplitude is approximately \( \sqrt{2}\,p_{ref}\,10^{\mathrm{SPL}/20} = 0.2828 \) [Pa]. Considering (4.6.11), the pressure amplitude at 2 [m] will be one-half of what it is at 1 [m], while at 4 [m] the amplitude will be one-quarter of what it is at 1 [m]. Thus, the corresponding SPLs will be 74 [dB] and 68 [dB], respectively. It is seen that with each doubling in distance, the SPL for a spherical acoustic wave reduces by 6


[dB] while the pressure is cut in half. Note that by (4.8.3), the sound power level \( L_{\Pi} \) remains the same at all distances for one source encompassed in a given volume, since the reduction of SPL in (4.8.6) is counterbalanced by the \( 20\log_{10}r \) term that increases as r increases.
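The distance dependence in this example reduces to a one-line MATLAB computation (a sketch of the spherical spreading rule above):

r=[1 2 4]; % [m] radial distances
SPL=80-20*log10(r) % [dB] yields 80, 74, 68: a 6 dB drop per doubling of distance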

4.8.1 Combining sound pressure levels


Because of the time-harmonic characteristic, acoustic pressures (4.6.10) combine in the same way as
oscillations, see Sec. 3.10. As shown for the SPL (4.8.2) and sound power level LΠ (4.8.3), the RMS
pressures are needed in the computation. Thus, the determination of a total SPL or total LΠ due to a
collection of acoustic sources requires a first determination of a total RMS pressure.

Example: Find the total SPL for three incoherent sources that have individual SPLs of 74, 83, and 80 [dB].

Answer: The individual RMS pressures corresponding to these SPLs are
\[ 74 = 20\log_{10}\frac{p_{rms}}{20\times10^{-6}} \;\rightarrow\; p_{rms} = 0.1002\ \mathrm{[Pa]} \]
\[ 83 = 20\log_{10}\frac{p_{rms}}{20\times10^{-6}} \;\rightarrow\; p_{rms} = 0.2825\ \mathrm{[Pa]} \]
\[ 80 = 20\log_{10}\frac{p_{rms}}{20\times10^{-6}} \;\rightarrow\; p_{rms} = 0.2000\ \mathrm{[Pa]} \]
The total RMS acoustic pressure is then found using (3.10.19): \( p_{rms,total} = \left[0.1002^{2} + 0.2825^{2} + 0.2000^{2}\right]^{1/2} = 0.3603 \) [Pa]. Therefore, the total SPL is
\[ \mathrm{SPL}_{total} = 20\log_{10}\frac{0.3603}{20\times10^{-6}} = 85.1\ \mathrm{[dB]} \]
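Because \( p_{rms}^{2} \propto 10^{\mathrm{SPL}/10} \), the intermediate step of computing each RMS pressure can be bypassed; a MATLAB sketch of this shortcut, which is equivalent to summing the squared RMS pressures per (3.10.19):

SPL_i=[74 83 80]; % [dB] individual incoherent source levels
SPL_total=10*log10(sum(10.^(SPL_i/10))) % [dB] approx. 85.1, matching the RMS route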


5 Elementary acoustic sources and sound propagation characteristics

Although many important contexts involve plane acoustic waves, such as sound propagation in ducts, by
and large the predominant type of acoustic energy sources generate spherical waves. The spreading of
spherical waves according to the distance r from an origin or source distinguishes this wave type from
plane waves. As a result, sound-spreading phenomena are critical to identify and understand. This section
explores the characteristics of elementary acoustic sources that generate spherical waves.

5.1 Monopole and point acoustic sources

The general solution (4.6.4) to the acoustic wave equation assuming symmetrically spreading spherical
waves is repeated again here for convenience

\[ p(r,t) = \frac{A}{r}\,e^{j(\omega t - kr)} \tag{5.1.1} \]

Consider that a sphere of radius a harmonically oscillates in its radial direction with a velocity of Ue jωt ,
Figure 30, and therefore has a displacement amplitude U / ω . This acoustic source is called the monopole.
Imagine this to be a spherical balloon being expanded and contracted uniformly in the radial dimension
according to the period 2π / ω . To satisfy the boundary condition at the sphere surface, the particle velocity
of the acoustic fluid must be equal to the surface normal velocity of the sphere, which is everywhere U .
Therefore, we have that

 1  p ( a, t )
u ( a, t ) =
1 − j  Ue jωt
=
 ka  ρ 0 c
(5.1.2)
 1  A j (ωt − ka )
1 − j 
= e = Ue jωt
 ka  a ρ 0 c

jka (5.1.3)
A = ρ 0 cUa e jka
1 + jka

Consequently, the pressure field generated by a monopole is

a jka j ω t − k ( r − a ) 
(5.1.4)
p = ρ 0 cU e 
r 1 + jka

By substitution into (4.6.5), the particle velocity is
\[ u(r,t) = \left(1 - \frac{j}{kr}\right)\frac{jka}{1 + jka}\,U\,\frac{a}{r}\,e^{j\left[\omega t - k(r - a)\right]} \tag{5.1.5} \]

Note that (5.1.5) meets the boundary condition: at r = a, \( u(a,t) = Ue^{j\omega t} \). Note also that (5.1.4) is only valid for r ≥ a. This is similar to the general form of spherical wave pressure (5.1.1) that is only valid for r > 0.


Figure 30. Monopole in spherical coordinate system showing location vector r denoting point (grey) in field.

For sources satisfying \( ka \ll 1 \), which is equivalent to \( 2\pi(a/\lambda) \ll 1 \), the pressure is
\[ p = j\rho_{0}cU\,\frac{a}{r}\,(ka)\,e^{j(\omega t - kr)} \tag{5.1.6} \]

The monopole acoustic source that satisfies \( ka \ll 1 \) is termed the point source. The acoustic pressure of the point source is (5.1.6), while the particle velocity is \( u(r,t) = \left(1 - \dfrac{j}{kr}\right)\dfrac{p(r,t)}{\rho_{0}c} \) in the general case, as repeated from (4.6.5). Repeating (4.5.4), the time-harmonic acoustic intensity is \( \vec{I} = \tfrac{1}{2}\operatorname{Re}\left[p\,\vec{u}^{*}\right] \). Because there is only one acoustic source in the field, here the intensity is always in the radial direction, is spherically symmetric, and is the same when averaged over one period of pressure fluctuation. Thus, the vector notation can be dropped for the average intensity, expressed as \( I = \tfrac{1}{2}\rho_{0}cU^{2}(a/r)^{2}(ka)^{2} \). The acoustic intensity, pressure, and particle velocity of the monopole are presented in Figure 31. It is seen that close to the source, the particle velocity and pressure are not in phase (bottom inset), while considerably further away from the monopole these two quantities gradually become in phase (top inset). This is in agreement with the prior discussion and derivation in Sec. 4.7. The MATLAB code used to generate Figure 31 is given in Table 7.


Figure 31. Plot of particle velocity (blue), acoustic pressure (red), and acoustic intensity (black) for a monopole point source. The insets show how the pressure and particle velocity either are or are not in phase at certain distances from the source. The plots are offset by small values in the y-axis for ease of visualization.

As described in Sec. 4.7, in the limit that kr >> 1 we have the condition of the acoustic far field. Note that
kr is a non-dimensional parameter. When more than one source is considered, as described below, there
is another non-dimensional parameter of importance to take into account in the modeling of acoustic field
generation.
Table 7. MATLAB code to generate Figure 31.

clear all
theta=linspace(-90,90,31)*pi/180; % [rad]
omega=3e2*2*pi; % [rad/s]
c=343; % [m/s]
r=c/(omega/2/pi)*linspace(.1,2,81); % [m]
rho_0=1.125; % [kg/m^3] density
k=omega/c; % [1/m] wavenumber
a=.01; % [m] radial dimension of monopole
U=1e-2; % [m/s] oscillation velocity of monopole
x=zeros(length(theta),length(r));
y=x;
pressure=j*rho_0*c*U./r*k*a*a.*exp(j*(-k*(r))); % [Pa]
velocity=(1-j*1/k./r).*pressure/rho_0/c; % [m/s]


for iii=1:length(theta);
for jjj=1:length(r)
x(iii,jjj)=r(jjj)*cos(theta(iii)); % x unit vector
y(iii,jjj)=r(jjj)*sin(theta(iii)); % y unit vector
pressure_x(iii,jjj)=pressure(jjj)*cos(theta(iii));
pressure_y(iii,jjj)=pressure(jjj)*sin(theta(iii));
velocity_x(iii,jjj)=velocity(jjj)*cos(theta(iii));
velocity_y(iii,jjj)=velocity(jjj)*sin(theta(iii));
intensity_x(iii,jjj)=1/2*real(pressure(jjj)*conj(velocity(jjj)))*cos(theta(iii)); % radial intensity, x component
intensity_y(iii,jjj)=1/2*real(pressure(jjj)*conj(velocity(jjj)))*sin(theta(iii)); % radial intensity, y component
end
end
figure(1);
clf;
quiver(x,y-1e-3,intensity_x,intensity_y,'k');
hold on
quiver(x,y,real(pressure_x),real(pressure_y),'r');
quiver(x,y+1e-3*ones(size(y)),real(velocity_x),real(velocity_y),'c');
axis equal
xlabel('x axis [m]');
ylabel('y axis [m]');
title('acoustic intensity, black. acoustic pressure, red. acoustic velocity, blue');

For monopole sources oscillating with acoustic wavelengths sufficiently greater than the radial dimension a, such that \( a/\lambda \ll 1 \), the result of (5.1.6) holds, and we term these sources point sources. Rather than strictly defining these sources according to the radial dimension, the important parameter to consider is ka, which is a non-dimensional ratio of the source size and acoustic wavelength.

Note that the general expression of (5.1.1) applies in the case of (5.1.4) and (5.1.6), that the complex amplitude A is an amplitude adjustment with respect to the actual source dimensions and oscillation amplitude, and that the pressure is 90° out-of-phase with the sphere surface velocity. Thus, in general, we determine relations for the point source acoustic wave radiation of
\[ p = \frac{A}{r}\,e^{j(\omega t - kr)} \tag{5.1.7} \]
which is identical to (4.6.10) when there is no phase difference between the reference starting time and the oscillation of the pressure, thus \( A = |A| \).

5.2 Sound fields generated by combinations of point sources

Consider the two point sources arranged as shown in Figure 32(a). The unit vectors e1 and e2 respectively
denote the vectors extending from the point sources 1 and 2 to the field point where we seek to determine
the acoustic pressure p . Note that the schematic of Figure 32(a) can represent two point sources that are
not necessarily in the same location in the z axis (out-of-page) through the definition of the arbitrary unit
vectors.


Figure 32. Two point sources radiating sound pressure to a field point.

Here, we deal with the linear acoustic wave equation without boundaries in the field (the free field), such
that the total acoustic pressure at the field point is simply the superposition of acoustic waves arriving at
the point due to any number of sources at other locations. The corresponding statement regarding the
particle velocities associated with each source is true: the total is the summation of the individual
components. To determine the pressure at the field point, several factors must be defined about the sources.
We must identify if the sources create acoustic pressure at the same or different frequencies. If the sources
radiate pressure at the same frequency (termed coherent or correlated) then we must identify the relative
phase difference of the sources. Knowing these characteristics, we must then know the location of the
sources with respect to the field point.

Example: Consider the arrangement shown in Figure 32(b). Given that r1 = 2.3 [m] and r2 = 1.4 [m], find and plot the real (measurable) part of the time-harmonic sound pressure \( p = \operatorname{Re}[\mathbf{p}] \) at the field point indicated when the two point sources at 2 [kHz] are (a) perfectly in-phase and (b) perfectly out-of-phase. Assume that the SPL at the field point due only to point source 1 is 77 [dB] and the SPL at the field point due only to point source 2 is 79 [dB]. Assume the fluid is air at a temperature of 20 [°C].

Answer: We must first determine the sound pressure amplitudes of the point sources. Using (4.8.2), we have that \( P_{1} = 0.2002 \) [Pa] and \( P_{2} = 0.2521 \) [Pa]. Note that \( P_{1} = A_{1}/r_{1} \) while \( P_{2} = A_{2}/r_{2} \). Therefore, we have that
\[ p = \operatorname{Re}\{\mathbf{p}_{1} + \mathbf{p}_{2}\} = \operatorname{Re}\left\{\frac{A_{1}}{r_{1}}e^{j(\omega t - kr_{1})} + \frac{A_{2}}{r_{2}}e^{j(\omega t - kr_{2})}\right\} = \operatorname{Re}\left\{\left[\frac{A_{1}}{r_{1}}e^{-jkr_{1}} + \frac{A_{2}}{r_{2}}e^{-jkr_{2}}\right]e^{j\omega t}\right\} \]

(a) When the sources are in-phase, we can expand the component of the equation in brackets as
\[ \frac{A_{1}}{r_{1}}\left(\cos kr_{1} - j\sin kr_{1}\right) + \frac{A_{2}}{r_{2}}\left(\cos kr_{2} - j\sin kr_{2}\right) \]
Then we have
\[ p = \operatorname{Re}\left\{\left[\frac{A_{1}}{r_{1}}\left(\cos kr_{1} - j\sin kr_{1}\right) + \frac{A_{2}}{r_{2}}\left(\cos kr_{2} - j\sin kr_{2}\right)\right]\left(\cos\omega t + j\sin\omega t\right)\right\} \]


\[ p = \frac{A_{1}}{r_{1}}\left(\cos kr_{1}\cos\omega t + \sin kr_{1}\sin\omega t\right) + \frac{A_{2}}{r_{2}}\left(\cos kr_{2}\cos\omega t + \sin kr_{2}\sin\omega t\right) \]

From Appendix A10 [1], the sound speed in air is 343 [m/s]. Putting the results together and combining terms, we have that
\[ p = -0.03911\cos(2\pi 2000t) + 0.3217\sin(2\pi 2000t) \]
when the sources are in-phase. Using a vector expression, we express this via
\[ p = P\operatorname{Re}\left[e^{j(\omega t - \phi)}\right] \]
where \( P = \sqrt{0.03911^{2} + 0.3217^{2}} = 0.3241 \) [Pa] and \( \tan\phi = 0.3217/(-0.03911) \).

(b) When the sources are out-of-phase, the expression from above is modified only by applying a leading negative sign to one of the sources, such as
\[ p = \frac{A_{1}}{r_{1}}\left(\cos kr_{1}\cos\omega t + \sin kr_{1}\sin\omega t\right) - \frac{A_{2}}{r_{2}}\left(\cos kr_{2}\cos\omega t + \sin kr_{2}\sin\omega t\right) \]
This results in
\[ p = -0.3005\cos(2\pi 2000t) - 0.1094\sin(2\pi 2000t) \]

that is put into a vector form as above such that P = 0.3198 [Pa] and φ = -160°. Comparing the SPLs of the in- and out-of-phase cases, we find that \( \mathrm{SPL}_{in} = 81.2 \) [dB] and \( \mathrm{SPL}_{out} = 81.1 \) [dB]. A plot of the time series of the in- and out-of-phase point sources shows that they are nearly in quadrature. To be in quadrature means an almost 90° phase difference occurs, here based on the relative distances from the field point and the frequency under consideration. Therefore, the two sources neither constructively nor destructively interfere to a significant extent. Thus, while there are two point sources, there is not a significant increase in the SPL due to the combination at the specific field point.
Figure 33. Time series of the acoustic pressure from the example. Left: in-phase case, showing p1 (black solid), p2 (red dashed), and real(p1+p2) (blue dash-dot). Right: out-of-phase case, showing p1 (black solid), p2 (red dashed), and real(p1-p2) (blue dash-dot).
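A compact MATLAB sketch reproducing the time series of Figure 33 (the numerical values are those computed in the example above):

c=343; f=2000; k=2*pi*f/c; w=2*pi*f; % [m/s], [Hz], [1/m], [rad/s]
r1=2.3; r2=1.4; % [m] source-to-field-point distances
P1=0.2002; P2=0.2521; % [Pa] pressure amplitudes from the stated SPLs
t=linspace(0,2e-3,1000); % [s] time vector
p_in=real((P1*exp(-1j*k*r1)+P2*exp(-1j*k*r2))*exp(1j*w*t)); % in-phase sources
p_out=real((P1*exp(-1j*k*r1)-P2*exp(-1j*k*r2))*exp(1j*w*t)); % out-of-phase sources
plot(t,p_in,t,p_out); xlabel('time [s]'); ylabel('pressure [Pa]');
legend('in-phase','out-of-phase');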


In general, as indicated in Sec. 3.10, combining the common sinusoidal components of acoustic pressure is undertaken according to linear superposition, via
\[ \mathbf{p} = \sum_{i}\mathbf{p}_{i} \quad\text{and}\quad p = \operatorname{Re}[\mathbf{p}] \tag{5.2.1} \]
Likewise,
\[ \vec{\mathbf{u}} = \sum_{i}\vec{\mathbf{u}}_{i} \quad\text{and}\quad \vec{u} = \operatorname{Re}[\vec{\mathbf{u}}] \tag{5.2.2} \]
In this way, we may compute the acoustic intensity via (4.5.4), repeated here again for convenience
\[ \vec{I} = \frac{1}{2}\operatorname{Re}\left[\mathbf{p}\,\vec{\mathbf{u}}^{*}\right] \tag{5.2.3} \]

Unlike for a single point source, however, (5.2.3) is in general a multidirectional vector. Because each point
source has radial components of particle velocity with respect to its own point source center, the
determination of (5.2.2), towards finding (5.2.3), will uncover a multidirectional particle velocity in the
global coordinate system.

5.2.1 Directional wave propagation in geometric and acoustic far field acoustic wave radiation
Consider the arrangement of point sources along a y-axis as shown in Figure 34. The angle θ refers to the angle between the normal to the y-axis and the field point. We refer to the angle θ as the elevation angle and denote θ = 0 to be called broadside.

When the distance between the point sources 1 and 2 is small with respect to the distance to the field point, \( d \ll r \), the effective distances of sources 1 and 2 to the field point are respectively \( r - \frac{d}{2}\sin\theta \) and \( r + \frac{d}{2}\sin\theta \). The condition \( r \gg d \) (alternatively, \( r/d \gg 1 \)) is referred to as the geometric far field.

• In general, the geometric far field refers to a long distance between the receiving point of acoustic pressure waves and the characteristic dimension of the source.

Figure 34. Two point sources exciting spherical acoustic pressure waves at the same frequency as transmitted to the field
point.


Given that the two sources are driven with the same amplitude A [Pa·m] and at the same angular frequency ω, the acoustic pressure at the field point is
\[ \mathbf{p} = \mathbf{p}_{1} + \mathbf{p}_{2} = \frac{A}{r_{1}}e^{j(\omega t - kr_{1})} + \frac{A}{r_{2}}e^{j(\omega t - kr_{2})} \approx \frac{A}{r}\left[e^{j\left(\omega t - k\left(r - \frac{d}{2}\sin\theta\right)\right)} + e^{j\left(\omega t - k\left(r + \frac{d}{2}\sin\theta\right)\right)}\right] = \frac{A}{r}e^{j(\omega t - kr)}\left[e^{j\phi} + e^{-j\phi}\right] = 2\frac{A}{r}e^{j(\omega t - kr)}\cos\phi \tag{5.2.1.1} \]
where \( \phi = (kd/2)\sin\theta \) is half the difference in phase between the acoustic waves transmitted from the two sources. The denominators adopt the same distance r since the amplitude differences associated with the minor changes \( \Delta r_{i} \) are extremely small in the geometric far field. On the other hand, the complex exponential repeats every 2π. Thus, without further assumptions regarding kd, we must retain the terms \( r \mp \frac{d}{2}\sin\theta \) in the complex exponentials.

We can express (5.2.1.1) in a simplified form via
\[ \mathbf{p} = \frac{2A}{r}H(\theta)\,e^{j(\omega t - kr)} \tag{5.2.1.2} \]
where \( H(\theta) = D(\theta) = \left|\cos\left[(kd/2)\sin\theta\right]\right| \) is referred to as the beam pattern of the two-point source array. The term array denotes that more than one coherent source projects acoustic waves to a field point. Note that the beam pattern is a magnitude of the trigonometric function. As such, the beam pattern varies from 0 to 1 as a function of the elevation angle, excitation frequency (via the wavenumber), and the distance between the two sources. Since kd is a non-dimensional parameter that represents the ratio of the point source separation (times 2π) to the acoustic wavelength, it is logical to investigate the roles that this parameter plays in determining the directivity. For arrays distributed in planes and three-dimensional surfaces, rather than only in the line considered here, the directivity \( 0 \le D \le 1 \) will be a function of other angular coordinates of the field point with respect to the geometric center of the acoustic source array.

Figure 35 presents results of the beam pattern for the values of kd = [0.25, 2, 4, 10], plotted in [dB] via \( 10\log_{10}D \) (not \( 20\log_{10}D = 10\log_{10}D^{2} \), because the beam pattern is not a mean-squared quantity). Note that a plot of the beam pattern directly enables us to identify the change in SPL as a function of the parameters kd and θ, since SPL values are only otherwise tailored by the leading constants given for the pressure expression in (5.2.1.2). The plot at left in Figure 35 shows that for values of kd less than 1, the radiated sound pressure exhibits no dependence upon change in the elevation angle θ (blue curve). For kd slightly greater than 1 (green curve), the response begins to exhibit a directive propagation towards broadside; that is, there is a reduction in the beam pattern of about 2.5 [dB] for elevation angles +/- 90° that gradually develops for increasing values from 0° elevation angle. This directive response becomes exaggerated as kd is more substantially greater than 1. For the red curve, kd = 4, there is a major lobe that is centered on broadside θ = 0°. On both sides of the lobe, there are pressure nodes where the directivity drops off by more than 12 [dB], creating effective zones of silence for field points around +/- 50°. The term node denotes a substantial reduction in the pressure, similar to the nodes of vibration modes of distributed structures. For kd >> 1, the example of kd = 10 (cyan curve) shows that more pressure nodes appear, each stood off by side lobes that are angular regions of large sound pressure level.

Figure 35. Plots of beam patterns for a two-point source array with sources in-phase at the same frequency: kd=0.25 (blue), kd=2 (green), kd=4 (red), kd=10 (cyan). Left: directivity [dB] versus elevation angle [degrees]. Right: the same data generated using the polar command in MATLAB, artificially adding 40 [dB] to the beam pattern because negative values in polar plots result in strange plotting behaviors. The yellow circles in the right-most plot are representative of the locations of the sources.

Ordinarily, due to the elevation angle dependence (or multi-angle dependence), plots of beam patterns are shown in spherical coordinates to accommodate the logical presentation. For the case of study from Figure 34, there is azimuthal angle symmetry, such that the two-dimensional representation of beam patterns can take advantage of the polar plot command in MATLAB. The right-most plot of Figure 35 shows one such plot. This is the same data as in the left-most plot, but there is a more intuitive understanding of the trends. According to MATLAB's convention, no distinction is made between the negative elevation angle values and the wrapping of 2π phase, which explains the designation of elevation angles \( \theta \in [-90, 0] = [270, 360] \)°. The code used to generate Figure 35 is provided in Table 8. Note that the beam pattern presented in the polar plot of Figure 35 is symmetric with respect to the y-axis of the plot, although only the portion spanning [-90, 90]° is presented. Considering Figure 35, a more intuitive presentation of the beam pattern shows that at broadside θ = 0° each case of kd results in a maximum of acoustic pressure: a major lobe. On


the other hand, some cases of kd, such as kd = 4, lead to large major lobes separated by a pressure node, after which, for further increases in the elevation angle θ to +/- 90°, a substantial side lobe occurs, which leads to large SPL in the far field when the observing field point is oriented along the axis of the array. This condition is called the end-fired array.
Table 8. MATLAB code used to generate Figure 35.

kd=[0.25 2 4 10]; % [nondimensional]


theta=linspace(-90,90,401)*pi/180; % [rad]
D=zeros(length(theta),length(kd));
for iii=1:length(kd)
D(:,iii)=cos(kd(iii)/2*sin(theta));
end
figure(1);
clf;
plot(theta*180/pi,10*log10(abs(D)));
xlabel('elevation angle [degrees]');
ylabel('directivity [dB]')
xlim([-90 90]);
ylim([-12 0]);
box on
title('kd=0.25, blue. kd=2, green. kd=4, red. kd=10, cyan');
figure(2);
clf;
addthis=40;
polar(theta,10*log10(abs(D(:,4)))'+addthis);
hold on
polar(theta,10*log10(abs(D(:,1)))'+addthis);
polar(theta,10*log10(abs(D(:,2)))'+addthis);
polar(theta,10*log10(abs(D(:,3)))'+addthis);
title('kd=0.25, blue. kd=2, green. kd=4, red. kd=10, cyan');

The origin of the directive characteristics is the relative amount of constructive or destructive interference that occurs due to certain multiples of acoustic wavelengths spanning the path-length difference \( d\sin\theta \) between the two sources along the direction r. Consider Figure 36. In the geometric far field, \( r \gg d \), constructive interference occurs when \( m\lambda = d\sin\theta \) with \( m = 1, 2, \ldots \), while destructive interference occurs when \( \frac{2m - 1}{2}\lambda = d\sin\theta \). These are respectively exemplified in Figure 36(a) and (b), and are the origins of side lobes and pressure nodes, respectively. Incomplete enhancement or cancellation of pressure occurs for \( d\sin\theta \) that span these special constructive and destructive cases.
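These interference conditions can be evaluated quickly; a MATLAB sketch for kd = 10 (running the same script with kd = 4 returns a node near 51.8°, consistent with the zones of silence around +/- 50° noted for Figure 35):

kd=10; % non-dimensional separation-to-wavelength ratio
m=1:3; % candidate interference orders
s_lobe=2*pi*m/kd; % sin(theta) where m*lambda = d*sin(theta), constructive
s_node=(2*m-1)*pi/kd; % sin(theta) where ((2m-1)/2)*lambda = d*sin(theta), destructive
th_lobe=asind(s_lobe(s_lobe<=1)) % [deg] side lobe direction, approx. 38.9
th_node=asind(s_node(s_node<=1)) % [deg] pressure node directions, approx. 18.3 and 70.5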


Figure 36. Far field waveforms propagated to a field point. (a) Constructive interference example. (b) Destructive
interference example.

5.3 Source characteristics

We see from (5.2.1.2) that the pressure amplitude may be expressed using \( p = P_{ax}(r)D(\theta) \). This indicates that the amplitude of the acoustic pressure for the combined source can be considered as the product of a component proportional to the far field axial pressure \( P_{ax}(r) \) and a component related to the source beam pattern \( D(\theta) \). For the two, correlated, in-phase point source system, the axial pressure component is \( P_{ax}(r) = 2A/r \), where A may be found, for instance from (5.1.3), when the points are very small monopoles. The beam pattern for the two, in-phase point source system is \( D(\theta) = \left|\cos\left[(kd/2)\sin\theta\right]\right| \).

5.4 Dipole acoustic sources

Another important case occurs when the two sources are correlated (at the same frequency, also termed coherent) but oscillate out-of-phase. Consider again Figure 34, but now the point sources oscillate out-of-phase. By a similar derivation as above with \( r/d \gg 1 \), we have that
\[ \mathbf{p} = \mathbf{p}_{1} + \mathbf{p}_{2} \approx \frac{A}{r}e^{j(\omega t - kr)}\left[-e^{-j\phi} + e^{j\phi}\right] = 2j\frac{A}{r}e^{j(\omega t - kr)}\sin\phi \tag{5.4.1} \]

where again \( \phi = (kd/2)\sin\theta \). The source that exhibits the pressure field given by (5.4.1) is referred to as the doublet source. If we consider that \( kd \ll 1 \), which is a manifestation of the fact that the frequency under consideration has an acoustic wavelength that is very long with respect to the separation between the out-of-phase sources (\( kd = 2\pi(d/\lambda) \ll 1 \)), then the pressure given by (5.4.1) may be simplified (by the small angle assumption) to
\[ \mathbf{p} = 2j\frac{A}{r}e^{j(\omega t - kr)}(kd/2)\sin\theta \tag{5.4.2} \]

Figure 37. Plots of beam patterns for the doublet source: kd=0.25 (blue), kd=2 (green), kd=4 (red), kd=10 (cyan). The right-most plot is generated using the polar command in MATLAB and artificially adding 30 [dB] to the directivity levels. The yellow circles in the right-most plot are representative of the locations of the sources.

The acoustic source that exhibits the pressure field given by (5.4.2) is referred to as the dipole source.

There are numerous examples of dipole sources, including unbaffled loudspeakers at low frequencies, axially rotating fans, air flow over an automotive spoiler (or other such vortex-shedding aeroacoustic events in general), and the oscillation of a tuning fork prong. The shared feature of these sources is that such dipoles provide an increase in acoustic pressure on one side of the source and a decrease in pressure on the other. At low frequencies, this results in an oscillation of pressure back and forth at the source location. Thus, an insignificant propagation of energy into the far field occurs along the axis normal to the array characteristic length.

Figure 37 presents plots of beam patterns for the doublet source. For kd = 0.25 (blue curves), the result is representative of the dipole since the acoustic wavelength is sufficiently large with respect to the two-source spacing. Thus, in the far field, the beam pattern at broadside θ = 0° (normal to the array axis) is almost zero, while the overall levels of sound transmitted to the far field are much less than for greater values of the non-dimensional ratio kd. In fact, at all frequencies considered according to the parameter ratio kd, far field sound radiation to broadside is ineffective with doublet sources. This is intuitive because the pressure increase by one source is negated by the pressure decrease of its neighboring source when one considers positions along the radial axis for θ = 0°.


5.5 Reflection: method of images

When a source of spherical acoustic waves is positioned near to an acoustically hard plane (also called
reflective or rigid), sound waves are reflected without significant loss. Acoustically hard material surfaces
include unpainted cinder block, tile, solid concrete, and glass, to name a few. A few examples of common,
highly reflective acoustic environments are gymnasiums, public restrooms (lots of tile, easy to clean with
harsh chemicals), and parking lots (a highly reflective plane).

Figure 38 provides a schematic of the scenario that point source 1 is positioned a distance z = + d from an
acoustically hard surface. We are interested to determine the total acoustic pressure at the field point. The
acoustic pressure at the field point due to the direct line-of-sight transmission of the wave from point 1
(solid red curve) is
\[ \mathbf{p}_{d} = \frac{A}{r_{-}}\,e^{j(\omega t - kr_{-})} \tag{5.5.1} \]
where the radial distance is
\[ r_{-} = \left[(z - d)^{2} + y^{2} + x^{2}\right]^{1/2} \tag{5.5.2} \]
where we have included both the dimensions x and y for the sake of completeness. Yet, we assume by the notation here that the source and field points are at y = 0. The subscript d denotes that this is the direct or line-of-sight wave.

Now, because the normal component of the particle velocity due to the point source 1 must vanish at the plane defined by (x, y, 0) -- otherwise the rigid surface would move in the z axis -- we introduce a second point source (image 1) at location z = -d that is identical in amplitude, frequency, and phase to the point source 1. By so doing, it is easy to show that the normal component of the particle velocity of point source 1 vanishes on the plane (x, y, 0). Thus, the image source 1 provides a pressure to the field point of
\[ \mathbf{p}_{r} = \frac{A}{r_{+}}\,e^{j(\omega t - kr_{+})} \tag{5.5.3} \]
and is a distance
\[ r_{+} = \left[(z + d)^{2} + y^{2} + x^{2}\right]^{1/2} \tag{5.5.4} \]
from the field point. The subscript r for the pressure denotes that this is a reflected component.

Therefore, when a point source is near to an acoustically reflective plane by normal distance + d , the total
sound pressure received at a field point can be computed from the combination of the direct, line-of-sight
sound pressure received from the point source and from an image, in-phase source of equal pressure
magnitude and phase located a normal distance - d from the reflective plane. The combination of sources


exactly eliminates the normal component of particle velocity for the point source 1 at the plane defined by
( x, y,0 ) , which is necessary according to the physics of the problem.
This strategy for determining the sound fields induced by sources near reflecting planes is referred to as the
method of images.

Figure 38. Total sound pressure at a field point due to a spherical acoustic source placed near an acoustically hard surface.
An application of the method of images.

Several important observations are made as a result of applying the method of images. Continuing the derivation, the total acoustic pressure received at the field point is
\[ \mathbf{p} = \mathbf{p}_{d} + \mathbf{p}_{r} = A\left[\frac{1}{r_{-}}e^{-jkr_{-}} + \frac{1}{r_{+}}e^{-jkr_{+}}\right]e^{j\omega t} \tag{5.5.5} \]
When the field point distance r is sufficiently greater than d, which indicates a geometric far field condition, the trigonometry of (5.5.5) may be simplified to yield
\[ \Delta r \approx d\sin\theta, \qquad r_{-} \approx r - \Delta r, \qquad r_{+} \approx r + \Delta r \tag{5.5.6} \]
As a result, (5.5.5) is written
\[ \mathbf{p}(r,t) = \frac{A}{r}e^{j(\omega t - kr)}\left[\frac{e^{jk\Delta r}}{1 - \dfrac{\Delta r}{r}} + \frac{e^{-jk\Delta r}}{1 + \dfrac{\Delta r}{r}}\right] \tag{5.5.7} \]

We are interested in the geometric far field, \( r \gg \Delta r \), in which case the radii ratios in the denominators can be neglected when compared to 1. Then, applying Euler's identity to the remaining exponential terms, we arrive at


\[ p(r,\theta,t) \approx 2\frac{A}{r}e^{j(\omega t - kr)}\cos\left[kd\sin\theta\right] \tag{5.5.8} \]

We note that (5.5.8) for one source near a reflecting plane is identical to (5.2.1.1) for two point sources in the free field. [Note that in (5.5.8) d is half the separation distance between sources, whereas in (5.2.1.1) d is the total separation distance.] We also note that (5.5.8) is valid for \( \theta \in (0, \pi) \) [rad] but is invalid for \( \theta \in (-\pi, 0) \) [rad]. Thus, the method of images is an approach for determining the sound field only in the actual acoustic domain, not within the reflecting domain.

An important new observation is made considering the reflected acoustic pressure from the acoustically
rigid surface. Note that there is only one source of acoustic pressure but that the total sound pressure at the
field point may be as much as twice the amplitude of pressure provided by the incident wave itself. As a
result, the directional acoustic wave propagation characteristics shown in (5.5.8) are the same as those for
the two point source scenario (5.2.1.1). Example contours of equi-sound pressure level are shown in Figure
39 while the code used to generate these results is provided in Table 9. For values of kd around 1, the point
source positioned near the reflecting plane becomes strongly directional in terms of propagating waves
effectively along the plane surface, θ =0. For higher kd values, the delivery of sound energy can also be
directional at higher angles θ >0.

Recall from Sec. 5.1 that the far field intensity of a point source (\( ka \ll 1 \)) is
\[ I = \frac{1}{2}\rho_{0}cU^{2}\left(\frac{a}{r}\right)^{2}(ka)^{2} \tag{5.5.9} \]
2

where U is the oscillating velocity of the sphere of radius a. Here, because the point source near the reflecting plane provides an amplitude of acoustic pressure that is twice that of the point source in the free field, we may express that \( U_{e} = 2U \), which leads to the discovery that the intensity from the point source near (\( r \gg d \)) the reflecting plane is 4 times that of the same point source in the free field.


Recall that the acoustic power is the sound energy per time radiated by an acoustic source. Considering a hemisphere enclosing the source, the acoustic power is \( \Pi = \int_{S}\vec{I}\cdot\vec{n}\,dS \). For a point source in the free field, we simply have that \( \Pi = 4\pi r^{2}I \), while for the point source near the reflecting plane we have \( \Pi_{e} = 2\pi r^{2}I_{e} \). Taking the ratio of these two powers shows that
\[ \frac{\Pi_{e}}{\Pi} = \frac{2\pi r^{2}(4I)}{4\pi r^{2}I} = 2 \tag{5.5.10} \]
the acoustic power radiated by a source near a reflecting plane in the acoustic far field -- \( ka \ll 1 \), \( kd \ll 1 \) -- is twice that of the same acoustic source in the free field.

Consider the limiting case of the above such that the point source is gradually brought closer to the plane and \( d \rightarrow 0 \). In this way, the constraint \( kd \ll 1 \) is more easily met, such that when the source is on the plane, the above result is automatically obtained. In other words, one can double the radiated sound power merely by mounting an acoustic source into a hard rigid plane. This is called baffling (and the plane called a baffle). This discovery is the reason why speakers are mounted into rigid cabinets. More effective low frequency sound radiation is achieved when the speakers are baffled in cabinets (where "low frequency" is of course relative to the speaker size and acoustic wavelength).
Figure 39. Contours of equal sound pressure level for the point source positioned near to an acoustically rigid plane.

Table 9. MATLAB code used to generate Figure 39.

clear all
kd=[0.4 .8 1.6 3.2]; % [nondimensional]
theta=linspace(0,180,101)*pi/180; % [rad]
r=linspace(0.05,1,51);
omega=2e3; % [rad/s]
c=343; % [m/s]
p_ref=20e-6; % [Pa]


d=0.01; % [m]
k=kd/d; % [1/m]
A=0.05; % [Pa]
for iii=1:length(theta)
for jjj=1:length(r)
x(iii,jjj)=r(jjj)*cos(theta(iii)); % location in x axis
y(iii,jjj)=r(jjj)*sin(theta(iii)); % location in y axis
for ooo=1:length(kd)
p(iii,jjj,ooo)=2*A/r(jjj)*cos(kd(ooo)*sin(theta(iii)));
end
end
end
figure(1);
clf;
for iii=1:length(kd)
subplot(2,2,iii)
contour(x,y,20*log10(abs(squeeze(p(:,:,iii)))/p_ref),10); % abs() avoids a complex logarithm where the pressure is negative
xlabel('x [m]');
ylabel('z [m]');
axis equal
colorbar
title(['SPL [dB re 20uPa] for kd = ' num2str(kd(iii)) ' and A = ' num2str(A) '']);
end

There is an important ramification to the method of images:

• Acoustic waves need not be single frequency, since the superposition of general spherical wave solutions for the direct and image sources, according to the form \( p = \frac{1}{r}f(ct - r) \), satisfies the boundary condition where the normal component of particle velocity vanishes at the plane defined by (x, y, 0). Thus, the method of images is applicable for all waveforms: single frequency, transient, stochastic, etc.

The method of images can be extended to cases of greater numbers of reflecting planes, such as the example shown in Figure 40 where two reflecting planes are involved. In Figure 40, similarly colored angle markers have equal angle value according to the law of reflection. In total, Table 10 provides a summary of the various ways in which point sources may be configured and baffled with respect to acoustically rigid planes so as to enhance the sound power provided by the source to the acoustic far field. Figure 41 illustrates each of the scenarios featured in Table 10. Thus, based on the development for a point source positioned near a single reflecting plane and gradually drawing near to it so as to increase the sound power radiation for all frequencies (of wavelength sufficiently greater than the source dimension), it is clear that the position of sources of sound with respect to rigid surfaces is of utmost importance towards the effectiveness of radiating acoustic energy to distant locations in space.


Figure 40. Method of images applied for a point source near to two reflecting planes. The colored arrow pairs denote that
angles of the reflecting waves are symmetric.

Table 10. Summary of influence of rigid baffles on the radiation of sound with illustrations of the scenarios shown in Figure 41.

Scenario | Multiples of acoustic pressure amplitude and sound power with respect to free field radiation (equivalent [dB] additive change in SPL and LΠ) | Multiples of acoustic intensity with respect to free field radiation
(a) Free field | 1 (0) | 1
(b) Point source mounted in rigid baffle/plane | 2 (6) | 4
(c) Point source at intersection of two rigid baffles | 4 (12) | 16
(d) Point source at a corner of three rigid baffles | 8 (18) | 64


Figure 41. Acoustic sources in various configurations with respect to rigid baffles (or reflecting planes) positioned for sound
power radiation enhancement. Image sources http://www.mcmelectronics.com/ http://www.dibanisa.org/
http://www.itechnews.net/2008/08/30/jbl-control-now-corner-mount-speakers/ http://www.audioasylum.com/

5.6 Sound power evaluation and measurement


Based on the prior developments, a source of sound (e.g., engine, appliance, machine) may be examined as
representing numerous point sources over the source surface in such close proximity that the many sources
radiate waves similar to a single system-level source, Figure 43. In other words, at distances far from the
source center, the fact that sound radiation may emanate from several key surfaces or features does not so
much as matter as the fact that all of these wave-radiating elements contribute to one system-level
production of sound power.

From this concept, we may determine the total radiated sound power from any given source of sound.
Consider that the maximum characteristic dimension of the source is d , which may be the longest diagonal
dimension of the object, such as that shown in Figure 43. Then, to determine the sound power radiated from
this source, we measure the SPL at locations in the acoustic field around the source when placed on a
reflecting plane. No reflections from the surroundings may occur to interfere with the SPL measurements.
This means that measurements must be taken in an anechoic chamber or in another location (such as
outdoors) where reflections can be minimized. The hemisphere must have radius r selected such that the
SPL measurement is in the far field respecting the physical size of the source. The equation for determining
this radial distance is [13]


\[ r \gg \frac{\pi d^{2}}{2\lambda} \qquad\text{or}\qquad \frac{2r\lambda}{\pi d^{2}} \gg 1 \tag{5.6.1} \]
Equation (5.6.1) reflects the fact that spherically radiating waves require a finite distance of travel before the reduction in pressure amplitude follows the 1/r proportionality characteristic of far field sound radiation.
Recalling the other far field criteria, we must also select the hemisphere radius r to meet

\[ \text{acoustic far field: } kr = 2\pi(r/\lambda) \gg 1 \qquad\text{and}\qquad \text{geometric far field: } r/d \gg 1 \tag{5.6.2} \]

These characteristics may be summarized according to the following definitions of fields that surround any
source of sound [27]. Figure 42 exemplifies the three fields.

Between the sound source surface and about one acoustic wavelength away, the hydrodynamic near field exists. This field is characterized as a region where the acoustic fluid merely "sloshes" back and forth from one region of the source surface to another, without significant net transport of acoustic energy in the radial direction away from (normal to) the source surface. In this field, the acoustic pressure is substantially out of phase with the particle velocity. Acoustic measurements obtained in the hydrodynamic near field do not have much meaning in terms of sound power delivery, although large SPL may be evaluated.

Figure 42. Fields around a sound source.

Beyond the hydrodynamic near field is the geometric near field. Here, the acoustic pressure and particle velocity are mostly in phase, but the amplitude of acoustic pressure does not reduce by 1/2 per doubling of radial distance from the effective sound source center. Also, while interference phenomena will reveal signs of directive response in the geometric near field, these characteristics will not be fully developed. Thus, pressure nodes will not be as deep and side lobes will not be as prominent when compared to fully developed conditions in the far field. Because the acoustic pressure and particle velocity are in phase, the specific acoustic impedance in the geometric near field is approximately the plane wave impedance \( \rho_{0}c \). The geometric near field persists until the relation \( r \gg d \) is satisfied, where d is the characteristic source dimension and r is the radial distance from the effective source center to the field point.

At all distances beyond the geometric near field, we find the far field. The far field is characterized by three
distinguishing traits [27]:


1. The pressure amplitude decreases monotonically in inverse proportion to the distance from the
source center.
2. The directivity properties of the source, by angular variation in pressure amplitude, do not vary for
further increase in the distance from the effective source center. Thus, the directivity is fully
developed.
3. The specific acoustic impedance is equal to the plane wave impedance.

To be in the far field, one must meet the following three radial range criteria. These criteria are not explicitly related one-to-one to the distinguishing traits given above.

(a) \( kr = 2\pi(r/\lambda) \gg 1 \), "acoustic far field"
(b) \( r/d \gg 1 \), "geometric far field"
(c) \( \dfrac{2r\lambda}{\pi d^{2}} \gg 1 \), which may be termed a "source size factor" because it accounts for the variation in delivery of acoustic waves from one region of an effective source to another when the source has considerable characteristic dimension respecting the acoustic wavelength

It is important to note that more conservative estimates of the far field will entirely drop the constants shown in criteria (a) and (c), since the important ratios of lengths are the meaningful quantities and not the arbitrary constants that scale them. Thus, the acoustic far field condition is sometimes reported in practice as \( r/\lambda \gg 1 \), while the third criterion is given as \( r\lambda/d^{2} \gg 1 \).

Example: To evaluate the SPL at 1 [kHz], find the radius r of measurement hemisphere around a source
with characteristic dimension d of 250 [mm] such that the SPL measurements will be in the far field.
Repeat the method with a characteristic dimension d of 150 [mm].

Answer: Express the three far field criteria for sources of sound with finite size:
\[ r \gg \max\left\{\frac{\pi d^{2}}{2\lambda} = \frac{\pi(0.25)^{2}}{2(343/1000)} = 0.286, \quad \frac{1}{k} = \frac{\lambda}{2\pi} = 0.0546, \quad d = 0.25\right\}\ \mathrm{[m]} \tag{5.6.3} \]

In general, a conservative target to satisfy the inequality >> would be 10 times but this may not always be
possible. Sometimes 5 or 6 times the maximum of the above dimensions must be tolerated with the
recognition that severely directional sounds emanating from the source may distort the measurements. In
this case given in (5.6.3), the limit on the selection of the radius r is due to the source size with respect to
the wavelength.

If the source dimension d is 150 [mm], we have
\[ r \gg \max\left\{\frac{\pi d^{2}}{2\lambda} = \frac{\pi(0.15)^{2}}{2(343/1000)} = 0.103, \quad \frac{1}{k} = \frac{\lambda}{2\pi} = 0.0546, \quad d = 0.15\right\}\ \mathrm{[m]} \tag{5.6.4} \]


In this case given in (5.6.4), the limit on the selection of the radius r is due to the geometric far field
criterion.
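A short MATLAB sketch of both cases of this example, taking the governing bound as the maximum of the three criteria:

c=343; f=1000; lambda=c/f; % [m/s], [Hz], [m]
for d=[0.25 0.15] % [m] characteristic source dimensions
    r_bound=max([pi*d^2/(2*lambda), lambda/(2*pi), d]) % [m] returns 0.286, then 0.15
end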

Figure 43. An arbitrarily-shaped source of sound, with a maximum characteristic dimension of d . A hemisphere around
the source, positioned on a reflecting plane with microphone locations for measuring SPL to determine the radiated sound
power level LΠ . The radius of the hemisphere is r .

Table 11. Semi-free field measurement locations by ANSI S1.34-1980 for SPL evaluation.

microphone number | x | y | z (coordinates as multiples of radius r)
1 | -0.99 | 0 | 0.15
2 | 0.50 | -0.86 | 0.15
3 | 0.50 | 0.86 | 0.15
4 | -0.45 | 0.77 | 0.45
5 | -0.45 | -0.77 | 0.45
6 | 0.89 | 0 | 0.45
7 | 0.33 | 0.57 | 0.75
8 | -0.66 | 0 | 0.75
9 | 0.33 | -0.57 | 0.75
10 | 0 | 0 | 1

Once the appropriate radius of the measurement hemisphere is determined, the total SPL of the source may
be measured by acquiring the SPL in narrowband or (one-third) octave bands (see Sec. 6.3) over the
hemisphere according to the measurement locations provided in Table 11. The points are determined
according to several standards, including ISO 3745-2012, AS 1217.6-1985, which provide each
measurement location an equal-area coverage for the radiated sound.

Recalling (4.8.6), we now express a similar equation by
\[ L_{\Pi} = \mathrm{SPL}_{ave} + 20\log_{10}r + C \tag{5.6.5} \]


the constant C was given to be 11 [dB] since the spherical acoustic waves were presumed to radiate into
the free field in all directions. However, if the source of sound is near to reflecting planes, the constant C
is defined differently, as shown in Table 12. When the total SPL measurements are taken over a hemisphere
on a single reflecting plane, the constant is C =8 [dB]. The SPL denoted in (5.6.5) is the area-averaged,
total SPL, whether evaluated in narrowband or (one-third) octave bands, according to

\[ \mathrm{SPL}_{ave} = 10\log_{10}\left[\frac{1}{N}\sum_{i}^{N}10^{\mathrm{SPL}_{i}/10}\right] \tag{5.6.6} \]

Since the points given in Table 11 are selected so as to represent an equal area, there is no need to account
for area in the sum (5.6.6).

Based on the procedure outlined above -- identifying the correct hemisphere size, measuring the area-averaged SPL, and accounting for reflecting planes -- the test scenario may be conducted in a variety of locations. A common means to conduct the experiments is to use the outdoors or to use hemi-anechoic chambers. In both cases, one reflecting plane is present and incoming sources of sound to measurement microphones are negligible.
Table 12. Measurement considerations for determining sound power level from SPL measurements over hemisphere or
part thereof around a source of sound.

source location                                                  constant C [dB]
near plane defined by (x, y, z) = (x, y, 0)                      8
near junction of two planes defined by (x, y, z) = (x, 0, 0)     5
near junction of three planes defined by (x, y, z) = (0, 0, 0)   2

Example: Determine the sound power [W] radiated by a source at 3 [m] in the 500 [Hz] octave band using the microphone measurements of SPL provided in Table 13.
Table 13. Measurements of SPL at microphone locations around a hemisphere of a sound source in the 500 [Hz] octave
band.

microphone number SPL [dB]


1 60
2 60
3 65
4 55
5 61
6 61
7 62
8 55
9 64
10 61

Answer: We use (5.6.6)


$$\mathrm{SPL}_{\mathrm{ave}} = 10\log_{10}\left[\frac{1}{N}\sum_{i=1}^{N} 10^{\mathrm{SPL}_i/10}\right] = 61.3\ \text{[dB]}$$

Then, by (5.6.5) and observing that we have C = 8 [dB] and r = 3 [m], we find the sound power level to be

$$L_\Pi = 61.3 + 20\log_{10} 3 + 8 = 78.9\ \text{[dB re } 10^{-12}\text{ W]}$$

Then, determining the sound power, we have, from (4.8.3), Π = 0.0776 [mW].
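These steps script directly. Below is a minimal Python sketch (not part of the original notes; names are illustrative) that reproduces this example from the Table 13 data.

```python
import math

# Minimal sketch: area-averaged SPL over the hemisphere, eq. (5.6.6), then
# sound power level via eq. (5.6.5) with C = 8 dB for one reflecting plane.
spl = [60, 60, 65, 55, 61, 61, 62, 55, 64, 61]   # Table 13 values [dB]
r = 3.0                                           # hemisphere radius [m]
C = 8.0                                           # single reflecting plane [dB]

spl_ave = 10 * math.log10(sum(10**(L / 10) for L in spl) / len(spl))
L_Pi = spl_ave + 20 * math.log10(r) + C           # [dB re 1e-12 W]
power = 10**(L_Pi / 10) * 1e-12                   # [W], from (4.8.3)

print(f"SPL_ave = {spl_ave:.1f} dB")    # 61.4 dB (61.3 in the text, rounding)
print(f"L_Pi = {L_Pi:.1f} dB")          # 78.9 dB re 1e-12 W
print(f"Power = {power * 1e3:.4f} mW")  # 0.0776 mW
```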

5.7 Outdoor sound propagation

Although the amplitude of a spherical acoustic pressure wave reduces by a factor of 1/r, the outdoor environment itself introduces additional dissipative effects that increase the rate at which the SPL decays. Thus, while a doubling of distance reduces SPL by 6 [dB], accounting for the influences of environmental absorption of sound yields a greater SPL reduction. An understanding of these processes is important to avoid erring too far on the conservative side when determining the appropriate distances by which outdoor sources of sound must be separated from receivers who may not want to hear them. This is relevant, for instance, in airport development, placement of wind turbines, civil infrastructure development (e.g. roadways near residential areas), and when designing for other legislative and regulatory measures of noise exposure.

5.7.1 Attenuation by the atmosphere


The atmospheric processes of sound absorption are modeled as follows. First, we introduce the relaxation frequencies of oxygen and nitrogen, respectively, as

$$f_{r,O} = 24 + \left(4.04\times 10^{4}\right) h\, \frac{0.02 + h}{0.391 + h} \qquad (5.7.1.1)$$

$$f_{r,N} = \left(\frac{T}{T_0}\right)^{-1/2}\left(9 + 280\,h\, e^{-4.17\left[\left(T/T_0\right)^{-1/3} - 1\right]}\right) \qquad (5.7.1.2)$$

where h is the humidity (thus, 50% humidity is h = 0.5), T0 = 293.15 [K] is the reference temperature, and T is the ambient temperature [K]. The relaxation frequencies (5.7.1.1) and (5.7.1.2) have units of [Hz]. Next, the absorption rate in [dB/100 m] is computed from

 1/2
T  
−5/2
 
−11  T  e −2239.1/T e −3352/T
869 × f (1.84 × 10 )   +   0.01275
α= 2
+ 0.1068   (5.7.1.3)
  T0   T0   f r ,O + f 2 / f r ,O f r , N + f 2 / f r , N  

where f is the frequency in [Hz]. Based on the units of the absorption rate α, the absorption in [dB] induced for sound propagating a distance r in [m] is

$$A_{\mathrm{abs}} = \alpha r / 100 \qquad (5.7.1.4)$$
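Since (5.7.1.1)-(5.7.1.3) are tedious by hand, a short script helps. The sketch below (not part of the notes; the function name is illustrative) evaluates the absorption rate and reproduces the value used in the worked example of Sec. 5.7.3.

```python
import math

# Sketch of eqs. (5.7.1.1)-(5.7.1.4) for the atmospheric absorption rate.
def atmospheric_absorption(f, T_celsius, humidity):
    """Absorption rate [dB/100 m]; f in [Hz], humidity as a fraction."""
    T, T0 = T_celsius + 273.15, 293.15
    h = humidity
    f_rO = 24 + (4.04e4 * h) * (0.02 + h) / (0.391 + h)          # (5.7.1.1)
    f_rN = (T / T0)**-0.5 * (9 + 280 * h
            * math.exp(-4.17 * ((T / T0)**(-1 / 3) - 1)))        # (5.7.1.2)
    return 869 * f**2 * (                                        # (5.7.1.3)
        1.84e-11 * (T / T0)**0.5
        + (T / T0)**-2.5 * (0.01275 * math.exp(-2239.1 / T) / (f_rO + f**2 / f_rO)
                            + 0.1068 * math.exp(-3352 / T) / (f_rN + f**2 / f_rN)))

alpha = atmospheric_absorption(250, 15, 0.30)   # ~0.125 [dB/100 m]
print(f"alpha = {alpha:.4f} dB/100m; A_abs over 150 m = {alpha * 150 / 100:.2f} dB")
```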


5.7.2 Attenuation by barriers


Barriers provide an additional means to significantly attenuate sound transmission from a source to a receiver. We presume that the barrier is rigid and infinitely long (in Figure 44, the infinitely long dimension is out of the page). The sound attenuation provided by barriers is modeled similarly to optical diffraction theory, where the Fresnel number is defined to be

$$N = 2(R - d)/\lambda \qquad (5.7.2.1)$$

The acoustic wavelength is λ and the dimensions R and d are shown in Figure 44. Put in words, R is the shortest possible diffracted path of the wave from source to receiver (the shortest path that goes over the barrier), while d is the shortest possible path from source to receiver, which passes through the barrier. When N < 0, the receiver can see the source. The Fresnel number evaluates the excess distance that the wave must travel to surmount the barrier and reach the receiver, with respect to the acoustic wavelength.

The SPL attenuation associated with the barrier is then computed from

$$A_{\mathrm{bar}} \approx 10\log_{10}(20N) \qquad (5.7.2.2)$$

where the units of A_bar are [dB]. Attenuation of sound is achieved even for small, negative values of the Fresnel number because some of the sound is still shielded along the path from source to receiver. These developments were derived and originally examined in Ref. [28].

Figure 44. Barrier between a sound source and a receiver.

5.7.3 Total sound attenuation outdoors


With these relations, we may compute the SPL at a location outdoors based on knowledge of the sound
power of the source a distance r from the receiver using a modification to (4.8.6) such that


SPL = LΠ − 20 log10 r − 11 − Aabs − Abar (5.7.3.1)

Example: The sound power level of a source in the outdoors is 115 [dB re 10^-12 W] at 250 [Hz]. The ambient
temperature is 15 [°C] and the humidity is 30 %. The shortest path that the sound source waves must travel
to overcome a barrier between the source and receiver is 150 [m] while the straight-line path is 100 [m].
Determine the SPL at the receiver at 250 [Hz] (a) with the barrier and (b) without the barrier.

Answer: The shortest possible diffracted path is the same as the distance r . We first need to determine the
absorption associated with the travel of the waves through the atmosphere. Using the values provided, we
have that α =0.1245 [dB/100m]. Consequently, we have that Aabs =0.19 [dB]. In other words, there is
almost no additional atmospheric attenuation of sound due to the travel of the sound at 250 [Hz] across a
150 [m] distance. Next, we compute the attenuation provided by the barrier. The Fresnel number is N
=73.5 (the sound speed at this temperature is approximately 340 [m/s], Appendix A10 [1]). Then the SPL
reduction associated with the barrier is found to be Abar =31.6 [dB]. Obviously, the barrier is far more
effective at attenuating the sound than the atmosphere!

Concluding, using (5.7.3.1), the SPL at the receiver is (a) 28.6 [dB] when the barrier is present and (b) 60.3 [dB] when the barrier is not. Note that the SPL at 10 [m] (with no barrier) is 84 [dB]. Considering 250 [Hz], this could be typical of highway noise due to road-tire interactions. Thus, a home positioned only 100 [m] from the roadway, with the barrier an equal distance from the home as from the road, requires a tall barrier to attenuate the sound from the passing vehicles. Considering our geometry, each half of the diffracted path is 75 [m] and each half of the straight-line path is 50 [m], so that the barrier height b satisfies 75² = b² + 50² → b = 55.9 [m]!
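A minimal sketch (not from the notes; variable names are illustrative) that chains (5.7.2.1), (5.7.2.2), and (5.7.3.1) to reproduce the numbers above:

```python
import math

# Outdoor SPL with and without a barrier, per Sec. 5.7.
L_Pi = 115.0         # sound power level [dB re 1e-12 W]
R, d = 150.0, 100.0  # diffracted and straight-line paths [m]
f, c = 250.0, 340.0  # frequency [Hz] and sound speed [m/s]
alpha = 0.1245       # absorption rate [dB/100 m] from Sec. 5.7.1

lam = c / f
N = 2 * (R - d) / lam                  # Fresnel number: 73.5
A_bar = 10 * math.log10(20 * N)        # barrier attenuation: 31.7 dB
A_abs = alpha * R / 100                # atmospheric absorption: 0.19 dB

spl_a = L_Pi - 20 * math.log10(R) - 11 - A_abs - A_bar  # (a) ~28.6 dB
spl_b = L_Pi - 20 * math.log10(R) - 11 - A_abs          # (b) ~60.3 dB
print(f"(a) with barrier: {spl_a:.1f} dB; (b) without: {spl_b:.1f} dB")
```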


6 Acoustics instrumentation, measurement, and evaluation

To make use of our developing foundation of acoustics principles in practical applications, we must have a
means to measure acoustic quantities. In particular, we often need to know the acoustic pressure and particle
velocity, whether we are specifically interested in those quantities or if we are to derive other important
measures of sound from them, e.g. sound power level, intensity, etc. For instance, we often need to know
the amplitude, frequency, time, and/or the phase information of acoustic quantities for purposes of

• identifying and locating primary sources of sound or noise
• optimizing materials or treatments to address undesired sound or noise characteristics
• determining the compliance of an environment or object with regulations for noise generation
• quantifying the acoustic power emitted by a source
• characterizing the acoustic qualities of rooms and how materials or treatments may be positioned to improve these features
• and many other purposes

To these ends, an understanding of the instrumentation used for acoustic measurements and of the
standardized measurement and data-reporting strategies is needed.

6.1 Microphones

Microphones measure acoustic pressure. Microphones use the motion of circular diaphragms, induced by pressure changes fore and aft of the surface exposed to the measurement environment, to produce an electrical signal that corresponds to the relative pressure change in the environment.

Based on the extremely large range of acoustic pressures encountered in the variety of day-to-day and engineering contexts, Table 6, it is challenging to develop a device that responds linearly to many orders of magnitude of pressure change across the entire band of acoustically relevant frequencies, 20 [Hz] to 20 [kHz]. Thus, microphones for scientific, laboratory, or regulatory purposes tend to be manufactured to exacting tolerances at high cost, while the inexpensive microphones used for everyday applications (e.g. the microphone in a cellphone) tend to be of substantially inferior quality in terms of permitting linear response over a broad range of frequencies and amplitudes.


Figure 45. Collection of condenser microphones designed and manufactured by Brüel & Kjær of Denmark [29].

Most microphones used in everyday and laboratory applications are condenser microphones, Figure 45.
Condenser microphones consist of a delicate, tensioned metallic diaphragm secured above a metal
"backplate" and protected by an outer metal housing, Figure 46. There is a charge built up in between the
diaphragm and backplate due to a microphone preamplifier unit or pre-polarization of the diaphragm film.
Electret microphones are pre-polarized to reduce complexity of the system. Due to the charge and the small
gap between the diaphragm and the backplate, a capacitance is generated. Acoustic pressures cause the diaphragm to deflect, which changes the capacitance because the distance between the diaphragm and the backplate changes. With the preamplification, this capacitance change is recognized as a voltage difference. Through a calibration, this voltage is then related to the corresponding acoustic pressure change, thereby providing the measurement. Condenser microphones include vent holes, Figure 47,
which are small channels to interface the ambient environment to the interior volume of the condenser
microphone via the microphone side or back/internals. Vented microphones help to eliminate the influences
of changing atmospheric pressure on the measurement and prevent excess diaphragm motion due to the
inherent air stiffness represented by compression of air inside of the microphone interior volume.

Figure 46. Cross-section view of condenser microphone and exploded view [29].


Figure 47. Vent holes on the (a) side and (b) back of a condenser microphone [29].

6.1.1 Characteristics of microphones


Microphones are specified in terms of several performance characteristics: sensitivity, frequency response,
dynamic range, and type.

6.1.1.1 Sensitivity
The microphone is an electromechanical transducer that converts the acoustic pressure into voltage and as
such it possesses a sensitivity that defines the effectiveness of the energy conversion from acoustic to
mechanical and finally to electrical. Sensitivity may be represented as [V/Pa] or [mV/Pa] but it is often
reported in specification sheets as [dB re: 1 V/Pa]

$$\text{sensitivity [dB re 1 V/Pa]} = 20\log_{10}\frac{\text{sensitivity [mV/Pa]}}{1000\ \text{[mV/Pa]}} \qquad (6.1.1.1.1)$$


Figure 48 shows a specification sheet for a PCB 130E20 electret condenser microphone. The sensitivity is reported to be 45 [mV/Pa] and -26.9 [dB re 1 V/Pa]. Using (4.8.2), a quick computation shows that these are the same metric reported in the two different ways:

$$20\log_{10}\frac{45}{1000} \approx -26.9\ \text{[dB re 1 V/Pa]}$$

Practically speaking, in using the microphone one acquires voltage signals from the microphone preamplifier, which are then converted through the sensitivity [mV/Pa] to the corresponding acoustic pressures being measured.
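An illustrative sketch (not from the notes) of eq. (6.1.1.1.1) plus the conversion of an assumed rms voltage reading to pressure and SPL:

```python
import math

sens_mV_per_Pa = 45.0                              # PCB 130E20 nominal value
sens_dB = 20 * math.log10(sens_mV_per_Pa / 1000)   # -26.9 [dB re 1 V/Pa]

v_rms = 0.012                                      # hypothetical reading [V]
p_rms = v_rms / (sens_mV_per_Pa / 1000)            # [Pa]
spl = 20 * math.log10(p_rms / 20e-6)               # [dB re 20 uPa]
print(f"{sens_dB:.1f} dB re 1 V/Pa; p = {p_rms:.3f} Pa; SPL = {spl:.1f} dB")
```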

A microphone purchased from a professional manufacturer will often include a data card that reports,
among other information, a nominal sensitivity and a manufacturer-calibrated sensitivity.

• When using a single microphone in a test, it is often acceptable to use the specification sheet sensitivity value. These sensitivity values are nominal in the sense that the manufacturer has determined the value to be the average sensitivity of such microphone units as produced under quality control.
• When using multiple microphones in a test, the manufacturer-calibrated sensitivity for each
respective microphone unit must be used in converting voltage readings to acoustic pressure, rather
than the sensitivity reported on the specification sheets. Each microphone will have a true
sensitivity that slightly deviates from the nominal value reported on the specification sheet. When
multiple microphones are used, these small deviations result in significant differences in the
determination of SPL and other acoustic metrics. Thus the manufacturer-calibrated (or in-house-
calibrated) sensitivity values must be used. Microphone calibration in-house is straightforward to


perform and calibrators are oftentimes available for purchase from the microphone manufacturer
at around the cost of the average, high-quality microphone.

Figure 48. Specification sheet for PCB 130E20 microphone.

6.1.1.2 Frequency response


An ideal microphone measures acoustic pressure with uniform sensitivity over the entire bandwidth of frequencies across which the device is intended to be used. In practice, the frequency response is not perfectly "flat"; instead, small variations in the sensitivity occur, particularly at the lowest and highest frequencies for which the microphone is suitable for measurement. Figure 49 shows the frequency response of the PCB 130E20 whose specification sheet is given in Figure 48. The measurements are susceptible to about ±1 [dB] of deviation from 20 [Hz] to 10 [kHz]. For all intents and purposes, many acoustic sources naturally vary on the order of 1 [dB] even when operating at a "steady-state" and, as such, this minor pressure measurement deviation can be neglected. The quantified frequency response deviation is given in the specification sheet as well, Figure 48.


Figure 49. Frequency response of PCB 130E20 microphone.

6.1.1.3 Dynamic range


The difference between the highest and lowest measurable SPLs of the microphone is referred to as its
dynamic range. Based on the many orders of magnitude of SPL characteristic of common acoustic sources
with respect to one's anticipated measurement environment, the microphone for an application needs to be
appropriately selected. The dynamic range of the PCB 130E20 is shown in Figure 48 to be >122 [dB], with
an inherent noise limit of about 30 [dB]. Thus, the PCB 130E20 can measure acoustic pressures of
amplitudes 894 [μPa] to 1130 [Pa]: around six orders of magnitude! Putting this in a perspective closer to our intuition, this huge dynamic range is similar to a meter stick machined with micrometer increments! (Note that a human hair is about 100 [μm] in diameter.)

Microphones used in laboratory measurements tend to have this significant extent of dynamic range while
microphones used in everyday applications (e.g. in a cellphone) have substantially more limited range. The
common Panasonic WM-55A103 electret microphone has a dynamic range of about 60 [dB]. The limited
range of everyday-use microphones is why such microphones "distort" when spoken into too loudly. These
low-cost microphones are tuned for specific engineering purposes, such as speaking into a cellphone, which is often done at levels from around 40 to 90 [dB] within the fairly narrow frequency band of the human voice, around 100 to 1000 [Hz].

6.1.1.4 Microphone types


Microphones are intended for pressure-field, free-field, and diffuse-field (sometimes termed random
incidence) measurements. Figure 50 depicts the various microphone types according to the utilization in
various acoustic fields. For a given measurement, the microphone type is selected according to its influence
upon the measured pressure. Consider that we are trying to measure a property of the acoustic field without


influencing the field variable. But, necessarily, by putting our instrumentation into the field at the
measurement point, we cause the field to change! Therefore, the microphone types each represent design
characteristics and operational sensitivities optimized for particular measurement scenarios. The practical, hardware differences among microphone types arise via the protective cover design, internal volume design, venting, and mounting for the moving diaphragm.

Figure 50. Microphone types [30].

Thus, pressure-field microphones are optimized (i.e. diaphragm material selection, size, etc) to measure
acoustic pressures when the pressure is uniform over the area of the diaphragm, see the middle panel of
Figure 50. This situation occurs in many laboratory settings such as during impedance tube tests to examine
material acoustic wave absorption, and in the evaluation of hearing aids and telephonic equipment.
Typically pressure-field microphones are used in a way that mounts the housing flush in a wall so that the


diaphragm effectively becomes an extended surface of the wall and measures precisely the same acoustic
pressure as that measured over the whole wall surface.

Free-field microphones are used in the "free" acoustic field and have as flat a frequency response as possible for normally incident sound fields. "Flat frequency response" indicates that the sensitivity of the measurement is not substantially influenced over the range of frequencies at which the microphone is usable. Thus, acoustic waves impinging in the direction parallel to the axis of the microphone will induce equal voltages in the device across the range of measurable frequencies, see the top panel of Figure 50. If free-field microphones are to be used to record the sound pressure from broadband frequency sources with strong directionality (i.e. not in echoic or reverberant sound fields), then the free-field correction must be applied to the measurement, Figure 51. Thus, for a microphone that is oriented facing away from a directional source to be measured, the upper-most curve in Figure 51 must be applied to the measurement taken with that microphone. At frequencies around 20 [kHz], about +8 [dB] of correction is needed.

Figure 51. Representative free-field corrections for a microphone from 0° (lower) to 180° incidence (upper). The bold curve
is the computed diffuse-field equivalent correction [29].

Diffuse-field or random incidence microphones are designed to exhibit flat frequency response for sound
arriving at any angle, which is called a diffuse sound field, see the bottom panel of Figure 50. One does not
necessarily need to purchase a specialized diffuse-field microphone, however, to measure sound fields of
this nature because free-field microphones may be reported with diffuse-field corrections that account for
the sound reception sensitivities of all angles. The diffuse-field correction is shown in Figure 51 as the bold
curve around the center of the plot.


6.1.2 Selecting a microphone for the measurement


A factor of first importance in selecting a microphone is the intended measurement environment. For
example, if the measurements will consist of only duct-based measurements of acoustic pressure
propagation (with flush-mounted microphones), then pressure-field microphones are recommended. If the
microphone will always be used in reverberant environments (that lead to diffuse sound fields), then the
diffuse-field microphone should be selected. For acoustic measurements taken away from reflecting
surfaces, in anechoic chambers, or outdoors, free-field microphones will serve well. A good rule-of-thumb is that free-field microphones are suitable general-purpose microphones, since the design and calibration factors make free-field mics well-suited to obtaining high-quality measurements in many environments.

Microphones come in different sizes, typically 1, 1/2, 1/4, and 1/8 [inch] diameter diaphragms for
laboratory-grade microphones, and in any variety of other smaller sizes for everyday-use microphones. The
different sizes result in different frequency response, dynamic range, and sensitivity, Figure 52. The
measurement application needs to be carefully considered in order to find an appropriate balance among
these characteristics for what one needs to measure.

Particular trends are evident in Figure 52 that differentiate the microphone sizes:

• Microphones with large diameter diaphragms permit measurement of very low frequencies and
very low levels of sound.
• Small diameter microphone diaphragms enable ultrasonic measurements (sounds above human
hearing 20 [kHz]) but have less dynamic range.

Figure 52. Relations between the frequency response (top) and dynamic range (bottom) for various diameter sizes of
laboratory-grade condenser microphones [29]. Note that the axis of the top panel should be [Hz] at the left-most range and
[kHz] at the right-most range. Note that the bottom plot presents [dBA] such that the dynamic range is the extent of [dBA]
across which the bar spans.


6.2 Sound level meters

Although sound level meters (sometimes termed "sound meters") contain a microphone, sound level meters do not output a signal proportional to acoustic pressure. Instead, sound level meters contain a signal processor that integrates the sound pressure level over a duration of time to compute the SPL of an environment. Sound level meters are available in classes that correspond to the quality and accuracy of a measurement. Standards define what must be met for a sound level meter to be categorized among the classes; the tolerances are shown at the right-most portion of the list below. Recall that most acoustic sources often vary by 1 or 2 [dB] even when operating in a "steady-state".

• Class 0: Calibration purposes only ($$$$$), ± 0.4 [dB]
• Class 1: Precision sound level meter, intended for laboratory and field use where high fidelity measurements are required ($$$$), ± 0.7 [dB]
• Class 2: General purpose sound level meter, sometimes may record the time-varying SPL for later processing ($$$), ± 1.0 [dB]
• Class 3: Survey meters, useful for approximate measurements of overall SPL, particularly when the acoustic environment is highly varying such that a precision measurement does not make sense. These meters can be purchased on Amazon typically for ($$) and have been reported to be consistent with Class 1 meters to within 1 or 2 [dB]

Most sound level meters have LCD or LED screens that enable real-time read-out of the SPL. In particular,
this is useful for the Class 3 meters used for quick assessments of the overall SPL of an environment and
whether or not certain noise control measures have been effective.

Sound level meters report sound pressure level on the display window or store it temporarily in memory.
The time-averaged sound pressure level SPL output by a sound level meter is determined according to

$$\overline{\mathrm{SPL}}(T) = 10\log_{10}\left[\frac{1}{T}\int_0^T 10^{\mathrm{SPL}(t)/10}\, dt\right] \qquad (6.2.1)$$

where SPL(t) is the total SPL, in other words determined from the total p_rms pressure of the incoherent frequencies evaluated, typically from 20 [Hz] to 20 [kHz] or a smaller band. Of course, evaluating SPL(t) requires a finite time to elapse in order to compute an autospectrum or fast Fourier transform of the pressure time series. The time duration T is sometimes an adjustable value on the sound level meter, so as to compare, for example, between "fast" and "slow" durations of evaluation. Fast measurements are often an integration over the course of 1 [s], while slow may be 2, 3, or more [s]. Due to common fluctuations in the sound levels in most environments, the slow evaluations tend to result in more consistent average sound pressure levels for usefully characterizing noise exposure, whereas fast evaluations uncover sudden surges in SPL that could be harmful.
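In discrete form, (6.2.1) is an energy average of short-time SPL values. A minimal sketch, assuming hypothetical short-time readings:

```python
import math

# The time average in (6.2.1) is on an energy basis, not arithmetic.
spl_t = [62.0, 65.0, 61.0, 70.0, 63.0]   # short-time SPL(t) samples [dB]
spl_avg = 10 * math.log10(sum(10**(L / 10) for L in spl_t) / len(spl_t))
print(f"time-averaged SPL = {spl_avg:.1f} dB")  # 65.6 dB vs. 64.2 dB arithmetic mean
```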

In addition to the average sound pressure level, two other common sound levels to compute are SPLx where
x is 10 or 90. These metrics indicate the percentage x of time that the sound pressure level exceeded the


reported amount. Thus, an SPL10 =94 [dB] indicates that the SPL was greater than 94 [dB] for 10% of the
measurement, which is an indicator of the maximum SPL within the average. Likewise, an SPL90 =70 [dB]
indicates that the sound level was greater than 70 [dB] for 90% of the time, which is effectively a way to
characterize background levels. Note that in general due to inherent variation and fluctuation of the sound
levels, the SPL contains levels greater and less than the average that is reported. Thus, in tandem with
information provided by SPL10 and SPL90 , one is able to better understand the overall SPL of an
environment and address the potential issues associated with the distribution.
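A sketch of how SPL10 and SPL90 may be extracted from a short-time SPL record; the synthetic record here is purely illustrative.

```python
import numpy as np

# SPL_x is the level exceeded x% of the time, i.e. the
# (100 - x)th percentile of the short-time SPL record.
spl_t = np.random.default_rng(0).normal(70, 5, 1000)  # synthetic SPL(t) [dB]
spl10 = np.percentile(spl_t, 90)  # exceeded 10% of the time
spl90 = np.percentile(spl_t, 10)  # exceeded 90% of the time
print(f"SPL10 = {spl10:.1f} dB, SPL90 = {spl90:.1f} dB")
```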

6.3 Frequency bands

Sound level meters report the average SPL as a single number in [dB]. But this does not tell us anything about the frequency content; for instance, if a meter reads SPL = 88 [dB], it does not indicate whether the sound is a boomy bass at low frequencies, a piercing squeal at high frequencies, or a random combination of acoustic energy commonly referred to as noise. On the other hand, reporting frequency data at individual spectral lines of 1 [Hz] width is ineffective since very few (if any) acoustic sources emit single-frequency wave energy.

Thus, it is the convention to use frequency bands to report and assess sound levels. The bands are based
around octaves. An octave indicates a doubling of frequency. For instance, the frequency "one octave up"
from 150 [Hz] is 300 [Hz].

Octave bands: The international standard octave band center frequencies are f_c = {31.25, 62.5, 125, 250, 500, 1000, 2000, 4000, 8000, 16000} [Hz], so that each center frequency is twice the previous one. Note that these frequencies may be generated via f_c = 1000 × 2^s [Hz] where s = -5, -4, ..., 4. The lower and upper frequencies around which the octave band measurements are obtained are

$$\text{lower: } f_{l_i} = f_{c_i} \times 2^{-1/2}\ \text{[Hz]} \qquad (6.3.1)$$

$$\text{upper: } f_{u_i} = f_{c_i} \times 2^{1/2}\ \text{[Hz]} \qquad (6.3.2)$$

One-third octave bands: Providing even greater frequency specification than the octave bands, the one-third octave band center frequencies are computed from f_{c3} = 1000 × 2^s [Hz] where s = -15/3, -14/3, ..., 12/3, and the lower and upper frequencies are computed using

$$\text{lower: } f_{l3_i} = f_{c3_i} \times 2^{-1/6}\ \text{[Hz]} \qquad (6.3.3)$$

$$\text{upper: } f_{u3_i} = f_{c3_i} \times 2^{1/6}\ \text{[Hz]} \qquad (6.3.4)$$
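The band definitions (6.3.1)-(6.3.4) can be generated programmatically; a minimal sketch follows (not part of the notes).

```python
# Centers and band edges per (6.3.1)-(6.3.4), referenced to 1000 Hz.
octave = [(1000 * 2**s * 2**-0.5, 1000 * 2**s, 1000 * 2**s * 2**0.5)
          for s in range(-5, 5)]          # 31.25 Hz ... 16 kHz centers
third = [(1000 * 2**(s / 3) * 2**(-1 / 6), 1000 * 2**(s / 3), 1000 * 2**(s / 3) * 2**(1 / 6))
         for s in range(-15, 13)]         # one-third octave bands

for lo, fc, hi in octave:
    print(f"octave band: {lo:7.1f} - {fc:7.1f} - {hi:7.1f} Hz")
```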


6.3.1 Using octave and one-third octave bands in acoustic measurements


Where sound pressure levels (or other acoustic measures) are reported in octave or one-third octave band
measurements, the SPL is determined over the band from the lower to the upper frequencies corresponding
to the band center frequency. Then, the data is reported as the SPL corresponding to that band frequency
center. In other words, first either the total RMS pressure (6.3.1.1) of the incoherent frequencies in the band
is determined or the total SPL (6.3.1.2) is directly computed via
$$p_{rms,c_i}^2 = \sum_{f_{l_i}}^{f_{u_i}} p_{rms_{i,j}}^2 (f_j) \qquad (6.3.1.1)$$

$$\mathrm{SPL} = 10\log_{10} \sum_{f_{l_i}}^{f_{u_i}} 10^{\mathrm{SPL}_{i,j}(f_j)/10} \qquad (6.3.1.2)$$

The above equations are not mathematically correct in the summation limit notations, but indicate that the RMS or SPL of a range of incoherent, individual "spectral lines" from measurements must be added from the lower to upper frequencies corresponding to the band center frequency. Whether computing the total SPL from individual SPL measurements or from the total RMS pressure, the final result for the band is then reported as the SPL for that band.

• Important! Note that (6.3.1.1) and (6.3.1.2) are sums and not averages, such as was used in measuring the area-averaged SPL (5.6.6). The distinction is because (6.3.1.1) and (6.3.1.2) must evaluate the net sound power representing a point location and not a mean level occurring over a distributed space.

In general, the total SPL of a sum of N individual frequency or band components of SPL, where i is the ith component, is determined from

$$\mathrm{SPL} = 10\log_{10}\sum_{i}^{N} 10^{\mathrm{SPL}_i/10} \qquad (6.3.1.3)$$

Similarly, the total RMS squared pressure follows from

$$p_{rms}^2 = \sum_{i}^{N} p_{rms,i}^2 \qquad (6.3.1.4)$$

Example: Consider a narrowband measurement of sound pressure level shown as the solid grey curve in
Figure 53. The measurement is taken of background noise in a lab room within Scott Lab on the campus of
OSU. The one-third and octave band measures of the background noise SPL are shown as red circles and
blue squares, respectively. Use the plotted data to approximate the overall SPL that would be measured by
a sound level meter. Identify at what frequencies this sound energy is mostly consolidated.


[Plot: sound pressure level [dB re 20 μPa], 0 to 70, versus frequency [Hz], 20 to 20,000; curves: narrowband, one-third octave band, octave band.]

Figure 53. Scott Lab background noise in a lab space.

Answer: We can determine the total SPL using (6.3.1.3), which applies whether we are considering (one-third) octave bands or the entire frequency spectrum. Thus, for simplicity we use the octave band values of Figure 53. From 31.25 [Hz] to 16 [kHz], the octave band SPLs are approximately {64, 59, 42, 45, 37, 36, 36, 38, 40, 43} [dB]. Using (6.3.1.3), we find that SPL = 65.3 [dB]. Due to the logarithmic scales dealt with in this assessment, we can eliminate from our sum any SPL values that are about 15 [dB] less than the peak and still arrive at almost the same overall SPL. For instance, just considering {64, 59} [dB] as the only two measurements (at the 31.25 and 62.5 [Hz] octave bands, respectively) still leads to SPL = 65.2 [dB]. Practically speaking, this indicates that the noise is largely dominated by the low frequencies of about 100 [Hz] and less. Specifically, because this was a measurement in a room of Scott Lab, this noise is mostly due to sound from the HVAC exhaust duct. This example emphasizes the importance of evaluating both overall SPL and banded measures of SPL so as to understand the general sound levels that people are exposed to as well as the predominant frequencies involved in making up the acoustic field. Here, although it is at a higher level, the low frequency noise may be tolerable, for reasons described in the following section on weighting networks.
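A short sketch (not from the notes) reproduces both the full sum and the two-band approximation:

```python
import math

# Incoherent octave-band SPLs combine on an energy basis, eq. (6.3.1.3).
bands = [64, 59, 42, 45, 37, 36, 36, 38, 40, 43]   # octave band SPLs [dB]
total = 10 * math.log10(sum(10**(L / 10) for L in bands))
two_loudest = 10 * math.log10(10**6.4 + 10**5.9)   # 64 and 59 dB bands only
print(f"total = {total:.1f} dB; two loudest bands alone = {two_loudest:.1f} dB")
# total = 65.3 dB; two loudest bands alone = 65.2 dB
```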

6.4 Weighting networks

For a given sound pressure level, the human ear is not equally sensitive to acoustic pressure across the acoustic frequency range. Thus, if the SPL is kept the same while the frequency of an acoustic wave is changed, then a human subject receiving that sound will report that the amplitude also changes! We will review human hearing sensitivities in Sec. 8, but for now we illustrate a particularly important sensitivity of human hearing to sound level that guides a large variety of acoustical engineering methods and decision-making processes.

Figure 54 illustrates the issue that for the same loudness level line, the actual sound pressure level must
change as a function of frequency. In other words, for a perceived loudness of 70 [dB], an acoustic source
at 50 [Hz] must actually be about 15 [dB] greater in SPL than a 1 [kHz] tone in order to result in the same
subjective level of loudness.


Figure 54. Equal perceived loudness profiles [31].

The strongly nonlinear trends shown for subjective loudness in Figure 54 are due to the unique biomechanical and biochemical composition and operation of the human ear. The significant dependence of subjective loudness on amplitude is summarized in Table 14. For a given change in SPL, the change in acoustic power is provided as well as the subjective change in loudness. To meaningfully change the perceived loudness due to an acoustic source, typically a ±10 [dB] change in SPL is required. But note that this is a 1/10 or 10 times change in the radiated sound power!


So, for example in a noise control application, if a customer must appreciate the outcome of a new noise control treatment or material enough to report that the noise is half as loud as before, one must deliver performance that reduces the original radiated wave energy by 10 times. Since sound is often a consequence of vibration, reducing vibration by an order of magnitude for a nuisance engineering component is a tall order. It should be evident from this example that acoustical engineering can be a challenging practice due to the harsh "reduction" of perceived effectiveness by the human ear! On the other hand, this is why acousticians, acoustical engineers, and consultants are paid so well.
Table 14. Subjective effect of changes in sound pressure level [13].

Change in SPL [dB]   Change in sound power         Change in apparent loudness
                     Decrease       Increase
± 3                  1/2            2              Barely perceptible
± 5                  1/3            3              Clearly noticeable
± 10                 1/10           10             Half or twice as loud
± 20                 1/100          100            Much quieter or louder

To accommodate the sensitivities of the human ear, many acoustic measurements are reported with a
weighting. There are two commonly used weighting networks, A- and C-weighting. When reporting SPL
measurements, the notation is to refer to [dBA] or [dBC] if the measurement has been suitably weighted.
Table 15 provides the A- and C-weighting correction values. Thus, an SPL measurement of 78.1 [dB] with
a center frequency of 160 [Hz] would be reported as 78.1-13.4 = 64.7 [dBA] or 78.1-0.1 = 78.0 [dBC].

A-weighting is used commonly because it correlates reasonably well with the actual ear sensitivity
pertaining to noise-induced hearing damage associated with low/moderate sound levels [13]. As a result,
A-weighting is often the means by which noise regulations and standards are set. With an approach similar to the A-weighting, the C-weighting factors are determined for significantly greater overall subjective loudness, relevant to noise-induced hearing damage that could occur from a shorter duration of exposure. Thus, as observed in Table 15, for the greater overall noise levels characteristic of the C-weighting factors, the human ear loses some of its frequency sensitivity, since the weighting factors are significantly reduced. Considering Figure 54, at higher SPL the curves vary less in terms of perceived loudness, which is why there is a smaller amount of variation in the C-weighting factors shown in Table 15 than in the corresponding A-weighting factors.

If one wishes to determine a narrowband weighted SPL measurement, then one may use the Table 15 factors
and interpolate between the defined values to apply the correct weighting to each individual frequency. On
the other hand, such a practice is rare since wider band measures of SPL are far more meaningful in terms
of human perceived loudness. Although one may generate narrowband A- or C-weighted [dBA/C] measures
for a measure of acoustic pressure, such narrowband weighted [dBA/C] values should not thereafter be


summed in octave or one-third octave bands. In other words, the weighting of [dB] to [dBA/C] comes only after the final band value is obtained, whether that is narrowband, one-third octave band, or octave band.

Table 15. A- and C-weighting correction values.

Center          A-weighting      C-weighting      Center          A-weighting      C-weighting
frequency [Hz]  correction [dB]  correction [dB]  frequency [Hz]  correction [dB]  correction [dB]
10              -70.4            -14.3            500             -3.2             0
12.5            -63.4            -11.2            630             -1.9             0
16              -56.7            -8.5             800             -0.8             0
20              -50.5            -6.2             1000            0                0
25              -44.7            -4.4             1250            0.6              0
31.25           -39.4            -3.0             1600            1.0              -0.1
40              -34.6            -2.0             2000            1.2              -0.2
50              -30.2            -1.3             2500            1.3              -0.3
62.5            -26.2            -0.8             3150            1.2              -0.5
80              -22.5            -0.5             4000            1.0              -0.8
100             -19.1            -0.3             5000            0.5              -1.3
125             -16.1            -0.2             6300            -0.1             -2.0
160             -13.4            -0.1             8000            -1.1             -3.0
200             -10.9            0                10000           -2.5             -4.4
250             -8.6             0                12500           -4.3             -6.2
315             -6.6             0                16000           -6.6             -8.5
400             -4.8             0                20000           -9.3             -11.2

Example: Compute the SPL in [dBA] for the previous example regarding noise in Scott Lab, see Figure
53.

Answer: Recall that from 31.25 [Hz] to 16 [kHz], the octave band SPLs are approximately {64, 59, 42, 45, 37, 36, 36, 38, 40, 43} [dB]. Using the weighting network from Table 15, the values correspond to {24.6, 32.8, 25.9, 36.4, 33.8, 36, 37.2, 39, 38.9, 36.4} [dBA]. Therefore, using (6.3.1.3), we find that SPL = 45.9 [dBA]. Due to the substantially reduced sensitivity of the ear to acoustic pressure at the lower frequencies involved, the perceived SPL (using the [dBA] assessment) is not as significant as were it reported in [dB] without weighting.
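A sketch reproducing this computation; the ten A-weighting values are read from Table 15 at the octave band centers.

```python
import math

# Apply Table 15 A-weighting per band, then combine via eq. (6.3.1.3).
bands = [64, 59, 42, 45, 37, 36, 36, 38, 40, 43]                      # [dB]
a_wt = [-39.4, -26.2, -16.1, -8.6, -3.2, 0.0, 1.2, 1.0, -1.1, -6.6]   # [dB]
total = 10 * math.log10(sum(10**((L + w) / 10) for L, w in zip(bands, a_wt)))
print(f"total = {total:.1f} dBA")   # 45.9 dBA
```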


6.5 Locations to accurately measure sounds

Sound is measured in all manner of locations. But to measure sounds for scientific purposes, product development, regulatory assessments, or human subjective testing, to name a few reasons, it is common to use environments that lack echoes. These are called anechoic environments (literally meaning free from echo) that provide reflection-free acoustic fields in the frequency range of interest.

Anechoic chambers are used for accurate measurements of acoustic pressure. Anechoic chambers come in
all shapes and sizes, Figure 55. These chambers typically have thick walls, sometimes several inches to feet
deep and filled with a heavy mass, oftentimes sand, which inhibits sound transmission. The interior of the
chamber is lined with anechoic (or acoustic) foam which is an open-cell poroelastic material that traps
impinging pressure waves within a seemingly-never-ending network of cellular channels. The acoustic
pressure waves are therefore dissipated via turbulent and resistive effects as the waves get "lost" within the
foam. This approach is able to fully attenuate sound waves at frequencies down to approximately 100s of
[Hz] so long as the acoustic foam is sufficiently deep. Foam is usually shaped in a pyramidal wedge shape,
and adjacent wedges are rotated 90° relative to each other, to provide as much surface area as possible to
absorb the pressure waves.

Hemi-anechoic chambers include one rigid surface, typically the floor, while all other surfaces are foam-
lined to absorb sound waves. The merit to the reflecting surface is described in Sec. 5.5 and 5.6. Genuine
anechoic chambers have suspended floors and acoustic foam below the floor, see the middle and bottom
left images of Figure 55 for examples of such chambers. (It is unlikely that the F-16 aircraft is likewise suspended; more probably, a rigid floor is covered in acoustic wedges to minimize reflections of sound from the floor supporting the aircraft.) Hemi-anechoic chambers used for automotive study typically have
significant ventilation systems built in so that running vehicles do not emit fumes into the chamber. This
involves tail-pipe mount tubes that have exhaust fans running from outside of the chamber to directly pull
the sooty, hot exhaust out of the chamber. Nevertheless, chambers used for automotive applications often age at a more rapid rate than others, since exhaust soot escapes from various points along the engine and driveline.

In general, an anechoic chamber needs to be anechoic only across the frequency range that one anticipates
to measure. Conventional band-pass filtering can be applied to measurements to eliminate the frequency
range across which the chamber is not anechoic [32].

The conservative rule of thumb is that measurements in a [hemi]-anechoic chamber should be taken at least one meter away from the absorptive walls of the chamber, to minimize reflections that may not be anticipated from a genuine anechoic surface. On the other hand, this rule is not very strict, and experience will ultimately be the best guide as to "how close" one can obtain reliable pressure measurements near the absorptive surfaces of a [hemi]-anechoic chamber [27].

Anechoic chambers are oftentimes significant resources for research and development: significant in terms of the up-front cost of building and outfitting the chamber as well as securing a large space, and significant in the outcome of high-quality measurements of sound sources in the acoustic frequency band.


Figure 55. Anechoic chambers of various sizes. The chamber with the car (bottom right) and the foldable chamber (top left)
are hemi-anechoic due to the reflective floor. In the chamber with the person playing the trombone (bottom left), the yellow
foam strips cover hard, reflective mounts that hold suspensions and microphones. Image sources https://acoustics.byu.edu/,
https://lotusproactive.wordpress.com, https://commons.wikimedia.org/wiki/File:40th_Flight_Test_Squadron_F-
16_Fighting_Falcon_sits_in_the_anechoic_chamber.jpg,
https://upload.wikimedia.org/wikipedia/commons/3/3d/Measuring_a_diffuser_in_an_anechoic_chamber.png

An effective, and very often used, replica of the hemi-anechoic chamber is an outdoor setting on a hard
reflective surface, such as a parking lot, tarmac, or similar. Of course, if one makes a sound on these
surfaces, there are no reflected acoustic waves from the environment: acoustic pressure waves only travel
spherically away from a source mounted on the reflective plane of the parking lot, tarmac, etc. A drawback
to this approach is that tests need to be conducted when there is little (or no) other noise arriving to the
measurement location due to additional sources, which may be foot or vehicle traffic, aircraft overhead,
and sounds from nature.

Measurements of sound outdoors may also be influenced by the environment in other ways: temperature, Doppler shift, and wind causing turbulence over microphones that interferes with the accurate assessment of the pressure. The latter may be addressed by using windscreens, although these are not uniformly effective at eliminating measurement bias due to the turbulence effects, Figure 56. Manufacturers of professional windscreens provide specification data regarding the performance skew induced by the presence of the material.

Figure 56. Microphones with different types of windscreens. The microphone at right has a built-in windscreen with a protective cover on the outside. Image sources Rode WS7 Deluxe Wind Screen Pop Filter for NTG3, GLS Audio Mic Windscreens, http://www.dpamicrophones.com/microphones/dfacto

Example: The interior dimensions, i.e. usable space, of a hemi-anechoic chamber are 4 [m] by 5 [m] in
wall dimensions and 3 [m] in ceiling dimension. What is the largest size source of sound that permits
measurement of far field source characteristics within the chamber at 1 [kHz]? Can any far field
measurements be obtained for frequencies 100 [Hz] and below? Use a conservative estimate for the
nearness at which measurements may be taken to the walls.

Answers: The far field conditions kr = 2π(r/λ) >> 1, r/d >> 1, and 2rλ/(πd²) >> 1 must be met for the measurement in the hemi-anechoic chamber to be in the far field. For a source positioned in the center of the room on the acoustically-reflective floor of the hemi-anechoic chamber, the maximum dimension to a wall would be about 2 [m] from the source on the floor to one of the chamber walls. Thus, considering the >> to require at least a factor of 6 to be roughly satisfied, the source size cannot exceed 2/6 = 0.333 [m] for the geometric far field condition. Comparatively, the 1 [kHz] tone is very much in the acoustic far field, kr ≈ 36, while the third requirement is only 2rλ/(πd²) ≈ 3.93 if the source dimension is 0.333 [m]. Thus, using this last condition with 2rλ/(πd²) = 6, the largest source size can be only d = 0.270 [m] in characteristic dimension for measurements in the chamber at 1 [kHz] to be in the far field. No measurements at 100 [Hz] or lower frequencies can be taken in the chamber that qualify as being in the far field, since the acoustic far field condition is only kr ≈ 3.66 at 100 [Hz], and kr is smaller still at lower frequencies since the wavelength is larger under those circumstances.
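These checks are convenient to script. A minimal sketch (hypothetical helper function, not from the notes) evaluates the three far field ratios for this chamber:

```python
import math

# The three far field ratios of Sec. 5.6; each should be >> 1 (roughly 6+).
def far_field_ratios(r, d, f, c=343.0):
    lam = c / f
    return (2 * math.pi * r / lam,           # acoustic far field, kr
            r / d,                           # geometric far field
            2 * r * lam / (math.pi * d**2))  # source-size criterion

print(far_field_ratios(2.0, 0.270, 1000))  # ~(36.6, 7.4, 6.0): all satisfied
print(far_field_ratios(2.0, 0.270, 100))   # kr ~ 3.7: fails at 100 Hz
```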


7 Acoustics in and between rooms

The acoustic environments of rooms are important for numerous applications. "Room acoustics" helps to establish speech intelligibility, corresponds to the effectiveness of a work environment, promotes physical and psychological recovery and healing, and enhances the enjoyment of recreational experiences, to name only a few examples. This section will introduce means to assess the acoustic qualities of rooms, particularly pertaining to mid to high frequencies, relevant to speech, music, and many everyday sounds.
pertaining to mid to high frequencies, relevant to speech, music, and many everyday sounds. One notable
omission in this section is a study of acoustic modes which pertain to the natural frequencies of enclosures,
relevant at low frequencies (for sound in air). Those interested in extending the analysis of room acoustics
to low frequencies should consider taking the OSU course ME 8260 "Advanced Engineering Acoustics".

A common goal of "room acoustics" or "architectural acoustics" is to predict the SPL at a location in the
room given the knowledge of sound sources in the room, or external to the room, and knowledge about the
acoustical qualities of materials, surfaces, and relatively-small volumes that make up the total room surface
area, Figure 57. The "acoustic treatments" illustrated in Figure 57 can be anything from genuine acoustic
panels, to furniture, to people, and effectively any object in a room that is acoustically reflective and/or
absorptive. Almost all objects absorb and reflect sound waves to different extents, and this balance of sound
energy absorption and sound energy reflection is the foundation upon which one designs and tailors the
acoustical characteristics of rooms.

Figure 57. General context of room acoustics, seeking to identify the SPL at a location in the room given knowledge of
sources and acoustical qualities of room materials. Sound fields are made up of reverberant and direct transmissions.

In general, the acoustic field evaluated within rooms may be decomposed to the direct and reverberant
fields, Figure 57. To characterize the acoustical properties of rooms in order to develop the appropriate total
sound field for the context (e.g. concert, kitchen, classroom, office, ...), we must understand the techniques
involved to analyze these fields.

7.1 The transient sound field in a room

While some surfaces in a room may be absorptive, the individual room model is first presumed to be a rigid,
reflective enclosure. Thus sound energy input into the room does not escape, which contrasts with our focus
on free field propagation of acoustic waves in prior sections. Yet, a source of sound in a room turned on at
some initial time does not (ordinarily) result in a never-ending increase in the acoustic pressure amplitude
because a balance is struck between the sound energy input by the source and the dissipation mechanisms
provided by the absorption of the air and of the room surfaces (e.g. carpet, walls, etc.). As described in Sec.
6.5, rooms that are substantially absorptive of sound energy are termed anechoic because these spaces lack


echoes. The contrast to anechoic rooms are reverberant rooms where the acoustic fields generated by
transient sounds persist for a long time due to numerous reflections off of walls/surfaces without significant
absorption upon each reflection. Anechoic and reverberant rooms are both used for experimental
environments when characterizing the acoustic properties of systems because such spaces help to
approximate ideal limiting case environments, while rooms with an intermediate level of sound absorption
and reflection in the frequency band of interest tend to be adverse to conclusively measuring acoustical
properties. Yet, such enclosures with intermediate levels of sound absorption and reflection represent the
environments that we encounter on a day-to-day basis and are deserving of our attention.

Once a sound source is turned on in the room, a balance is struck between energy input via the source and the energy dissipated via the mechanisms described above.

Consider the schematic of Figure 58. We term Ε the average acoustic energy density in the reverberant sound field of the room, with SI units [J/m³]. In other words, the sound field is diffuse, meaning that the energy
density at all locations is the same. In practice this is true of the reverberant sound field, which will be
clearly differentiated from the direct sound field later in this section. The diffusivity of the sound field
means that the flow of energy is equally likely to occur in every direction from a volume element. Given
the volume element dV , the total energy in the element is therefore ΕdV .

Then, consider an area element of the room boundary denoted by ∆S . Sound energy will leave the volume
element and will be distributed over an area, a radial distance r from the element, resulting in an acoustic
intensity of ΕdV / 4π r 2 . The amount of this acoustic energy that strikes the surface ∆S is the projection
normal to the surface

$$\left(Ε\, dV / 4\pi r^2\right)\cos\theta\, \Delta S \qquad (7.1.1)$$

Now, we define dV as contributing to an element of a hemisphere of thickness ∆r and radius r that


surrounds the surface element ∆S , as shown in Figure 58 at right. The acoustic energy ∆E transmitted to
the area ∆S is determined by assuming it is equally likely that the diffuse field energy arrives from any
direction onto the surface ∆S . Therefore, integrating (7.1.1) over the hemisphere having differential
volume dV = ( r sin θ dφ ) dr ( rdθ ) and expressing the result in a differential time dt = dr / c , gives

$$\Delta E = \int_{\text{hemisphere}} \frac{Ε\cos\theta}{4\pi r^2}\,dV = \int_{r=0}^{c}\int_{\theta=0}^{\pi/2}\int_{\phi=0}^{2\pi} \frac{Ε\cos\theta}{4\pi r^2}\, r^2 \sin\theta \, dr\, d\theta\, d\phi \;\rightarrow\; \frac{dE}{dt} = c\int_{\theta=0}^{\pi/2}\int_{\phi=0}^{2\pi} \frac{Ε\cos\theta}{4\pi}\sin\theta\, d\theta\, d\phi \qquad (7.1.2)$$

where c is the distance that the acoustic waves travel in a unit of time, via r = ct = c(1) = c. Evaluating (7.1.2) yields

$$I = \frac{dE}{dt} = \frac{Εc}{4} \qquad (7.1.3)$$


The units of (7.1.3) are [J/m²s], which, upon substitution of SI base units, agrees with the units of intensity I according to the definition in (4.6.1.2).
Let's contrast the result (7.1.3) with the intensity of plane waves. For acoustic plane waves at normal
incidence to a surface ∆S , the intensity is equal to the acoustic energy density multiplied by the radial
distance travelled in a unit time: Εc . Consequently, we find that the intensity (sound power per area) for
the diffuse sound field (7.1.3) is one-fourth that which would be realized for acoustic plane waves striking
a surface at normal incidence.

Figure 58. Notation used for derivation of transient sound field in room.

Now, we introduce the total sound absorption of the room as A, such that the rate of energy absorbed by the room is AΕc/4, units [J/s]. The units of the absorption A are [m²], and further details on absorption are provided in Sec. 7.2. Sound energy increases within the total room volume V at a rate, by definition, given by V(dΕ/dt). We could term this the sound "build-up". Using a conservation of energy, the total rate of energy flow into the room, the sound power Π, must be equal to the build-up of energy plus the energy absorbed:

$$V\frac{dΕ}{dt} + \frac{Ac}{4}Ε = \Pi \qquad (7.1.4)$$

An omission from (7.1.4) is the dissipative effect of the air itself, which otherwise diminishes the sound power in the room. Eq. (7.1.4) is an ordinary differential equation that is easily solved, such as by a Laplace transform approach. Consider that the source of sound power Π(t) is turned on at t = 0; then the solution to (7.1.4) is

$$Ε(t) = \frac{4\Pi}{Ac}\left(1 - e^{-t/\tau_E}\right) \qquad (7.1.5)$$

where τ E = 4V / Ac is the time constant of the room. Considering the interpretation of the time constant, if
the room volume is large and the sound energy absorption is small, it will take a long time for the sound
field to become sufficiently diffuse, and "steady-state" in a statistical sense. Taking the limit such that


t → ∞, we have that the steady-state, diffuse field acoustic energy density in the room reaches Ε(∞) = 4Π/Ac. Of course, these results indicate that when the room has no absorption of sound energy, such that all surfaces are purely reflective, A = 0, the final energy in the room is infinite! This is the result of the assumption that the air does not contribute dissipative effects. We will provide a correction to this outcome in Sec. 7.2.
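A minimal sketch of eq. (7.1.5) with illustrative room values (the volume, absorption, and source power below are assumptions, not from the notes):

```python
import math

# Build-up of diffuse field energy density after the source turns on.
V, A, c, Pi = 200.0, 40.0, 343.0, 1e-3   # [m^3], [m^2], [m/s], [W]
tau_E = 4 * V / (A * c)                  # room time constant [s]
E_ss = 4 * Pi / (A * c)                  # steady-state energy density [J/m^3]
for t in (0.5 * tau_E, tau_E, 3 * tau_E):
    frac = 1 - math.exp(-t / tau_E)      # fraction of steady state reached
    print(f"t = {t * 1e3:5.1f} ms: E/E_ss = {frac:.3f}")
```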

7.2 Absorption of acoustic energy in a room

Now we consider the case where the sound source has been turned on for a sufficient period of time that a steady-state diffuse field acoustic energy Ε₀ is obtained. Suddenly, the sound source is turned off. The resulting solution to (7.1.4) is

$$Ε(t) = Ε_0 e^{-t/\tau_E} \qquad (7.2.1)$$

The reverberation time is defined to be the time that elapses after which the SPL in the room has dropped by 60 [dB] from the original, steady-state, diffuse-field SPL. From the development above, this time [s] can be computed to be

$$T_{60} = \frac{0.161 V}{A} \qquad (7.2.2)$$

where the volume V is provided in [m³] and the total absorption A is given in units [m²]. (If you wonder at the units obtained for a time in (7.2.2), the value 0.161 arises from a derivation assuming acoustic wave propagation in air, and incorporates the inverse sound speed, units [s/m] [1].) In practice, the
reverberation time is said aloud using "tee-sixty". T60 is a critical factor of rooms to characterize as pertains
to speech intelligibility, annoyance, enjoyment of the environment, and so on.

When the total surface area of the room is given by S, the average Sabine absorptivity a is denoted by

$$a = A/S \qquad (7.2.3)$$

The absorptivity is unitless and 0 ≤ a ≤ 1 for traditional, non-reactive surfaces. One prominent class of reactive materials is acoustic metamaterials, which use strategic internal composite material architectures to produce unusual external macroscopic-level properties, such as a > 1, typically induced by resonances or propagating wave bandgaps [33].

The absorptivity is alternatively and commonly referred to as the absorption coefficient.

By substituting (7.2.3) into (7.2.2), we have

$$T_{60} = \frac{0.161 V}{S a} \qquad (7.2.4)$$

The total absorptivity a of the room surfaces is computed from the individual absorptivities a_i of the individual surfaces S_i:

$$a = \frac{1}{S}\sum_i S_i a_i \qquad (7.2.5)$$

Thus, in order to predict the reverberation time of a new room layout, design, and so forth, we must find
the absorptivities of each surface. There are several ways of experimentally identifying (or predicting) these
properties.
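Eqs. (7.2.4) and (7.2.5) combine into a short prediction script. The sketch below assumes an illustrative room; only the carpet absorptivity is taken from Table 16.

```python
# Predicting T60 in one octave band with eqs. (7.2.4) and (7.2.5).
surfaces = [        # (area [m^2], Sabine absorptivity a_i at 500 Hz)
    (20.0, 0.14),   # carpet, heavy on concrete (Table 16)
    (20.0, 0.02),   # ceiling: hypothetical value
    (54.0, 0.05),   # walls: hypothetical value
]
V = 60.0            # room volume [m^3]
S = sum(area for area, _ in surfaces)
a_avg = sum(area * a for area, a in surfaces) / S   # (7.2.5)
T60 = 0.161 * V / (S * a_avg)                       # (7.2.4)
print(f"a = {a_avg:.3f}, T60 = {T60:.2f} s")        # a = 0.063, T60 = 1.64 s
```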

One way to measure the absorptivity of a surface treatment or object (typically of large surface area to volume ratio) is to place it in a reverberation chamber of surface area S with a known reverberation time T and previously determined absorptivity a₀, from (7.2.4). When the new sample surface treatment is placed in the chamber, it covers an area S_s with an unknown absorptivity a_s. Ideally, this sample of area S_s replaces the same amount of surface area S_s in the room of local absorptivity a₀; violating this ideality could re-shape the equal distribution of sound energy in the room and not permit the following relation to hold. Examples of possible violations would be inserting a sample that has considerable volume or extended components into the room, such as a wall treatment set in a deep frame or a piece of floor furniture, such as a sofa.

Then, with the new sample in place within the reverberant room, the new reverberation time Ts is measured. Based on the preceding analysis, the modified expression of the reverberation time is

Ts = 0.161V/[(S − Ss)a0 + Ss as] = 0.161V/[Sa0 + Ss(as − a0)] (7.2.6)

Using (7.2.4) with (7.2.6) gives that the absorptivity of the new sample is

as = a0 + (0.161V/Ss)(1/Ts − 1/T) = a0 + a0 (S/Ss)(T/Ts − 1) (7.2.7)
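As a brief numerical illustration of (7.2.7), the following MATLAB sketch computes a sample absorptivity from two reverberation time measurements; all of the chamber and sample values here are assumed for illustration only.

V = 200; % [m^3] chamber volume (assumed)
S = 210; % [m^2] chamber surface area (assumed)
T = 5.2; % [s] empty-chamber reverberation time (assumed)
a_0 = 0.161*V/(S*T); % [dim] empty-chamber absorptivity, from (7.2.4)
S_s = 10; % [m^2] area covered by the sample (assumed)
T_s = 3.1; % [s] reverberation time with the sample in place (assumed)
a_s = a_0 + (0.161*V/S_s)*(1/T_s - 1/T) % [dim] sample absorptivity, (7.2.7)

For these values, a_0 ≈ 0.029 and a_s ≈ 0.45: the sample absorbs far more strongly than the bare chamber surface it covers.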

Occasionally, measurements by this approach are made difficult because it is not possible to insert the new sample over exactly the surface area that it covers. This occurs when the sample has a thickness notably different from the covered surface (such as a new upholstered chair in a reverberant room). In such cases, a rotating diffuser may be used in the room during the measurements to enhance the diffusivity of sound in the room. These acoustically-reflective objects are designed to rotate quietly while measurements of reverberation time are taken. By averaging all of the reverberation time measurements acquired, such a testing scenario generates, in the mean, the desired diffuse sound field.

Thereafter, with knowledge of the absorptivity of the new sample, one may apply (7.2.4) in the development of a new room or space to predict the reverberation time. Table 16 includes representative values of Sabine absorptivities and absorptions of common building materials used in the fabrication of rooms. Historically, these data are consolidated into octave bands from


125 to 4000 [Hz], to account for the most important human hearing frequency range and acoustic contexts
(such as speech, music, machine sounds, and so on).

For completeness, we note that the absorptivity of an open area is a = 1 at all frequencies. In other words, in rooms that have always-open access to another room, such as a living room into a kitchen, the open area between them provides perfect absorption. Depending on the relative scale of the volumes and on the SPLs involved, one may need to consider the T60 of the combined room volume rather than of only one room, particularly if the opening from one room to the next is a significant proportion of the shared wall.
Table 16. Representative Sabine absorptivities and absorptions. Data given in octave band measures.

Frequency [Hz]
Description 125 250 500 1000 2000 4000
Sabine Absorptivity, a
Occupied audience, orchestra, chorus 0.40 0.55 0.80 0.95 0.90 0.85
Upholstered seats, cloth-covered, perforated bottoms 0.20 0.35 0.55 0.65 0.60 0.60
Upholstered seats, leather-covered 0.15 0.25 0.35 0.40 0.35 0.35
Carpet, heavy on undercarpet (1.35 kg/m2 felt or foam rubber) 0.08 0.25 0.55 0.70 0.70 0.75
Carpet, heavy on concrete 0.02 0.06 0.14 0.35 0.60 0.65
Acoustic plaster (approximate) 0.07 0.17 0.40 0.55 0.65 0.65
Acoustic tile on rigid surface 0.10 0.25 0.55 0.65 0.65 0.60
Acoustic tile, suspended (false ceiling) 0.40 0.50 0.60 0.75 0.70 0.60
Curtains, 0.48 kg/m2 velour, draped to half area 0.07 0.30 0.50 0.75 0.70 0.60
Wooden platform with airspace 0.40 0.30 0.20 0.17 0.15 0.10
Wood paneling, 3/8-1/2 in. over 2-4 in. airspace 0.30 0.25 0.20 0.17 0.15 0.10
Plywood, 1/4 in. on studs, fiberglass backing 0.60 0.30 0.10 0.09 0.09 0.09
Wooden walls, 2 in. 0.14 0.10 0.07 0.05 0.05 0.05
Floor, wooden 0.15 0.11 0.10 0.07 0.06 0.07
Floor, linoleum, flexible tile, on concrete 0.02 0.03 0.03 0.03 0.03 0.02
Floor, linoleum, flexible tile, on subfloor 0.02 0.04 0.05 0.05 0.10 0.05
Floor, terrazzo 0.01 0.01 0.02 0.02 0.02 0.02
Concrete (poured, unpainted) 0.01 0.01 0.02 0.02 0.02 0.02
Gypsum, 1/2 in. on studs 0.30 0.10 0.05 0.04 0.07 0.09
Plaster, smooth on lath 0.14 0.10 0.06 0.04 0.04 0.03
Plaster, smooth on lath on studs 0.30 0.15 0.10 0.05 0.04 0.05
Plaster, 1 in. damped on concrete block, brick, lath 0.14 0.10 0.07 0.05 0.05 0.05
Glass, heavy plate 0.18 0.06 0.04 0.03 0.02 0.02
Glass, windowpane 0.35 0.25 0.18 0.12 0.07 0.04
Brick, unglazed, no paint 0.03 0.03 0.03 0.04 0.05 0.07
Brick, smooth plaster finish 0.01 0.02 0.02 0.03 0.04 0.05
Concrete block, no paint 0.35 0.45 0.30 0.30 0.40 0.25
Concrete block, painted 0.10 0.05 0.06 0.07 0.09 0.08
Concrete block, smooth plaster finish 0.12 0.09 0.07 0.05 0.05 0.04
Concrete block, slotted two-well 0.10 0.90 0.50 0.45 0.45 0.40
Perforated panel over isolation blanket, 10% open area 0.20 0.90 0.90 0.90 0.85 0.85
Fiberglass, 1 in. on rigid backing 0.08 0.25 0.45 0.75 0.75 0.65
Fiberglass, 2 in. on rigid backing 0.21 0.50 0.75 0.90 0.85 0.80
Fiberglass, 2 in. on rigid backing, 1 in. airspace 0.35 0.65 0.80 0.90 0.85 0.80
Fiberglass, 4 in. on rigid backing 0.45 0.90 0.95 1.00 0.95 0.85


Sound Absorption A in m2
Single person or heavily upholstered seat (±0.10 m2) 0.40 0.70 0.85 0.95 0.90 0.80
Wooden chair, table, furnishing, for one person 0.02 0.03 0.05 0.08 0.08 0.05

Example: Determine the reverberation time in the 500 [Hz] octave band for the room illustrated in Figure 59, assuming that the room is fully enclosed by the surfaces shown. All unshaded areas, including the walls in the "foreground", are considered to be plaster on studs (aka drywall).

Figure 59. Room surfaces to consider in the computation of the reverberation time.

Answer: The room volume is V =52.5 [m3]. The surface areas and absorptivities of each portion of the
room in the 500 [Hz] octave band are found to be

Si [m2] ai [dimensionless] at 500 [Hz]

drywall 65 0.10

window pane 2 0.18

carpet on undercarpet 10 0.55

wooden floor 7.5 0.10

glass heavy plate 1.5 0.04

The total room absorption is Sa and is found from (7.2.5) to be Sa = 13.17 [m2]. Consequently, from (7.2.4), the reverberation time in the 500 [Hz] octave band is T60 = 0.642 [s].
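A minimal MATLAB sketch of this example computation, using (7.2.5) and (7.2.4), is:

V = 52.5; % [m^3] room volume
S_i = [65 2 10 7.5 1.5]; % [m^2] surface areas from the table above
a_i = [0.10 0.18 0.55 0.10 0.04]; % [dim] absorptivities at 500 [Hz]
Sa = sum(S_i.*a_i) % [m^2] total absorption, = 13.17
T60 = 0.161*V/Sa % [s] reverberation time, = 0.642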

7.2.1 Diffuse field sound pressure level


The instantaneous energy density of acoustic waves in the far field, whether from spherical spreading or
considering plane waves, is

Εi = (1/2) ρ0 [u^2 + (p/(ρ0 c))^2] (7.2.1.1)
In the far field, the relation between acoustic pressure and particle velocity is


z = p/u = ±ρ0 c (7.2.1.2)

Substituting (7.2.1.2) into (7.2.1.1) leads to the result that

Εi = p^2/(ρ0 c^2) (7.2.1.3)

Assuming harmonic waves, taking the time-average of (7.2.1.3) leads to the average energy density
Ε = (1/T) ∫0^T Εi dt = P^2/(2ρ0 c^2) (7.2.1.4)

where P is the amplitude of the acoustic pressure, which corresponds to P = √2 prms. This is the same average energy density Ε as that governed by (7.1.4) and determined by (7.1.5). Consequently, the SPL of the reverberant field is computed to be

SPL = 10log10[ρ0 c^2 Ε(t)/pref^2] (7.2.1.5)

Often, the steady-state SPL is of interest, so that by substitution of the result following (7.1.5) we have
SPL(∞) = 10log10[4ρ0 c Π/(A pref^2)] (7.2.1.6)
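As a quick numerical check of (7.2.1.6), the following MATLAB sketch evaluates the steady-state diffuse field SPL; the source sound power is assumed for illustration, and the absorption is taken from the earlier example.

rho_0 = 1.21; % [kg/m^3] air density
c = 343; % [m/s] sound speed
Pi = 1e-3; % [W] source sound power (assumed)
A = 13.17; % [m^2] total room absorption (from the earlier example)
p_ref = 20e-6; % [Pa] reference acoustic pressure
SPL_ss = 10*log10(4*rho_0*c*Pi/(A*p_ref^2)) % [dB] steady-state SPL, ~85 [dB]

Note that letting A → 0 drives the result to infinity, as discussed next.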

7.2.2 Dissipation by fluid losses


Of course, these developments neglect the dissipation of sound energy by the air/atmosphere; as with the energy density in (7.1.5), (7.2.1.6) predicts an unboundedly large SPL in a purely reverberant room for a source of even the smallest sound power.

Thus, to take into account the losses in air, we amend (7.2.1) to be

Ε(t) = Ε0 e^(−(A/(4V) + m)ct) (7.2.2.1)

where the term m is given by


m = 5.5×10^−4 (50/h)(f/1000)^1.7 (7.2.2.2)

where, as in Sec. 5.7.1, h is the relative humidity and f is the frequency in [Hz]. The units of m are
[1/m], and it represents a spatial absorption of acoustic energy due to the volume in which the diffuse sound
field exists. Consequently, a re-derivation results in the reverberation time of

T60 = 0.161V/(Sa + 4mV) (7.2.2.3)


The T60 time (7.2.2.3) corrects (7.2.4) for the loss of acoustic power due to dissipation effects of the air
within the enclosure.
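The following MATLAB sketch applies (7.2.2.2) and (7.2.2.3) to the earlier example room; the relative humidity and frequency are assumed for illustration.

V = 52.5; Sa = 13.17; % [m^3], [m^2] from the earlier example
h = 50; % [%] relative humidity (assumed)
f = 4000; % [Hz] frequency of interest (assumed)
m = 5.5e-4*(50/h)*(f/1000)^1.7; % [1/m] air absorption, (7.2.2.2)
T60 = 0.161*V/(Sa + 4*m*V) % [s] corrected reverberation time, ~0.59 [s]

Compared to the uncorrected value of 0.642 [s], the air losses meaningfully shorten the high frequency reverberation time, even in this small room.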

7.3 Contribution of acoustic energy from the direct and reverberant acoustic fields

As illustrated in Figure 57, the total sound field provided by an acoustic source in a room is the combination
of direct and reverberant components. The characteristics of the reverberant contribution to the sound field
have been described above. The characteristics of direct sound fields have been detailed in Sec. 5 in a
general context. The previously derived results in Sec. 5 will now be brought into the current context of
room acoustics.

Consider that the steady-state average energy density in the room Ε s is the combination of direct Ε d and
reverberant Ε r energy densities

Εs = Εd(r) + Εr (7.3.1)

where it is explicitly denoted that the direct component is a function of the receiver distance to the source
r. Now, the steady-state energy delivered by the noise source, of sound power Π , to the reverberant field
is (1 − a ) Π . In other words, it is the sound power remaining after the first reflection that is partially
absorbed by the surface area. Of course, this must be the same as the energy absorbed by the room

(1 − a)Π = (A/4) c Εr = (Sa/4) c Εr (7.3.2)

where use is made of (7.1.5). Consequently, we find that

Εr = 4(1 − a)Π/(cSa) = 4Π/(cR) (7.3.3)

where R = Sa/(1 − a) is called the room constant. The room constant has the limiting case that when the average absorptivity is small, a << 1, one has R ≈ Sa. Given that (7.3.3) includes only constants, it is apparent that the energy density of the reverberant field is constant and is a function of the room and source characteristics. These relations can also be expressed in terms of the diffuse field intensity and RMS pressure via Ir = Π/R = prms,r^2/(4ρ0 c).

The contribution to the total steady-state sound field from the direct transmission of acoustic waves is
determined according to the spatially-varying RMS sound pressure.
prms,d^2 = ρ0 c Π/(4π r^2) → Εd = prms,d^2/(ρ0 c^2) = Π/(4π r^2 c) (7.3.4)

where use is made of (4.6.1.4) and (7.2.1.4). Consequently, the steady-state acoustic energy density at a
position in the room r distant from the source center is


Εs = Π/(4π r^2 c) + 4Π/(cR) (7.3.5)

By virtue of (7.2.1.4), the RMS acoustic pressure at a location r from the source is

prms^2 = prms,d^2 + prms,r^2 = ρ0 c Π/(4π r^2) + 4ρ0 c Π/R (7.3.6)

As before, the total SPL is SPL = 10log10(prms^2/pref^2), which can be expressed using

SPL = 10log10[(ρ0 c Π/pref^2)(1/(4π r^2) + 4/R)] = LΠ + 10log10(1/(4π r^2) + 4/R) − 10log10(pref^2/(ρ0 c Πref)) (7.3.7)

Note that some acoustics texts and references drop the final term in (7.3.7) since it is a constant of around -0.3 [dB], which is insignificant compared to most sound power levels or to the second term in (7.3.7), which are often on the order of tens of [dB].

Considering the components of (7.3.5) and taking the ratio of the reverberant to direct sound field acoustic
energy densities, we have that

Εr/Εd = (r/rd)^2 (7.3.8)

where rd is the distance at which the direct sound field energy density has diminished to equal the energy density of the reverberant field

rd = (1/4)√(R/π) (7.3.9)

In other words, when r << rd the sound field is dominated by the direct component because the receiver is
near to the source. Thus, treating the surfaces of the rooms in an attempt to diminish the total SPL via the
reverberant component will have insignificant effect for the sound received at locations satisfying r << rd .
In contrast, r >> rd indicates that one has great opportunity to tailor the sound energy received via different
room treatments, materials, and so on because the direct field is insignificant. Note that Ref. [1] indicates rd = (1/4)√(A/π) in Sec. 12.7, which is only true when a << 1.
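A minimal MATLAB sketch that evaluates (7.3.7) and (7.3.9) follows; the source power and room properties are assumed for illustration only.

rho_0 = 1.21; c = 343; % [kg/m^3], [m/s] air properties
p_ref = 20e-6; Pi_ref = 1e-12; % [Pa], [W] reference values
Pi = 1e-3; % [W] source sound power (assumed)
S = 86; a = 0.15; % [m^2], [dim] room surface area and absorptivity (assumed)
R = S*a/(1-a); % [m^2] room constant
L_Pi = 10*log10(Pi/Pi_ref); % [dB] sound power level
r = logspace(-1,1,200); % [m] source-receiver distances
SPL = L_Pi + 10*log10(1./(4*pi*r.^2) + 4/R) - 10*log10(p_ref^2/(rho_0*c*Pi_ref)); % [dB] (7.3.7)
r_d = sqrt(R/pi)/4 % [m] direct-reverberant crossover distance, (7.3.9), ~0.55 [m]
semilogx(r,SPL); xlabel('distance r [m]'); ylabel('SPL [dB]');

The plotted SPL falls at 6 [dB] per doubling of distance for r << r_d and flattens to the constant reverberant level for r >> r_d.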

7.4 Sound transmission through partitions

For occupants in a room, one may wish to have sound absorption properties for the walls that minimize
adverse reflections from an incident sound field. This implies that sound energy is "lost" in the walls. On
the other hand, practical structures have flexible walls, which are called partitions, that partially dissipate
and partially transmit incident acoustic energy, Figure 60. For the occupant in the room where the sound field develops, these two mechanisms induced in the course of sound transmission are not distinct. Yet, for the occupant in the adjacent room that shares walls with the first room, the two mechanisms


are very distinct: the dissipated acoustic energy through the partition is not heard, but the transmitted sound
is!

Figure 60. What happens to sound waves that are incident on a partition.

Unlike fluids, solid materials support shear and compression waves, Figure 61. The prevalence of these two
phenomena in determining the sound reduction that occurs as pressure waves pass through partitions
depends on the frequency range of interest with respect to the partition material and geometric composition.

Figure 61. Distinction between longitudinal and shear waves in solids.

Consequently, due to such propagation of energy through solid and structural materials, it is of great
importance to characterize the amount of sound energy that transmits through such systems.

The sound transmission coefficient τ is defined as the ratio of transmitted acoustic power Π t to incident
acoustic power Π i through a partition (or layer, more generally):

τ=Πt / Πi (7.4.1)

By conservation of energy, the coefficient satisfies 0 ≤ τ ≤ 1, where the lower limit τ = 0 indicates the partition is genuinely rigid, and in the upper limit τ = 1 the partition does not exist (or is an open area between the rooms)!

The sound transmission loss, TL , is computed from


TL = 10log10(1/τ) = −10log10 τ (7.4.2)

and has units [dB]. A large TL indicates that there is a large reduction of sound as it passes through a partition. Note that when the source room and receiving room present the same area at the shared partition, we may express the transmission coefficient (7.4.1) as τ = It/Ii, where It is the transmitted diffuse field acoustic intensity and Ii is the incident diffuse field acoustic intensity.

Figure 62. Sound transmission determined by SPL measurements.

For instance, consider the arrangement shown in Figure 62. The shared wall area is Sw. The acoustic intensity in the source room is Ii and the intensity in the receiving room is It. We previously observed in (7.1.3) that the diffuse field acoustic intensity striking a wall is one-quarter of that delivered by normally incident plane waves. Consequently, the sound power incident on the shared wall from the source room is
Πi = Ii Sw = [prms,i^2/(4ρ0 c)] Sw (7.4.3)

where prms ,i is the RMS acoustic pressure in the source room. Considering the receiving room, in the steady-
state of a fully developed diffuse field, whatever sound power transmits through the shared wall must be
absorbed in the receiving room:

Πt = It Sr ar/(1 − ar) (7.4.4)

where S r is the total surface area of the receiving room and ar is the receiving room average absorptivity.
Likewise, assuming that the sound field in the receiving room is only diffuse (there are no sources), we
have that

Πt = [prms,r^2/(4ρ0 c)] Sr ar/(1 − ar) (7.4.5)

Using the definition of the sound transmission coefficient, we have that


τ = Πt/Πi = prms,r^2 Sr ar / [prms,i^2 Sw (1 − ar)] (7.4.6)


Employing the definition of TL we rearrange terms to find

TL = SPL1 − SPL2 + 10log10[Sw(1 − ar)/(Sr ar)] (7.4.7)

where SPL1 is the SPL in the source room and SPL2 is the SPL in the receiving room, see Figure 62. Note
that some acoustics references drop the (1 − ar ) in (7.4.7) when the receiving room is strongly reflective,
such as a reverberant chamber in an experimental measurement environment where ar << 1 . For flexible
partitions such as those described in Sec. 7.5, (7.4.7) is referred to as the field transmission loss TL f .

The quantity SPL1 − SPL2 is called the noise reduction

NR = SPL1 − SPL2 (7.4.8)

Let's assemble the concepts from Sec. 7.3 up to this point. The SPL in the source room is, from (7.3.7),

SPL1 = LΠ + 10log10(1/(4π r^2) + 4/R) − 10log10(pref^2/(ρ0 c Πref)) (7.4.9)

where R is the room constant for the source room. We then neglect the SPL contribution from the direct
field in the source room, which is viable if the source is not too near to the shared wall or if the source room
is not strongly absorptive. Then we substitute the remainder of (7.4.9) into (7.4.7) to yield

SPL2 = LΠ + 10log10(4/R) + 10log10[Sw(1 − ar)/(Sr ar)] − TL (7.4.10)

The constant 10log10(pref^2/(ρ0 c Πref)) ≈ 0.14 [dB] has been dropped in (7.4.10) due to its negligible contribution.

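To make (7.4.10) concrete, the following MATLAB sketch evaluates the receiving room SPL; every input value is assumed for illustration only.

Pi_ref = 1e-12; % [W] reference sound power
L_Pi = 10*log10(1e-3/Pi_ref); % [dB] power level of a 1 [mW] source (assumed)
R = 15; % [m^2] source room constant (assumed)
S_w = 10; % [m^2] shared wall area (assumed)
S_r = 86; a_r = 0.15; % [m^2], [dim] receiving room area and absorptivity (assumed)
TL = 30; % [dB] shared wall transmission loss (assumed)
SPL2 = L_Pi + 10*log10(4/R) + 10*log10(S_w*(1-a_r)/(S_r*a_r)) - TL % [dB], ~52 [dB]

Since TL enters (7.4.10) directly, each additional 1 [dB] of wall TL reduces the receiving room SPL by 1 [dB], which motivates the design options discussed next.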
Putting (7.4.10) into context, consider that you are in the receiving room where SPL2 occurs and you have no control over the sound power of the source in the source room. We can decrease SPL2 in the receiving room by (a) increasing the TL of the shared wall (on your side of the shared wall, you can add mass or other sound-blocking/absorptive material), (b) decreasing the shared wall area Sw (difficult? impossible?), or (c) increasing the absorptivity of either the source or receiving room (easy for your receiving room, difficult to address in the source room if it is a neighbor's apartment). Note that increasing the absorptivity of the receiving room surfaces too greatly will result in a strong direct field propagated from the shared wall. That could become as much of a nuisance as a moderate diffuse field transmission. Based on the above options, placing sound-blocking and/or absorptive materials on the shared wall is a more effective approach to reduce SPL2 in the receiving room.


When a shared wall surface is composed of several partitions, Figure 63, we consider the composite
transmission coefficient and corresponding composite TL . In a given octave band, the coefficient is
computed from

τ = ∑i Si τi / ∑i Si (7.4.11)

In a similar way as with open spaces having "perfect" absorptivity a =1, the transmission coefficient of an
opening between two rooms is τ =1.

Figure 63. Composite shared wall.

Transmission loss of a given building material is typically measured via the noise reduction obtained between a source and receiving room, where the shared wall interface is strictly the building material under consideration. Typically, the source room is reflective. Either the receiving room is also reflective, and one measures the diffuse field sound pressure level of the receiving room using the relations (7.4.7) and (7.4.8); or the receiving room is anechoic, and one measures the sound pressure level over a hemisphere that encloses the shared wall, thereafter using the relations (7.4.6) and (5.6.5) whereby the intensity and thus the radiated sound power are determined.

Example: Consider a wood door, spanning 7 [ft] by 3 [ft], that has a 1/2 [in] crack between its bottom edge and the floor. Compute the TL in the 500 [Hz] octave band for this composite partition. The TL of the door itself is 29 [dB] in the 500 [Hz] octave band.

Answer: A crack/opening has a transmission coefficient of 1 since all acoustic energy passes through. For the door, the transmission coefficient is τdoor = 10^(−29/10) = 0.001259. Using (7.4.11), we have that

τ = [(7⋅3)(0.001259) + (3/24)(1)] / [(7 + 1/24)⋅3] = 0.007169
Consequently, the TL is 21.4 [dB]. The 1/2 [in] crack reduces the TL by >7.5 [dB]. Recall that a 6 [dB] change in SPL is easily recognized in human subjective testing, Sec. 6.4. This should serve as a striking example: if you want to effectively reduce sound transmission between two areas/rooms, you must entirely seal cracks and direct passageways through which acoustic waves otherwise pass with ease.
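A MATLAB sketch of this composite computation via (7.4.11) is:

S_door = 7*3; % [ft^2] door area (consistent units suffice for the area ratio)
S_crack = (0.5/12)*3; % [ft^2] 1/2 [in] tall crack spanning the 3 [ft] width
tau_door = 10^(-29/10); % [dim] door transmission coefficient from TL = 29 [dB]
tau_crack = 1; % [dim] an opening passes all incident energy
tau_bar = (S_door*tau_door + S_crack*tau_crack)/(S_door + S_crack); % [dim] (7.4.11)
TL = 10*log10(1/tau_bar) % [dB] composite transmission loss, = 21.4 [dB]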

7.4.1 Practical material compositions for sound absorption and blocking


Most materials used in rooms are either good at absorbing sound energy or effective at inhibiting sound passage, but rarely both. For instance, in the 1000 [Hz] octave band, painted concrete cinder block is not absorptive (a = 0.07) while its TL is approximately 50 [dB], almost a genuinely rigid barrier. Conversely, a two inch-thick fiberglass batting has an absorptivity of a = 0.90 but a TL of approximately 0.5 [dB], absorbing sound well while blocking almost none of it [13]. Consequently, there is a trade-off between absorptivity and transmission loss for many bulk or structural materials.

The solution to achieve high absorptivity and transmission loss is to laminate (also, layer) absorbing and
barrier materials together, in other words to mount the materials in succession. This is standard practice in
building construction and it often has the additional purpose of enhancing thermal insulation properties by
increasing the number of thermal impedance mismatches between the environments on either side of the
laminate material, Figure 64. As it pertains to fabrication of flooring, a concrete-based building construction
with carpeted floors results in strongly local sound fields within each room that are tailored by the
absorptivity of the carpet. Also, curtains or other draperies over shared walls increase transmission loss
above the capabilities of the walls themselves, while also increasing room absorptivity. In vehicle cabins,
the interior trim lining (such as above your head) minimizes reflections in the vehicle interior while the
steel vehicle body inhibits transmission of sound from outside. Thus, a suitable combination of absorbing
and barrier materials can provide the best of both sound absorption and transmission properties.

Figure 64. Layered residential building wall. https://buildingscience.com/documents/insights/bsi-001-the-perfect-wall


7.5 Sound transmission through flexible partitions, panels

The routines described in Sec. 7.4 to characterize the TL of a partition are experimental. Such routines
require measurements of representative partition elements in order to compute the transmission coefficients
τ in relevant octave bands. Later assessments then use the tabulated data to determine acoustic fields in
source and/or receiving rooms.

In the context of the current study, structures are set to vibrate by incident acoustic pressure waves. The
term structure here denotes that distinct properties are realized by virtue of the combination of system
geometry and use of solid materials which support longitudinal and transverse waves, Figure 61. It is this
structural vibration that leads to the transmission of sound between a source and receiving room. Of course,
a perfectly rigid partition transmits no sound power. Thus, incident sound energy from the source causes
the partition to vibrate in the source room, this vibration continues in the structure whether as longitudinal
or shear waves, and then couples with the acoustic fluid in the receiving room to radiate sound from the
partition surface. In general, this sequential interaction describes the field of structural acoustics. But here,
we focus our investigation on the dependencies that determine the TL through partitions that separate
rooms.

For flexible partitions, oftentimes termed panels, the TL may be explicitly characterized for specific
frequency regimes. Based on the types of vibration involved, an important phenomenon occurs when the
wavelength of the acoustic pressure waves corresponds to the wavelength of bending vibrations. We first
introduce an important sensitivity exhibited by isotropic panels with infinite span-wise dimensions. The
bending wave speed is
cb = (Dω^2/m)^(1/4) (7.5.1)

where D is the bending stiffness [kg.m2.s-2], m is the surface density [kg/m2] (indicating that it is the
volumetric density ρ multiplied by the thickness h, m = ρh), and ω is the angular frequency [rad/s].
The bending stiffness is computed from

D = YI′/(1 − ν^2) (7.5.2)

where Y is the isotropic material Young's modulus [Pa], I ′ = h3 / 12 is the cross-sectional second moment
of area per unit width [m3], and ν is the Poisson's ratio which denotes the relation between lateral
contraction and longitudinal stretch in the deformation of solids [dimensionless].

Recalling the expression for the wavelength in terms of frequency [Hz] and wave speed [m/s], we write
λb = cb / f . The wavelength of the acoustic fluid with sound speed c is λ = c / f . Consequently, when
these wavelengths are the same, we find


fc = (c^2/(2π)) √(m/D) (7.5.3)

which is called the critical frequency, f c . The critical frequency is the frequency [Hz] at which the
wavelengths in the fluid and of the structural vibrations are the same, giving rise to efficient transfer of
energy between the fluid and structure.

For completeness, the assumption of infinite span-wise dimensions can be reasonably well satisfied if the
span-wise dimensions are >> λb . Thus, the theory described above is relevant for finite panels, so long as
one considers regions of the panel sufficiently far from edges in terms of the wavelength λb .

In general, the TL of a panel is observed to exhibit the trends illustrated in Figure 65.

Figure 65. Typical transmission loss profile of a flexible partition.

For panels, normal, oblique, and field incidence transmission loss are distinct. Field incidence transmission
loss is a function of finite panel geometry and is often reported as the measured value of transmission loss
for panels, as measured in diffuse fields, see (7.4.7) and related discussion. Normal incidence and oblique
incidence transmission loss refer to the angular directions at which incident waves impinge upon the panel:
normal incidence indicates that the wave propagation direction is normal to the plane of the panel and
oblique incidence is all other angular impingement cases. As described previously, the assumption of plane
waves can be accurately adopted if the location of consideration is in the far field of the source.

The transmission coefficient for a wave incident upon a panel surface is a function of the bending wave
impedance Z . For a lightly damped, isotropic panel of infinite span-wise extent, the bending impedance
is

Z = j2πfm[1 − (f/fc)^2 (1 + jη) sin^4 θ] (7.5.4)


where f is the frequency in [Hz], fc is the critical frequency as defined in (7.5.3), m is the surface density, j = √(−1), η is the panel loss factor [dimensionless] that characterizes the significance of the material damping, and θ is the angle from normal to the panel plane at which the wave impinges on the panel [rad].
Consequently, the transmission coefficient for the panel is
τ(θ) = |(2ρ0 c/Z)/((2ρ0 c/Z) + cos θ)|^2 = |1 + (Z cos θ)/(2ρ0 c)|^(−2) (7.5.5)

Note that ρ 0 is the volumetric density of the fluid and c is the sound speed in the fluid. The inverse
impedance is referred to as the mobility in structural dynamics. Of course, in general, we have via (7.4.2)
the TL = −10 log10 τ .

The field incidence transmission loss is

TLf = NR + 10log10[Sw(1 − ar)/(Sr ar)] (7.5.6)

where S w is the shared wall area and S r and ar are the receiving room area and absorptivity, respectively.
As a reminder, unlike normal and oblique incidence transmission loss, the field incidence TLf is a measured
(empirical) value. Recall also that the inherent assumption for (7.5.6) is that the acoustic field is diffuse so
that TL f is sometimes termed the diffuse field transmission loss.

Considering limiting cases, from (7.5.4) and (7.5.5) we compute the normal incidence TLn at θ = 0°, Figure 66, (in the special case of small structural damping)

TLn = 10log10[1 + (ωm/(2ρ0 c))^2] (7.5.7)

The normal incidence transmission loss defined by (7.5.7) is termed the mass law since the panel surface density m (equivalently, the density ρ) is the single determinant for tailoring the significance of the transmission loss TLn.

The field incidence TL f is approximated by the normal incidence TLn using

TL f ≈ TLn − 5.5 (7.5.8)

The relationship (7.5.8) was found experimentally [13].

In this course, the term "transmission loss" refers to (7.4.2), TL, which can be determined as normal or oblique incidence values using (7.5.5) and (7.4.2). Field incidence TLf is an empirical metric.
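A minimal MATLAB sketch of the mass law (7.5.7) and the field incidence estimate (7.5.8) follows, using the drywall surface density from the example later in this section.

rho_0 = 1.21; c = 343; % [kg/m^3], [m/s] air properties
m = 32.5; % [kg/m^2] panel surface density
f = [125 250 500 1000 2000 4000]; % [Hz] octave band center frequencies
TLn = 10*log10(1 + (2*pi*f*m/(2*rho_0*c)).^2) % [dB] normal incidence, (7.5.7)
TLf = TLn - 5.5 % [dB] field incidence estimate, (7.5.8)

The printed TLn values increase by about 6 [dB] from one octave band to the next, anticipating the mass law trends summarized in Sec. 7.5.1.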


7.5.1 Influence of mass on transmission loss


Considering the mass law prescribed by (7.5.7) for the normal incidence transmission loss TLn and
recalling that the panel mass is m = ρ h , we see that the TLn increases as the thickness h increases.
Correspondingly, the critical frequency f c reduces, (7.5.3).

Ordinarily, one wishes to use panels as barriers in the mass-controlled region, illustrated in Figure 65.
According to the mass law (7.5.7):

• The TL changes at a rate of 6 [dB] per octave change in frequency


• The TL uniformly changes by +(-) 6 [dB] when the mass of the panel is changed by a factor of two
(one-half)

Thus, to block sounds from transmitting through partitions, the best method is to make the panel as dense and heavy as possible. Unfortunately, such an approach tends to be costly or to have adverse side effects from the perspectives of building finance and civil engineering.

7.5.2 Coincidence
At frequencies greater than the critical frequency f c , there is an angle of oblique incidence θ at which the
impinging wavefront wavelength λ corresponds to the projection of the bending wavelength λb via
λb sin θ = λ , see Figure 66. In such cases, the panel is excited to vibrate and transmit bending waves. This
is the coincidence phenomenon. As a result, we find the corresponding coincidence frequency f b using

fb = (c^2/(2π sin^2 θ)) √(m/D) (7.5.2.1)

We see that the critical frequency fc (7.5.3) is the lowest possible coincidence frequency fb. Thus, the critical frequency corresponds to the longest wavelength at which the coincidence phenomenon occurs, and it occurs at grazing incidence θ = 90°. If coincidence is to occur at higher frequencies, the sound must impinge upon the panel at oblique angles determined according to (7.5.2.1).
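Since (7.5.2.1) may be expressed as fb = fc/sin^2 θ, a short MATLAB sketch using the fc = 725 [Hz] drywall panel of the example below is:

f_c = 725; % [Hz] critical frequency (drywall panel of the example below)
theta = [35 55 75 90]*pi/180; % [rad] angles of incidence
f_b = f_c./sin(theta).^2 % [Hz] coincidence frequencies, = [2204 1080 777 725]

Grazing incidence (θ = 90°) returns fb = fc, the lowest coincidence frequency, while shallower angles push coincidence to higher frequencies.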


Figure 66. Geometric relations governing angle of incidence to realize coincidence.

At frequencies greater than the coincidence frequency, the TL changes at a rate of 9 [dB] per octave change
in frequency. This region of the TL for panels is referred to as the damping-controlled region. Although
the damping-controlled region exhibits greater transmission loss increase in [dB] per octave than the mass-
controlled region, Figure 65, it is generally found that the frequencies within the damping-controlled
bandwidth are significantly above the important bandwidth for applications, which tends to be from about
125 to 4000 [Hz] since it is associated with human communication and day-to-day sounds, e.g. music,
beacons, alerts, etc.

The lower frequency at which the mass law (7.5.7) holds is the fundamental frequency of the panel which
is computed from

fm = (π/2) √(D/m) (a^−2 + b^−2) (7.5.2.2)

where a and b are the panel span-wise dimensions [m]. The fundamental frequency refers to the panel
vibrating in its lowest order mode, which is dependent on the boundary conditions in addition to the panel
composition itself [1].

In one example, Figure 65 illustrates the full trend of TL for conventional panels showing the influence
upon TL at frequencies near, below, and above the fundamental and coincidence frequencies. The example
below shows a specific case study for a common building construction panel fabrication.

As stated before, these dimensions should be >> λb for the preceding theory to be applicable. For many
architectural acoustics applications, this condition is met.


Note: While the above derivations and references make distinctions among field, normal, and oblique
transmission loss measures according to different mathematical notations TLi , one may discover in practice
a loose use of the various mathematical notations denoting each type as TL while referring to it by name.
For instance, one may find references that discuss the "oblique incidence transmission loss TL ", or "normal
incidence transmission loss TL ", and so on. In general, tabulated values are reported as field incidence TL f

, such as in Table 24, since field incidence TL results are readily measured. Nevertheless, there are straightforward conversions among each of the metrics as described above to alleviate the possibility for confusion.

Example: Determine and plot the TL for a drywall panel with m =32.5 [kg/m2], ρ =650 [kg/m3], Y = 2
[GPa], with span-wise dimensions a =3 [m] and b =4 [m]. The loss factor of the panel is η =0.001, the
Poisson's ratio is ν =0.2, and the sound source induces far field wavefronts at an angle of incidence θ =[35
55 75]° to the panel. The plot should consider frequencies from the lower frequency limit of the applicability
of the transmission loss prediction to a frequency that is ten times the coincidence frequency. Is the
minimum TL associated with the coincidence phenomenon increased or decreased by increase in the angle
of oblique incidence θ ?

Answer: The lower limit at which the predictions are reasonably viable is fm ~7 [Hz]. The critical frequency is fc = 725 [Hz], while the coincidence frequency at θ = 55° is 1.079 [kHz]. A plot of the TL is shown below. Based on (7.5.5), when the angle of incidence θ increases, the minimum TL decreases. Indeed, the overall TL levels decrease as θ increases.
(Plot: transmission loss TL [dB], 0 to 100, versus frequency [Hz], 10^1 to 10^3 on a log scale, for theta = 35, 55, and 75 degrees.)

Y=2e9; % [Pa] Young's modulus
nu=0.2; % [dim] Poisson's ratio
m=32.5; % [kg/m^2] area (surface) density
rho=650; % [kg/m^3] volumetric density
h=m/rho; % [m] thickness
Ip=h^3/12; % [m^3] cross-sectional second moment of area per unit width
D=Y*Ip/(1-nu^2); % [kg.m^2.s^-2] bending stiffness, (7.5.2)
rho_0=1.21; % [kg/m^3] air density
c=343; % [m/s] sound speed in air
f_c=c^2/2/pi*sqrt(m/D); % [Hz] critical frequency, (7.5.3)
a=3; % [m] dimension "height"
b=4; % [m] dimension "width"
f_m=pi/2*sqrt(D/m)*(a^-2+b^-2); % [Hz] fundamental frequency, (7.5.2.2)
eta=1e-3; % [dim] loss factor
theta=[35 55 75]*pi/180; % [rad] angles of incidence
f=logspace(log10(f_m),log10(10*f_c),5001); % [Hz] frequencies of the acoustic wave
for iii=1:length(theta)
    Z(:,iii)=1i*2*pi*f*m.*(1-(f/f_c).^2.*(1+1i*eta)*sin(theta(iii))^4); % [kg/m^2/s] bending wave impedance, (7.5.4)
    tau(:,iii)=abs(2*rho_0*c*Z(:,iii).^(-1)./(2*rho_0*c*Z(:,iii).^(-1)+cos(theta(iii)))).^2; % [dim] transmission coefficient, (7.5.5)
    TL(:,iii)=10*log10(1./tau(:,iii)); % [dB] transmission loss, (7.4.2)
end
figure(1);
clf;
semilogx(f,TL);
xlabel('frequency [Hz]');
ylabel('transmission loss, TL [dB]');
xlim([min(f) max(f)]);
ylim([0 100]);
legend('theta=35 degrees','theta=55 degrees','theta=75 degrees','location','best');

7.6 Sound transmission class, STC

Sound transmission class, STC, is a single-number metric used to compare the experimentally measured
TL performance of a partition with respect to previously-established TL values that have close
correspondence to subjective evaluation of noise isolation from one room to another. The STC is
determined using procedures outlined in the standard ASTM E413. The STC is applicable to sound
transmission through partitions used between occupied rooms.

The STC metric is not applicable to rooms that are adjacent to the outdoors, adjacent to rooms with
significant noise from machines and heavy equipment, and/or adjacent to rooms with considerable
impulsive noise sources (the ASTM indicates bowling alleys are one such example). The STC metric
assumes that sound transmission through the shared wall is the most important path of sound transmission
from a source to receiver room. In other words, no "weaker links" may be found in the sound transmission
path that would negate the close attention to the TL of the shared wall. If such weaker links occur, the STC
metric will not be representative of actual transmission loss results.

To compute STC for a partition, the following steps are undertaken (a sketch of the resulting search appears after this list):

1) The TL for the partition is measured and post-processed into one-third octave band values of [dB] from 125 to 4000 [Hz]
2) Deviations are computed as the difference between a candidate ASTM STC curve (ASTM E413) and the measured TL. In other words, Deviations = STC curve - TL [dB]. Only the positive deviation values determined from this difference are considered.
3) The highest valued STC rating that meets the compliance metrics (a) and (b) is identified:
a) There can be no individual deviation value that exceeds 8 [dB]
b) The sum of all deviations for a given rating may not exceed 32 [dB]
4) The STC rating for the partition is then identified by selecting the TL value of the STC curve that occurs at 500 [Hz], Figure 67
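The following MATLAB sketch illustrates the search. The measured TL values and the reference contour shape here are assumptions for illustration only; the governing contour is defined in ASTM E413.

TL = [18 21 24 26 28 30 31 33 35 36 37 38 38 39 40 41]; % [dB] measured one-third octave TL, 125-4000 [Hz] (assumed)
shape = [-16 -13 -10 -7 -4 -1 0 1 2 3 4 4 4 4 4 4]; % [dB] contour shape relative to the 500 [Hz] value (assumed)
STC = 0;
for rating = 1:100
    curve = rating + shape; % candidate STC curve, equal to "rating" at 500 [Hz]
    dev = max(curve - TL, 0); % only positive deviations are counted
    if max(dev) <= 8 && sum(dev) <= 32 % compliance metrics (a) and (b)
        STC = rating; % keep the highest compliant rating
    end
end
STC % report the partition rating as STC-<value>

For these assumed TL data, the search returns STC = 35.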


Figure 67. Determining STC by the 500 [Hz] TL measure of the nearest-match STC curve and the experimental TL
measurements. Here, the nearest match between the data of TL (dots) and the STC curve is shown. Then we look at the 500
[Hz] value of the STC curve. Here, that value is 24 [dB]. As a result, the STC rating for this partition is STC-24. Image
source ASTM E413-16.

The STC values correspond to the subjective measures of sound isolation from one room to another in the
ways described in Table 17.
Table 17. Characteristics of sound isolation performance given for STC

STC Quality of sound isolation by partition

25 Normal speech understood easily and distinctly through wall

30 Loud speech understood well, normal speech heard but not intelligible

35 Loud speech heard but not intelligible

40 Onset of privacy

42 Loud speech barely audible

45 Loud speech not audible

50 Very loud sounds can be faintly heard


7.6.1 Methods to enhance STC

7.6.1.1 Mass
In agreement with the mass law (7.5.7), the heavier the partition, the greater its TL will be in the frequency bandwidth of primary relevance to the STC rating. In the pursuit of increasing the sound insulation between rooms, there is simply no substitute for enhancing TL via heavy partitions. But you will pay for it in up-front materials and labor expenses. On the other hand, the advantages come about in long-term satisfaction with the acoustical seal provided between rooms.

7.6.1.2 Cavity depth


Deeper recesses between rooms enhance the sound insulation capabilities of a partition via losses in air,
although this effect becomes more pronounced when filling the larger cavity with a greater amount of sound
absorbing materials (see below).

7.6.1.3 Structural redundancy and asymmetry


Most buildings are "stud-framed" constructions, whether the studs are metal or wood, over which "frames" are placed; these are frequently the layers of drywall and/or board that then interface with the rooms (often after a layer of paint or wallpaper). Generally, the studs that hold the drywall or boards in one room are used in the adjacent room for the same purpose. Yet, by having different studs hold the drywall for the source room than the studs which hold the drywall in the receiving room, the STC will be increased, while the TL will be particularly increased at higher frequencies. In addition, using studs positioned at staggered distances from one another, rather than studs with perfectly periodic spacing, will generally improve the STC rating of a partition, all other characteristics being equal.

Figure 68. At left, staggered studs that are not shared between one room to the other. Sound absorbing fiberglass
battings fill the cavities, prior to being covered over with drywall or boards. At right, polystyrene and fiberglass
insulations are both used for thermal and acoustic purposes and are then covered over with drywall. Note that the
studs do not extend to the back concrete wall, which enhances TL. Image sources
http://images.meredith.com/diy/images/2008/12/p SCTC 100 04.jpg, http://www.qisiq.com/


7.6.1.4 Compliant connections


Connecting the drywall or boards to the studs in ways that enhance compliance provides a significant improvement in the sound insulation capabilities of the resulting partition. This can be achieved by wider spacing of studs or by mounting a thin layer of elastomeric or fiber-based insulation between the drywall/board and the stud.

7.6.1.5 Filling of cavities between studs with sound absorbing materials


Filling the cavities in between studs with sound absorbing materials will, in general, uniformly enhance the
one-third octave band TL measures at all frequencies. This method is the common practice in order to meet
standards of thermal insulation, but it is a concurrent way to enhance the STC rating. In general, fiberglass
battings are used, such as the standard Owens-Corning fiberglass products readily seen in home
improvement supplies stores. For doors, thin walls, and garages, foam-based materials may also be used
such as polystyrene foam sheets, which are also found at home improvement supplies stores.

7.7 Impact insulation class, IIC

Impact insulation class, IIC, is a single-number metric to classify the success of floor assemblies and
building construction methods to suppress the transmission of impact sounds to ceilings below. The
common sounds of relevance are footsteps. The IIC is computed according to procedures outlined in the
standard ASTM E492. Importantly, this procedure assumes that the floor-ceiling path of sound transmission
is the most important between the rooms that are vertically adjacent.

The procedure to determine the IIC involves the use of a tapping machine, Figure 69. The machine is similar
to the crankshaft and piston assembly found in internal combustion engines, but here the pistons punch the
floor in a reciprocating order. The tapping machine is strategically placed over the floor in the source room
while microphones in the receiving room measure diffuse field sound levels. Since impacts are transient
events, the ASTM E492 dictates special means by which the transient pressure measurements are converted
to impact-relevant SPL measures. The IIC value is then determined using a comparison strategy similar to
that outlined for STC in Sec. 7.6.

Unlike STC which characterizes diffuse field-type of sounds as transmitted from one room to another
through a partition, impact noise is a strongly subjective evaluation and some people are far more sensitive
to impact sounds than other people. Thus, the IIC rating may not be as meaningful to one individual as to
another.

Typically, a floor with no particular impact insulating efforts taken during the design and construction
phases will result in an IIC of around IIC 30. This is due to the simple fact that floors need to be heavy in
order to support loads and mass typically provides a useful barrier to stop impact noise transmission. On
the other hand, IIC 70 would be a target sought in high-end building construction for adjacent apartments,
condos, or hotel rooms. The range of IIC around IIC 50 is where subjective evaluation of the noises varies
greatly among individuals, where some people indicate the impact noise of footsteps is intolerable while
others do not notice the punctuated sounds.


Figure 69. Tapping machine used for IIC evaluation. Image source https://www.sp.se

7.7.1 Methods to enhance IIC

7.7.1.1 Planning the usage of spaces in advance of occupation


The consideration of how a space will be used is a prime way to avoid remedial efforts taken to address
impact noise concerns in the future. For instance, it is unwise to design a building with a hardwood floor
hallway that passes over bedroom space in the floor below. Otherwise, the IIC between the hallway and the
bedroom will likely need to be very large, and very costly to realize, in order to achieve acceptable sound
levels in the receiving room below the hallway. As acoustical consultants, the careful attention to the
planned use of space is the most effective focus to provide in early stages of structural design and
development and can greatly alleviate future causes for concern. Note that this method does not directly
enhance IIC but it eliminates reasons that could later motivate high IIC ratings for floor and building
constructions.

7.7.1.2 Mass
It does not matter whether the energy is concentrated in short durations of time or is induced under diffuse
field conditions: either way, increasing the mass of a partition between two spaces enhances the TL in
important frequency bandwidths and will correspondingly improve the floor IIC rating. On the other hand,
very low frequency "thud" sounds may not be as well suppressed by mass alone since these low frequency
sounds may be closer to the floor panel resonant frequency regime where TL values drop. Thus, the increase
of surface density between a floor and an adjacent ceiling is not as uniformly effective a solution for increasing IIC ratings as it is for increasing STC ratings, particularly when the impact noise is due to a "heavy-footed" individual on the upper floor who walks around causing low frequency thud sounds in the floor below.


7.7.1.3 Damping
Impact energy is best tackled before it diffuses into structures, transports as waves, and radiates as sound
at other locations. To this end, increased damping at the points of impact is an established means of
improving IIC ratings in architectural acoustics [34]. This includes using carpet with undercarpet, providing
more resilient connections between the floor panels and joists in the upper level room, and providing more
resilient connections between the joists and drywall ceiling for the lower level room. Such resilient
connections are typically a thick underlayment material such as a fiber-based batting that becomes
compressed between the structural members that are attached together.

7.7.1.4 Structural redundancy


Staggered studs that uniquely support the floor of the upper level room and the ceiling of the lower level
room are a strategic way to improve impact noise insulation characteristics of the construction. This strategy
is exemplified in Figure 70 at right. Since there is no direct path of impulsive energy transfer from the upper
to lower floors, such as by the wooden joists, this design strategy greatly increases IIC.

Figure 70. At left, fiberglass insulation installed to fill space in between floor of upper level and ceiling of lower level
to enhance IIC. At right, damping layers below the upper level floor reduce impact noise, while insulation and
staggered studs increase IIC in the cavity between vertically adjacent rooms. Image sources http://energy.gov/

7.7.1.5 Sound absorbing materials in the cavity from floor to adjacent ceiling
Similar to enhancing STC by the use of sound absorbing materials in cavities between rooms, filling the
volume between the ceiling of a lower level room and the floor of an upper level room increases IIC,
particularly if the impact noises are due to hard-heeled footwear walking on hardwood floors that produce
higher frequency "click" sounds.

7.8 Flanking

As the example in Sec. 7.4 showed, where a small gap between the floor and a door with a high TL significantly diminished the acoustic insulation provided by the door at the threshold, the weakest link of the overall sound transmission path often governs the sound insulation characteristics between rooms. Sound transmission that surmounts great efforts taken to increase sound insulation between rooms is referred to as flanking. In other words, the term flanking refers to the weak links in the sound transmission path that permit easy flow of acoustic energy.

Direct path openings are a good example of flanking, such as gaps below doors. On the other hand, there
are numerous other flanking paths that challenge one to achieve large sound insulation between rooms [34].

• Partitions that do not extend the full height from floor to ceiling, but that terminate in height at
suspended ceilings or other artificial ceilings. Suspended ceilings cannot perfectly absorb sound,
so any sound energy that penetrates the ceilings will return to adjacent rooms through the ceiling
• Shared floors between adjacent rooms. A flanking path exists whereby sound may enter the floor
in one room and radiate into an adjacent room that shares the same floor. A remedy to this concern
is for adjacent rooms to have separate floor boards.
• Spaces where floors meet walls. To prevent flanking through floors, architectural features like
"crown molding" or raised floor boards edges are used to prevent the opportunity for sound to
radiate between the separate floor boards of adjacent rooms as illustrated in Figure 71
• Wall penetrations for electrical outlets. Outlets are supported by plastic or metallic boxes that are
notorious for easy sound transmission since these components shorten the sound transmission path and have lower TL than the wall materials. Filling the cavities behind such outlet boxes with extra sound absorbing materials may alleviate the concern. Placing electrical outlet boxes in a staggered
configuration between adjacent rooms also helps to remove more substantial sound transmission
concerns that arise when boxes are positioned back-to-back
• Short-length or direct HVAC ducts that connect rooms. You may have directly heard noise from
adjacent rooms due to short-length HVAC ducts that are shared. While this method of ducting is
efficient in terms of energy required to circulate air, a more maze-like ducting strategy is preferred
to enhance acoustical insulation between the rooms
• Doors and windows. The thresholds between rooms supporting doors and windows significantly reduce TL, often via small gaps as well as the lower overall TL of the doors and windows themselves.
Double-pane glass in windows, thick, heavy doors, and significant caulking around doors and
windows are typical means to reduce flanking concerns and keep the TL values of the overall
partition as high as possible.

In automotive engineering applications, particularly as addressed by noise, vibration, and harshness (NVH) engineering teams, flanking paths for interior cabin noise in automobiles include aspiration through door seals, acoustical leaks through the firewall and dash from the engine compartment, and sound radiation through window glass [35]. Strategies to alleviate these concerns include double-bulb and redundant door seals, narrow firewall openings and good cable management through the passageway, and the use of glass that is laminated with a transparent viscoelastic layer in the middle that provides broadband damping.


Figure 71. Top left, these partitions do not extend to the ceiling, nor even above the suspended ceiling. This
will not provide an effective acoustic seal between the spaces on either side of the partitions. Top right, caulk
used to better seal windows. Generally caulk is used for thermal reasons but the same sealing procedure
suppresses sound transmission through an important flanking path. At bottom, the floors extend below
"crown molding" at the floor-to-wall interface, and then are sealed with a sealant (caulk). This greatly
suppresses sound transmission by the floor-to-wall flanking path. Image sources http://roomdividers.org/
http://www.steelconstruction.info/ http://www.homeadditionplus.com/


8 Applications of acoustics: noise control and psychoacoustics

A primary motivation to study acoustics, the physics of sound, is that the subject is of considerable importance to human hearing. As a result, it is desirable to integrate the knowledge learned in this course into contexts where human hearing is of the utmost importance. This section explores two human hearing-relevant applications: the control of "noise" and psychological factors of hearing.

8.1 Engineering noise control

Engineering noise control refers to efforts to obtain an acceptable noise environment for a particular
observation point or receiver, by designing or controlling the noise source, the transmission path, the
receiver, or all three [12]. Domestic and international government regulations set standards for tolerable
noise exposure levels in certain public and private areas, for instance for the permissible sound levels of
residential areas which vary according to the time of the day. The Occupational Safety and Health
Administration (OSHA) in the USA sets the 1910.95(a) standard for "occupational noise exposure" levels
permissible for workers subjected to high sound pressure levels of noise during the work day. Sound levels
in excess of these limits are grounds for fines, bad publicity, and remedial actions.

8.1.1 Source-path-receiver methodology to engineering noise control


A key concept in engineering noise control is the means to tackle the problem at the source, along the
transmission path, or at the receiver. The priority of efforts taken to control the noise is in the same order:
address the source first, then modify the path if the source is insufficiently tailored, and finally give attention
to control the noise at the receiver as a last stage attempt.

It is desirable to address noise at the source since the directly radiated sound is the ultimate origin of both the direct and diffuse sound fields, the latter being a result of the path. Stopping sound along its path is generally a remedial effort, and modifying the path is only useful if the source noise cannot be adequately reduced. It is often unacceptable in terms of the user experience to remedy a noise problem at the receiving end, because there are few ways to control noise at the receiving end except by shielding the observation point. For humans, the observation points are the ears, and this implies we are asking people to wear earmuffs or ear plugs!

As one example, "hospital noise" generated by operating pumps in medical equipment could be alleviated
by designing quieter reciprocating compressors in the pump (tackling noise at the source). The noise can
be suppressed by mounting the pump onto the product equipment with highly dissipative materials and by
enclosing the pump in an acoustically sealed package (tackling noise along the path). The noise levels can
be reduced by requesting that the hospital guests wear ear plugs and other hearing protection (addressing
noise at the receiver). There are many instances where addressing the "noise problem" at the receiving end
is considered to be an inadequate solution, such as in this hospital example. Better engineering design
decisions early in product development processes can ensure that such sub-par remedial actions are not
needed.


Example: Consider the noise created by the party occurring one floor below your downtown apartment.
What are the source-path-receiver aspects of this problem? How could you address each of them towards
controlling the noise so as to yield a quieter acoustic environment for you?

One Answer: The source originates as various sounds from the apartment. Addressing the noise at the
source requires you or another to request the party be "toned down" or by calling the police if the hour is
late and the sounds are "excessive" since (almost) all localities have public disturbance laws in effect after
around 10:00pm. The transmission path is the apartment building structure, mainly through building stud
and drywall vibrations and air gaps between your apartment and the exterior corridor. Addressing the noise
via the path would require building modifications, or by ensuring that your apartment has strong air-tight
seals around doors and at all corners of rooms. The receiver is the acoustic environment of "you", whether
that indicates you personally or the specific room in which you are situated. Addressing the noise in the
receiving room could be accomplished by placing more acoustically absorptive materials in the room
(pillows, blankets on hard furniture, and rugs on hardwood floors). Addressing the noise for "you"
personally requires hearing protection to be worn.

8.1.2 Noise exposure


As exemplified in Table 6, the range of SPL for common acoustic sources is significant. This indicates that
humans are exposed to a tremendous breadth of sound pressure levels in the course of day-to-day activities.
Excessive exposure to acoustic energy harms the human hearing sense. This section describes this
characteristic in the context of formulating and implementing noise control strategies for a variety of
practical applications.

While the term noise-induced hearing loss (NIHL) is self-explanatory, its connotation that only noise can
cause hearing loss can result in misunderstanding. For instance, one may enjoy attending concerts where
the sound levels are far beyond what the human ear can withstand without harm. Nevertheless, the
term "noise control" is so widely used that all unwanted or harmful sounds tend to be referred to
collectively as "noise", regardless of desirability.

The question arises: does the subjective equal-loudness contour, Figure 54, reflect damage-relevant
sensitivities of the human ear? Yes. Based on extensive data sets, the weighted [dB] scales, like [dBA], are
formulated to accommodate the human hearing sensitivities [14]. Although humans can lose the hearing sense
via large short-term shocks and concussions, a more widespread form of NIHL is through persistent exposure
to high levels of sound. Such noise exposure leads to hair cell death via fatigue failure [10]. Hair cells do
not regenerate like other cells, such as skin cells. Because hair cell death is analogous to fatigue failure,
it follows that the hearing sense deteriorates with increasing age, which is well known.

There are two types of regulatory standards to govern noise exposure. These represent the various noise
control criteria that designers, developers, and proprietors must meet.

Occupational noise regulations are set in the United States under the Occupational Safety and Health (OSH)
Act, administered by OSHA, which requires employers to maintain certain permissible SPL [dBA] in work
environments.


The permissible levels set by OSHA are chosen to balance inevitable hearing loss against a satisfactory
capability to hear in day-to-day activities, even for workers exposed to significant noise throughout 8-hour
work days for the duration of a career. This is effectively an attempt to regulate hearing loss within the
framework of economical business practices and the adoption of modern technologies that may lower noise
levels for workers.

Community noise regulations come about as owners of land and homes complain that the value of property
is diminished due to noise exposure. These are not always codified regulations. Rather, community noise
control practices are often guided by trained acoustical consultants who refer to available standards and
textbooks that list acceptable levels of community noise for different environments and times. The large
concrete noise barriers placed between new highways and homes are common evidence that complaints,
motivated by sufficient data, warranted municipal actions in the spirit of community noise regulations.

Ultimately, in the United States the Environmental Protection Agency (EPA) became the administrator and
formulator of community noise regulations, with involvement from other government agencies and bureaus.
In the course of time, other agencies including federal and municipal groups have set community noise
regulations. As a result, the enforcement of such regulations is convoluted. Fortunately, accepted standards
are progressively being formulated with the help of the ASA and Institute of Noise Control Engineering
(INCE). Acoustical consultants are often called upon for expertise and recommendations to effectively
remedy community noise concerns.

8.1.3 Development for and enforcement of noise control criteria


Noise control criteria cover the whole range of concerns regarding noise exposure. The following serves as
a good example of the diverse way in which the criteria are handled and enforced, using a continuing
example of noise from automobiles and in residences.

• Automobile manufacturers must produce vehicles that do not exceed certain noise levels when
heard certain distances away from the vehicle, at least at the time that new vehicles are sold.
• Building construction must achieve certain ratings of transmission loss from outside noise levels,
such as the noise caused by passing vehicles.
• Should noise levels increase over time due to changing circumstances, such that the resulting
noise level exceeds the regulated criterion, then individuals exposed to the noise can seek
remedial actions from regulatory or government agencies and bodies. For instance, the highway traffic
noise near a dwelling may grow as years go by, leading to excessive noise in the dwelling.
Residents are then able to make complaints heard by the governing bodies and stimulate the
appropriate authorities to build highway noise barriers or take other countermeasures that return
the indoor noise levels to the standards.

It is a full circle of activity! Due to their significant number, only a few noise control criteria will be
elaborated upon in this course. As one engages in an activity that involves humans and radiated sound
fields, one can expect that regulations govern the noise exposure of directly or indirectly-received sounds.
One should seek out the locality or other authorities for the standards appropriate to the circumstances.
Resources such as OSHA https://www.osha.gov/ and texts like [13] are good starts, while colleagues in one's
industry may already know the routines regarding whom to contact for further information. A web or Google
maps search of "acoustical consultant" in the geographic region of interest will likely find someone with
knowledge of the relevant noise control regulations.

In general, to meet noise control criteria, one first needs knowledge of the relevant regulations to serve
as targets to meet. Then, one develops the product, designs the building, and so on, in a way that meets
this target. If the criteria refer to retroactive measures, then one makes decisions accordingly (e.g.,
constructing a highway noise barrier). For building construction and other domestic structure development,
tables of material parameters (like field incidence transmission loss in octave bands, Table 24) are
available. For new product development, such as for vehicles, corporations rely on high-fidelity models to
understand the influences of product design on the resulting radiated or received sound fields.

Example: The US Air Force Environmental Planning Bulletin 125 states that an office must have at least
a 30 [dBA] overall reduction in SPL if the outdoor level is 70 [dBA]. Using the building structures and
materials and reported TL values provided in Table 24, identify the lightest weight masonry wall type that
meets the criterion. If values are not reported in the table for a given octave band, assume that the choice is
insufficient for the required purpose.

Answer: To make the assessment, one must convert the values of TL in [dB] as provided in Table 24 to
[dBA]. These [dB] to [dBA] conversions are given in Table 15. Then, using (5.6.6), we compute the overall
reduction in SPL across the octave bands. Equation (5.6.6) is repeated here for convenience
SPL = 10 log10 Σ_(i=1)^N 10^(SPL_i/10)

All of the single masonry walls (that have complete data sets) are adequate to provide 30 [dBA] of
attenuation. The overall TL ranges from a minimum of 44.99 [dBA] (solid breeze or clinker blocks,
unplastered) to a maximum of 63.6 [dBA] (hollow cinder concrete blocks, painted). The lightest weight
solution is the one with the smallest values of area density and thickness, which is "solid breeze or
clinker blocks, unplastered". It should not be surprising that the lightest weight solution also corresponds
to the solution with the smallest TL, based on our understanding of the mass law (7.5.7) performance
variation for panels. Also, in general, the lighter weight panel constructions are the least expensive to
fabricate and install.
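
As a minimal computational sketch of this procedure in MATLAB (assuming, for illustration, a flat 70 [dB]
outdoor octave-band spectrum and hypothetical TL values rather than the actual entries of Table 24):

% Sketch: overall A-weighted SPL reduction given octave-band TL values.
% The TL values here are hypothetical placeholders, not those of Table 24.
TL = [30 31 27 38 44 33]; % TL [dB] in the 125 to 4000 [Hz] octave bands
A = [-16.1 -8.6 -3.2 0 1.2 1.0]; % A-weighting corrections [dB] per Table 15
SPL_out = 70*ones(1,6); % assumed flat outdoor octave-band spectrum [dB]
L_out = 10*log10(sum(10.^((SPL_out+A)/10))); % overall outdoor level [dBA]
L_in = 10*log10(sum(10.^((SPL_out-TL+A)/10))); % overall indoor level [dBA]
reduction = L_out - L_in % overall attenuation [dBA], compared to the 30 [dBA] target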

8.1.4 Vehicle noise


Noise radiated from vehicles is the consequence of (1) tire-road interaction, (2) the exhaust system, (3) the
engine and drivetrain, and (4) aerodynamic effects of air flow over the vehicle body. The tire-road
interaction noise is caused by the contact and release of the rolling tire on a surface and, to a smaller
extent, by the aerodynamic noise associated with turbulence around the tire. The exhaust system noise is
related directly to the periodic combustion of internal combustion engine cylinder firing, and this noise is
generally reduced along the transmission path to the tail pipes via the catalytic converter and muffler. The
engine and drivetrain noises are associated with direct sound transmission from the reciprocating equipment.
The aerodynamic effects, including higher frequency sounds such as whistling that can be heard amid
turbulent air flow, vortices, and so on, are generated by the displacement of air around a moving vehicle.
Above very slow speeds, the relative importance of these sound sources to the overall noise is indicated by
the numbering above (thus, tire-road interaction predominates as the source of noise). More recently, with
the emergence of hybrid and electric vehicles, higher pitched sounds associated with the power electronics
and switching are heard by observers external to the vehicle, although these sound levels are generally much
less than the noises associated with vehicles that have internal combustion engines.

In the United States, new vehicles sold from dealerships that are anticipated to be driven on public roads
must have traveling, radiated noise levels of SPL<80 [dBA]. The Federal Highway Administration method
to quantify this relevant noise level is shown in Figure 72. The vehicle travels on a hard surface, while a
microphone is 15 [m] away. The microphone is directed normal to the line of vehicle travel. The
microphone must be 1.5 [m] above the hard road surface. For the time duration in which the vehicle travels
60 [m] and is closest to the microphone, the SPL is averaged.

For passenger cars and light trucks at cruising speeds, this measurement at 15 [m] distant from the noise
source path is seen to follow the trend SPL = 71 + 32 log10(v/88) [dBA], where v is the vehicle straight-line
speed in [km/h]. This is effectively a tire-road interaction noise measurement.

For trucks, especially those with diesel engines, engine noise substantially contributes to the overall noise
levels, even at faster vehicle speeds. The corresponding sound pressure level for trucks is SPL = 84 [dBA]
for v ≤ 48 [km/h] and SPL = 88 + 20 log10(v/88) [dBA] for v > 48 [km/h].


Figure 72. Federal Highway Administration measurement method for overall traveling vehicle noise level
http://www.fhwa.dot.gov/environment/noise/measurement/mhrn05.cfm.

Example: Determine the approximate, overall SPL [dBA] heard by a receiver 100 [m] from a passing
passenger car at 70 [mi/h].

Answer: First, convert 70 [mi/h] = 112.65 [km/h]. Then, SPL = 74.4 [dBA] at 15 [m]. To find the SPL
at 100 [m], we need to take into account the decay of RMS pressure at the greater distance.

p_rms,15m = 0.1053 [Pa]

Then, p_rms,100m = p_rms,15m (15/100) = 0.01580 [Pa], leading to an overall SPL = 58.0 [dBA]. As a check on
this computation, we recall that for every doubling of distance the SPL decreases by 6 [dB]. From 15 to 100
[m] there are more than 2, but fewer than 3, doublings of distance. Thus, the SPL at 100 [m] should be
reduced by more than 12 [dB] but less than 18 [dB] from the SPL at 15 [m]. That gives a ballpark SPL of
between 56.4 and 62.4 [dBA]. As expected, our computed value of 58.0 [dBA] lies within this range.
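
A brief MATLAB sketch of this computation (the pressure-ratio step is equivalent to subtracting
20 log10(100/15) from the 15 [m] level):

% Sketch: passenger car pass-by level at 15 m, corrected to 100 m.
v = 70*1.60934; % 70 [mi/h] expressed in [km/h]
SPL15 = 71 + 32*log10(v/88); % FHWA trend at 15 [m], ~74.4 [dBA]
p15 = 20e-6*10^(SPL15/20); % RMS pressure at 15 [m], ~0.105 [Pa]
p100 = p15*(15/100); % spherical spreading decay out to 100 [m]
SPL100 = 20*log10(p100/20e-6) % ~58 [dBA]; equals SPL15 + 20*log10(15/100)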

8.1.5 Speech interference


Noise decreases the intelligibility of speech. The interference with communication caused by noise results in
human frustration, annoyance, and irritation, along with other economic and social impacts. When oral
communication is inhibited, the efficiency of an employee is reduced and errors can result by way of
miscommunication.


The speech interference level (SIL) is a common means to assess the level of noise that inhibits speech
communication. The SIL is also used to anticipate how much one must raise a voice to be intelligibly heard.
Considering the SPL [dB] in the 500, 1000, 2000, and 4000 [Hz] octave bands, the SIL is computed from

SIL = (1/4)(SPL_500 + SPL_1k + SPL_2k + SPL_4k) (8.1.5.1)

An approximate SIL may be computed from the A-weighted value of the same average

SIL = SPLA - 7 [dB] (8.1.5.2)

Then, for intelligible communication, the range of permissible SIL that are recommended are

SIL ≤ -20 log10 r + 58 [dB], for r > 8 [m] (8.1.5.3)

SIL ≤ -29 log10 r + 66 [dB], for r ≤ 8 [m] (8.1.5.4)

for male voices. For female voices, the SIL values must be reduced by 5 [dB].

The voice level (VL) [dBA] is measured at a distance of 1 [m] in front of a speaker using a "slow", A-
weighted recording of speech. Considering the background noise and associated SIL, the VL required to ensure
"just-reliable" communication is determined according to equations (8.1.5.5) and (8.1.5.6). "Just-reliable"
is considered to be that 1 of 20 sentences becomes unintelligible in consequence of the interfering
noise [1].

VL ≥ SIL + 20 log10 r + 6 [dBA], for r > 8 [m] (8.1.5.5)

VL ≥ (4/3)(SIL + 20 log10 r) - 13 [dBA], for r ≤ 8 [m] (8.1.5.6)

Note that the SIL in (8.1.5.1) through (8.1.5.4) uses [dB] while the VL is a reported [dBA] value.

Finally, human subjectivity means that voice levels span a range, as shown in Table 18. Starting from the
measure of noise prescribed in (8.1.5.2), one may determine whether the occupants of an environment will be
able to speak in a normal voice and be intelligibly heard or whether, alternatively, a raised voice is
required to get the message across.
Table 18. Types of voice and the associated voice levels VL [1].

Speech type: Quiet conversational voice, Normal voice, Raised voice, Very loud voice, Shout
Voice level VL [dBA]: 57, 64, 70, 77, 83

Example: To provide just-reliable communication for men and women in a workplace at a normal voice
level with r =2 [m], what is the permissible noise level in the 500 to 4000 [Hz] octave bands?

Answer: Using Table 18 and the formula, we find that for male voices


64 = (4/3)(SIL + 20 log10 2) - 13, so SIL = 51.7 [dB]

For female voices, the corresponding SIL = 46.7 [dB], having reduced the male-voice SIL by 5 [dB]. Then,
via (8.1.5.2), the SPLA of background noise permissible for male voice intelligibility is found to be 58.7
[dBA], while for female voices the permissible background noise SPLA is 53.7 [dBA]. For comparison, in
Scott Lab the offices have background noise levels of 40 [dBA] or less, while some labs have background
noise of 60 [dBA] or greater (primarily due to noise related to HVAC ducting).
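
A minimal MATLAB sketch of this answer, inverting (8.1.5.6) and applying (8.1.5.2) (the values follow the
example above):

% Sketch: permissible SIL and background SPLA for just-reliable speech.
VL = 64; % normal voice level [dBA], Table 18
r = 2; % talker-to-listener distance [m]; r <= 8, so invert (8.1.5.6)
SIL_male = (3/4)*(VL + 13) - 20*log10(r); % ~51.7 [dB]
SIL_female = SIL_male - 5; % ~46.7 [dB], 5 [dB] reduction for female voices
SPLA_male = SIL_male + 7 % permissible background noise, ~58.7 [dBA]
SPLA_female = SIL_female + 7 % ~53.7 [dBA]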

8.1.6 Noise criterion for rooms


The Noise Criterion (NC) ratings were formulated in 1957 by Leo L. Beranek [17]. The NC ratings establish
guidelines for acceptable background noise levels in unoccupied rooms when all mechanical systems are
operating. The NC ratings were ultimately written into the standard ANSI-S12.2. Following the
development of more efficient HVAC systems that projected considerable low and high frequency sounds,
the standard was revised and the modern version is ANSI/ASA-S12.2-2008. The NC rating curves are
shown in Figure 73.

To compute the NC rating for a given space, the following steps are undertaken.

1) Sound pressure level is measured in the room at either standing or sitting heights for adults. The
measurement microphone must be distant from strongly reflecting surfaces.
2) The "slow" (at least one-second average) octave band SPL values in [dB] are determined.
3) The SIL is computed from the definition SIL = (1/4)(SPL_500 + SPL_1k + SPL_2k + SPL_4k), per (8.1.5.1).
a) SIL-based result. Consider the NC-<SIL> curve of Figure 73. If the measured spectrum does not
exceed the NC-<SIL> curve in any octave band, then the NC rating of the space is the SIL value.
i) If the measured spectrum exceeds the NC-<SIL> curve in one or more octave bands, then the
"tangency" method is used to determine the NC rating.
b) Tangency method. The measured octave-band spectrum of the space is plotted over the NC curves.
The highest-valued NC curve that the measured spectrum exceeds (designated M) is identified.
The extent to which the measured SPL exceeds curve M is linearly interpolated between curve M
and the next curve up in NC rating (designated P). This linear interpolation of the
excess (above M and below P) is then added to the NC rating of M. The NC rating of the room is
therefore determined as NC-(M + linear interpolation of excess [dB]).

An example is provided in Figure 74 to illustrate the method.
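
A minimal MATLAB sketch of the tangency method follows, using the NC matrix defined in the code of Table 19
below; the measured spectrum here is a hypothetical placeholder, and the interpolation step reflects one
reading of the standard's procedure:

% Sketch of the tangency method, assuming the NC matrix from Table 19
% (below) is in the workspace, with rows in descending rating order, and
% assuming the measured spectrum does not exceed the topmost NC-70 curve.
meas = [60 55 50 48 45 41 38 36 33 30]; % hypothetical measured octave-band SPL [dB]
iM = find(any(meas > NC(:,2:end), 2), 1, 'first'); % highest NC curve exceeded (M)
[exc, band] = max(meas - NC(iM,2:end)); % largest excess above curve M and its band
gap = NC(iM-1,band+1) - NC(iM,band+1); % [dB] between curves M and P in that band
rating = NC(iM,1) + 5*exc/gap % NC rating: M plus the linearly interpolated excess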


[Figure 73 plot: octave-band sound pressure levels [dB] versus octave band frequencies [Hz] on a logarithmic
axis, with curves labeled NC-15 through NC-70.]

Figure 73. Noise criterion (NC) curves are reproduced from the tabulated data in ANSI/ASA-S12.2-2008

Table 19. Code used to generate Figure 73

% ANSI/ASA-S12.2-2008 noise criterion curves. In the NC matrix, the first
% column is the rating; the remaining columns are the octave-band SPL [dB]
% ("slow" meter rating) at the center frequencies in octave_bands below.
NC=[70 90 90 84 79 75 72 71 70 68 68;
65 90 88 80 75 71 68 65 64 63 62;
60 90 85 77 71 66 63 60 59 58 57;
55 89 82 74 67 62 58 56 54 53 52;
50 87 79 71 64 58 54 51 49 48 47;
45 85 76 67 60 54 49 46 44 43 42;
40 84 74 64 56 50 44 41 39 38 37;
35 82 71 60 52 45 40 36 34 33 32;
30 81 68 57 48 41 35 32 29 28 27;
25 80 65 54 44 37 31 27 24 22 22;
20 79 63 50 40 33 26 22 20 17 16;
15 78 61 47 36 28 22 18 14 12 11];
octave_bands=[16 31.5 63 125 250 500 1000 2000 4000 8000]; % [Hz]
figure(1);
clf
plot(octave_bands,NC(:,2:end),'-o');
hold on
for iii=1:size(NC,1)
text(10e3,NC(iii,end),['NC-' num2str(NC(iii,1))]);
end
xlabel('octave band frequencies [Hz]');
ylabel('sound pressure levels, [dB]');
set(gca,'xscale','log');
xlim([10 20e3])
title('Noise Criterion (NC) curves')


Figure 74. Example of NC rating identification for two cases shown as red and blue data points of measured octave-band
SPL. The SIL for the room (red data points) with ultimate rating NC-56 is SIL=45 [dB], but the individual values of the red
SPL measurements exceed the NC-45 curve, so the tangency method is used to determine the final rating. The SIL for the
room (blue data points) with ultimate rating NC-37 is SIL=30 [dB], but the individual values of the blue SPL measurements
exceed the NC-30 curve, so the tangency method is used to determine the final rating.

Subsequently, the suitability of a given space and room is examined according to recommendations put
forward in the ANSI/ASA standard. A few of these recommendations are provided in Table 20.
Table 20. Recommendations for NC ratings from ANSI/ASA-S12.2-2008

Occupancy: Recommended NC range
Concert hall, opera house, recital hall: 15-18
Large auditoriums, large theaters, large churches: 20-25
Private residences, bedrooms: 25-30
Private residences, family rooms: 30-40
Classrooms <20,000 [ft3]: 25-30
Individual hotel room: 30-35
Small, private office in office building: 35-40
Executive suite office in office building: 30-35
Conference room: 25-30
Movie theater: 30-40
Restaurant: 40-45
Shops, mechanical garages: 50-60

8.1.7 Community reaction to noise


There are times when one needs to anticipate how a community will react to a new noise source in the
environment. For instance, a local government may need to anticipate how an upcoming construction
project near a residential area will influence the neighborhood's tolerance of the noise and activity.
Likewise, a contractor may need to anticipate how a renovation project within an office on the 11th floor
of a building will be perceived by occupants of the 10th and 12th floors during the heavy construction
phase. In these examples, the individuals who make the new noises are encouraged to consider the potential
impact of the noise on the nearby community, and how likely it is that the community will respond adversely
to the unwanted sounds.

The method to anticipate the community reaction to noise is as follows. This is drawn from the
recommendation of ISO R1996 (1971).

• Determine the [dBA] of the associated sound pressure level that impacts the community. In other
words, this is the SPL [dBA] that is heard by the receivers. In the example above, the contractor
would take a microphone measurement from locations within the 10th and 12th floors to acquire
this information.
• Add/subtract the [dBA] corrections from Table 21 according to the respective characteristics of the
sound involved.
• The community noise reaction is then anticipated to be as shown in Table 22.
Table 21. Corrections added to the overall SPL [dBA] of the noise that affects the community.

Noise characteristics Correction [dBA]


Pure tone present +5
Intermittent, impulsive +5
Noise only during working hours -5
Noise from 6 pm to 10 pm +5
Noise from 10 pm to 6 am +10
Total duration of noise each day
Continuous 0
Less than 30 minutes -5
Less than 10 minutes -10
Less than 5 minutes -15
Less than 1 minute -20
Less than 15 seconds -25
Neighborhood
Quiet suburban +5
Suburban 0
Residential urban -5
Urban near some industry -10
Heavy industry -15


Table 22. Anticipated community response to noise.

Corrected SPLA that affects the community [dBA] Anticipated community reaction
< 45 No reaction
45 < SPLA < 55 Sporadic complaints occur
50 < SPLA < 60 Widespread complaints develop
55 < SPLA < 65 Threats of community action
SPLA > 65 Vigorous community action occurs

An alternative, conservative measure was proposed by Schultz [36]. Based on a significant number of surveys,
the following relationship was developed for the "percent highly annoyed" community members when
subjected to a given overall SPLA.

%HA = 0.036 SPLA^2 - 3.27 SPLA + 79 (8.1.7.1)

This relationship was found to be accurate, with respect to the surveys, to within about +/- 5 [dBA] of the
noise level.
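
As an illustration of (8.1.7.1) in MATLAB (the 65 [dBA] input is an arbitrary example value):

% Sketch: Schultz estimate of percent highly annoyed at a given level.
SPLA = 65; % example overall community noise level [dBA]
pctHA = 0.036*SPLA^2 - 3.27*SPLA + 79 % ~18.6 percent highly annoyed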

Example: You enjoy watching The Matrix Trilogy. Yet, you are aware that its intense soundtrack filled
with impulsive sounds may pose trouble for your planned midnight trilogy viewing with friends, since you
live in an apartment complex. Consider that the sound is loudest for only about 10 minutes of any given film
and that there are many impulses in the audio and soundtracks of the films. You aim to limit the likely
community response to only a sporadic chance of complaints, in the hope that no neighbor is frustrated
enough to call the police. What is the loudest mean sound pressure level (A-weighted) at which you can
permit the film to be played back by your high-fidelity surround-sound system?

Answer: According to the desired likely community reaction in Table 22, you need to limit the SPLA to 55
[dBA], as heard in the neighboring apartments. We use Table 21 to account for the corrections to this limit.
The sounds will be impulsive, so we subtract 5 [dBA] from the 55 [dBA]. The sounds will occur around
midnight: another 10 [dBA] is removed. Yet, the noise will only occur for at most 10-minute-long durations,
so we can add 10 [dBA] back. There is no particular neighborhood correction to be applied for a
closely-packed apartment complex. Thus, you can turn up your Matrix viewing to an overall SPLA of
approximately 50 [dBA].
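
The bookkeeping of the corrections can be summarized in a few lines of MATLAB (signs follow Table 21;
positive corrections tighten the limit):

% Sketch: working backward from the target community reaction threshold.
limit = 55; % sporadic complaints threshold [dBA], Table 22
corrections = [+5 +10 -10]; % impulsive, 10 pm to 6 am, duration < 10 minutes
SPLA_max = limit - sum(corrections) % permissible playback level, 50 [dBA]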

8.1.8 NIHL and occupational noise


OSHA prescribes that companies in the U.S. must conform to the noise exposure standards provided in
Table 23. These are the noise exposure levels and time durations permissible for workers in the normal
undertaking of employment. If the standards are not met, enforcement proceeds to ensure that corrective
actions are undertaken to achieve compliance.

The standards do not prevent NIHL. The levels are set to prevent NIHL from occurring for 85% of
employees within the important frequency range of 250 to 4000 [Hz] that involves spoken communication,
and thus worker efficiency. In effect, the OSHA standards are a regulation of hearing loss among the U.S.
workforce.


Table 23. OSHA regulations for occupational noise. From OSH Act of 1970.

SPLA, slow measurement [dBA] Permissible daily exposure duration [hours]
90 8
92 6
95 4
97 3
100 2
102 1.5
105 1
110 0.5
115 Less than 0.25

Example: Imagine that you operate a firm that directs building renovations and are subject to OSHA
regulations. Your employees regularly use power tools that expose them to 95 [dBA] for approximately 3
hours per day. For a new contract, the employees will work in an outdoor urban setting, where there will
be baseline traffic noise of 75 [dBA] throughout the 8-hour working day, plus a fluctuation above this
level of 10 sin(πt/4) [dBA] to account for traffic sound fluctuations during the work day, t = [0,8] [hour].
How loud must the baseline traffic noise be for your employees to be subjected to a noise level that
exceeds the OSHA regulation during the 3-hour period of peak traffic noise?

Answer: As a worst-case scenario, we presume that the employees are using the power tools during the
period of time at which the traffic noise is within the daily peak fluctuation. This problem may then be
solved in reverse by considering the 3-hour exposure criterion given in Table 23 of less than 97 [dBA]. If
we subtract the 95 [dBA] associated with the power tools, using the standard logarithmic summation methods
for [dB] values, we find that another noise source must not exceed 92.7 [dBA].

97 = 10 log10[10^(95/10) + 10^(SPL_traffic/10)] → SPL_traffic = 92.7 [dBA]

The 92.7 [dBA] is the permissible average SPL of the traffic noise in the 3-hour window where the
fluctuation of the traffic noise is greatest. In this period, from 0.5 to 3.5 working hours, the fluctuation
averages an additional 7.8 [dBA] above the baseline. Thus, subtracting this amount from the 92.7 [dBA]
gives the maximum baseline traffic noise such that the noise exposure for the 3-hour duration just meets
the OSHA regulation: 92.7 - 7.8 = 84.9 [dBA]. If your workers are exposed to baseline traffic noise greater
than 84.9 [dBA], then with the combination of the power tools and the hourly fluctuation of traffic noise
all together, the OSHA regulation will be violated and your workers could legally take action against the
supervisor (you!).


[Figure: traffic noise level [dBA] versus working time, duration [hours], spanning 75 to 85 [dBA] over the
8-hour work day.]
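
A short MATLAB sketch of the two steps of this answer (logarithmic subtraction of levels, then the average
fluctuation over the peak window):

% Sketch: maximum baseline traffic level before the OSHA limit is violated.
tools = 95; % power tool exposure [dBA]
limit3h = 97; % 3-hour permissible level [dBA], Table 23
traffic = 10*log10(10^(limit3h/10) - 10^(tools/10)); % ~92.7 [dBA] allowable
t = linspace(0.5, 3.5, 1e4); % 3-hour window of peak fluctuation [hours]
avg_fluct = mean(10*sin(pi*t/4)); % ~7.8 [dBA] average above the baseline
baseline_max = traffic - avg_fluct % ~84.9 [dBA]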

8.1.9 Source-path-receiver methodology for noise control engineering


In the introduction of this course, Sec. 2.4, we emphasized that applications of noise control engineering
adopt a source-path-receiver methodology of characterizing, understanding, and then responding to noise
sources in the environment.

It is most desirable to stop noise at the source. It is then a remedial effort to stop noise travelling
along the path to the receiver. It is generally unacceptable, although sometimes necessary, to reduce noise
at the receiver or observation point.

Using the measurement techniques described in Sec. 6, Secs. 8.1.4 to 8.1.8 help us characterize the noise
source and the likelihood that it will become a community or occupational concern. Addressing the noisy
source is ordinarily an issue of mechanical or structural product or system development, supported by
advanced models of the system's structural acoustics to understand how the system interacts with the
acoustic fluid. For example, scientists and engineers at Michelin determine sound radiation characteristics
of new generations of tires rolling along various surfaces, which helps to inform tread design features
that may minimize radiated sound. Computational modeling and experiments are likely carried out in parallel
to characterize the interrelations between efficient tread design and noise radiation. Such efforts
effectively tackle noise at the source.

Secs. 5 and 7 help us understand common ways of characterizing and tailoring the relation between acoustic
sources and the receiver via the transmission path. For example, high acoustic intensity at two out-of-phase
sources is not necessarily disadvantageous for a receiver equidistant from them, due to the cancellation of
pressure waves at the observation point. Thus, the path is evaluated for ways in which noise may be reduced
at the receiver. Because a diffuse sound field cannot be remediated with phase cancellation methods, panels
that provide absorptive and sound-insulating properties are employed to suppress diffuse field sound
transmission and build-up.

Alleviating noise concerns at the receiver is the least desirable approach. On the other hand, advancements
have been made to design highly absorptive ear plugs, ear muffs, and similar devices as personal protective
equipment for humans. For engineering systems exposed to extreme noise levels that are not otherwise abated
in transmission, it is sometimes decided to directly apply foams or damping layers over the surface of the
vulnerable system to increase the dissipation of the acoustic energy delivered to the object. For instance,
in space launches the propulsion sound is so great that, even after several conversions from acoustic wave
to vibration to acoustic wave and so on, the sound energy inside of fairings is still great enough to damage
the engineered systems within. Thus, damping layers are occasionally added at the receiving point.
Viscoelastic materials like rubbers or elastomers are often used for such purposes.

To summarize, addressing noise control concerns is achieved by the following prioritized approach.

(1) Source: Stop or suppress noise emission at the source.
(2) Path: Inhibit noise transmission through the path, such as its passage through material layers.
(3) Receiver: Protect the receiver with last-ditch remedial measures, including hearing protection
equipment or other local shielding barriers.

8.2 Psychoacoustics

Humans hear with ears, but there is much more to "hearing" than merely the detection of acoustic pressure
waves. This section will cover important introductory topics of psychoacoustics including binaural hearing
and principal hearing phenomena. Psychoacoustics is a branch of acoustics related to psychological and
physiological responses associated with sounds. Thus, perception of sounds, in a broad interpretation, is the
focus of psychoacoustic studies. Those with greater interest in this field of study are encouraged to consider
works properly focused on psychoacoustics [7].

8.2.1 Binaural hearing


The two ears of humans provide a wealth of information because the ears are at different positions in space
relative to a single source of sound in the field. The use of two ears to characterize sounds in a field is
called binaural hearing, sometimes also stereophonic hearing. We perceive the location of a sound source in
terms of an azimuth angle φ, which is a rotation about the "z" axis (our height dimension) as shown in
Figure 75. The elevation at which the source exists in the field is denoted by the angle θ measured out of
the azimuthal plane.


Figure 75. At left, schematic of notation for elevation and azimuth angles of sound source with respect to receiver. At
middle, schematic to illustrate time-of-arrival for plane waves incident to human head. At right, schematic of inability to
resolve frequency by ITD for high frequencies.

In addition to visual cues, we identify the position of acoustic sources, termed source localization, using
several strategies. For sounds at varying locations in the azimuthal plane, Figure 75, we localize sources
using interaural time differences (ITD) and interaural level (or intensity) differences (ILD or IID).

8.2.1.1 Interaural time difference


Interaural time differences exist because the incoming wave from an acoustic source requires time to impinge
on one ear and then travel on to the other ear. Thus, the ITD is the time delay between the same phase of a
wave arriving at one ear and then at the other. Consider the middle schematic of Figure 75. Assume the human
head shown in the schematic is comparable to a rigid sphere with two ears oppositely positioned on the
sphere surface in the azimuthal plane. The angle-of-arrival is β. When the source is directly in front of
the receiver, the angle-of-arrival is β = 0 and there is no delay between the incoming waves arriving at the
two ears. But as the source is repositioned, the angle-of-arrival becomes β ≠ 0, so that a delay occurs
between the instants when the same phase of a wave arrives at the two ears, due to wave travel around the
head.

Assuming a spherical head, a sound speed of c = 343 [m/s], and a typical radial distance r = 90 [mm] from
head center to the approximate opening of the ear (entrance of the auditory canal at the concha), the ITD is
found to be

ITD = r(β + sin β)/c (8.2.1.1.1)

The greatest ITD that can occur between the ears corresponds to an angle-of-arrival β = π/2 or 3π/2 [rad].
Using (8.2.1.1.1), the greatest ITD is approximately 674 [μs].


The relative importance of the ITD for sound source localization in the azimuthal plane depends upon the
frequency of the acoustic wave. Using ITDs, a wave arrives first at one ear at a certain phase, and that
same point of phase of the wave carries to the other ear. This is shown in the right schematic of Figure 75
for the low frequency (long wavelength) case at top. In this way, the ability to localize a "sound from the
right" is enabled.

Yet, at high frequencies there are ambiguities in the time delay between two respective points of common
phase for an acoustic wave arriving at one ear and then the other. Consider the short wavelength case shown
at the bottom right of Figure 75. There are two possible time delays that could reproduce the phase from the
right to left ear in the direction of wave travel. Thus, the maximum phase delay of an acoustic wave,
accrued in the course of wave travel from one ear to the other, that may be effectively resolved for source
localization is Φ_ITD = 180°.

To compute the approximate highest frequency at which our ears can resolve a phase difference between them,
we use a modified version of (8.2.1.1.1) that considers the wave phase that may exist between the ears. We
note that a phase angle Φ has the same units as ωt, so we multiply the (8.2.1.1.1) time delay by the angular
frequency.

Φ_ITD = 2πf r(β + sin β)/c (8.2.1.1.2)

Because the maximum Φ_ITD = π [rad], we can compute the highest frequency f at which the ITD is meaningful
in our determination of source localization. Using the numbers given above, we find the frequency to be
approximately f = 741 [Hz]. In other words, the highest frequency range for which the ITD is used by humans
to localize sound sources in the azimuthal plane is around 750 [Hz].
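
A brief MATLAB sketch reproducing the two numbers above:

% Sketch: maximum ITD and the highest frequency resolvable by ITD.
c = 343; % sound speed [m/s]
r = 0.09; % head-center-to-ear radius [m]
beta = pi/2; % angle-of-arrival for the maximum ITD [rad]
ITD = r*(beta + sin(beta))/c % ~674e-6 [s], per (8.2.1.1.1)
f_max = c/(2*r*(beta + sin(beta))) % ~741 [Hz], setting Phi_ITD = pi in (8.2.1.1.2)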

8.2.1.2 Interaural level difference


Of course, sound sources in our day-to-day world are often at frequencies greater than 750 [Hz]. Yet, we
are still readily able to localize sound sources at higher frequencies. This suggests that humans use other
mechanisms to identify sound source locations in the higher frequency regime.

Recall that sound waves incident upon barriers are stopped from direct passage and diffract around the
edges. The significance of diffraction depends upon the wavelength under consideration. For acoustic
pressure waves of most amplitudes that humans can sustain, the human head is like a near-rigid barrier
respecting the wave. Thus, for the case illustrated in the middle schematic of Figure 75, at high
frequencies the sound received by the right ear will be louder than that received by the left ear due to
the acoustic shadow at the left ear, which arises from (a) the sound level decay involved in the additional
wave travel distance and (b) barrier effects of the head. Of these, the contribution of (a), the pressure
wave amplitude decay due to the additional travel distance, is negligible compared to (b), the blocking and
barrier effects provided by the head. Therefore, the interaural level difference (ILD), also termed the
interaural intensity difference (IID), is mostly associated with the diffraction and blocking behaviors that
the human head provides for incident acoustic waves, which arrive at the two ears via different paths of
travel around the head. The ILD is the SPL difference between the same phase of a wave arriving at one ear
and then the other. The ILD is usually reported in [dB].

The ILD is distinct from one person to another because every person (and head!) is different. Yet, a model
of the human head as a rigid sphere can be used to predict the significance of the level difference between
two opposite sides of the sphere. At 1 [kHz], the ILD for a sound incident at β = π/2 can be approximately
6 [dB], while at 10 [kHz] the ILD can become around 20 [dB] [37].

In Sec. 1.4.5, we discussed diffraction and found that diffraction effects become significant when the ratio
of acoustic wavelength to barrier size is around 3/2 or smaller. Here, the barrier dimension is the head
diameter. Using the earlier parameters, the diameter of the human head from eardrum to eardrum is
approximately 180 [mm]. Thus, the acoustic wavelength below which (i.e., at higher frequencies) the
diffraction effects become prominent, and thus the ILD becomes important, is around λ = 270 [mm]. Converting
this to frequency via f = c/λ gives approximately 1260 [Hz].
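
A two-line MATLAB check of this estimate (with c = 343 [m/s] the result is close to 1.3 [kHz], consistent
with the value quoted above):

% Sketch: frequency above which head diffraction makes the ILD significant.
c = 343; d = 0.18; % sound speed [m/s], eardrum-to-eardrum head diameter [m]
lambda = (3/2)*d; % wavelength at the ~3/2 wavelength-to-barrier ratio, 0.27 [m]
f_ILD = c/lambda % ~1.27 [kHz]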

While the maximum ITD is related to the phase difference that exists between the ears at a given frequency
and thus sets a strict limit, the lower frequency limit for ILD effects is not strict. In practice, there is
a measurable ILD at frequencies as low as about 200 [Hz], but it is not detectable by our hearing sense
until approximately 700 [Hz] and higher. Recall from Table 14 that about a 5 [dB] change in SPL is required
for humans to recognize a difference in loudness. A similar limitation applies to the difference in SPL
between the ears required to recognize that the levels at the two ears differ. As indicated above, the ILD
is therefore a meaningful strategy to localize sounds with frequencies of about 1 [kHz] and greater.

8.2.1.3 Surmounting the cone of confusion using pinnae, torso, head motion
These strategies of sound source localization pose a limitation: ITD and ILD are only unique in the
azimuthal plane. Thus, by ITD and ILD alone it is difficult to localize sounds of different elevation
angles θ. This generates a cone of confusion, Figure 76. In other words, with a given ITD and ILD one would
hypothetically not be able to distinguish the elevation angle θ of a sound, because the same ITD and ILD are
shared for any such θ. The cone of confusion occurs due to the sphere-like, partially-symmetric shape of the
head and how sound diffracts around a sphere. Note that for an azimuthal angle φ = 0, human hearing by ITD
and ILD alone has no means to differentiate locations of sound sources varying according to the elevation
angle θ.


Figure 76. Illustration of cone of confusion.

So how do we localize sounds in elevation and differentiate them within the cone of confusion? In fact,
humans use multiple mechanisms to localize sound sources whose waves arrive from outside the azimuthal
plane.

First, our ears filter sounds using our pinnae. The pinna is the soft flesh of the outer ear that acts like
a collector to direct sound into the auditory canal and towards the eardrum, Figure 77. Indeed, the topology
of our ear provides significant frequency filtering effects. This is a filter in the sense that our ear is
not equally sensitive to all frequencies of acoustic waves. In other words, similar to the microphone
sensitivity in Figure 49, there is a variation of ear sensitivity to sound due to the pinna shape and the
direction of wave incidence. There is no encompassing way to characterize all of the influences of pinnae on
hearing, since pinnae differ from one person to the next. But suffice it to say that pinnae have a
tremendous influence on human hearing and sound localization at mid to high frequencies, which spans the
same frequency band where the ILD is important.

Figure 77. Singular: pinna. Plural: pinnae. Taken from https://en.wikipedia.org/wiki/Auricle_(anatomy) which is from the
public domain work Henry Gray (1918) Anatomy of the Human Body


For example, Figure 78 shows, at left, "right ear" SPL measurements taken from a sphere as a function of
the angle-of-arrival, where the red (blue) colors indicate high (low) SPL. These measurements are for
elevation angle θ = 0. Around 90°, the SPL is at a maximum since the source delivers plane waves directly
upon the right microphone position. Diffraction effects are apparent when the source is at an
angle-of-arrival of 180 to 360°. On the right of Figure 78 are measurements taken from the left ear of a
KEMAR manikin at the elevation angle θ = 0 (Knowles Electronic Manikin for Acoustics Research
https://www.gras.dk/products/head-torso-simulators-kemar.html). KEMARs are manikins with microphones
positioned at approximately the locations where the eardrums pick up acoustic pressure. KEMARs also have
soft pinnae similar to a human's, Figure 79, to better emulate the spectral filtering of human pinnae. Thus,
measurements of sound pressure taken with a KEMAR are representative of the nuances involved in human
hearing. While there are apparent similarities between the spectral filtering provided by a sphere for
sounds impinging on the "right ear" when compared to the sounds impinging on the "left ear" of the manikin,
it is clear that the pinnae of the KEMAR manikin perform considerably more nuanced filtering than that
associated with sound diffraction and attenuation around a nearly-rigid sphere. The results at right in
Figure 78 indicate that the pinnae significantly filter the sounds received by the microphone (ear),
particularly for sounds heard dead-on around the angle-of-arrival of 270° for the left ear. Yet, in the
diffracted region, 0 to 180° for the manikin, which is comparable to the diffracted region of 180 to 360°
for the sphere (since the data are taken for different "ears"), there are clear similarities in the 1 to 4
[kHz] bandwidth.

Thus, pinnae provide many level-based cues for source localization by way of frequency filtering. Although
the results in Figure 78 apply only to the elevation angle θ = 0, shaping of the sound levels occurs at
other elevation angles in similar, but intricately different, ways. Each such spectral influence provides a
means for the human to distinguish sounds arriving from a given pair of elevation θ and azimuth φ angles.

Figure 78. At left, "right ear" microphone measurement from a sphere as a function of azimuth angle, where
the "source azimuth angle" is defined as the angle-of-arrival. At right, "left ear" measurement from a KEMAR
manikin as a function of the same source azimuth angle measure. Similarities are clearly seen for the sound
that is diffracted, which is 180 to 360° at left and 0 to 180° at right. Results are from [40], some of
which were measured by this author. Here blue colors are low SPL and red colors are high SPL.

In addition to the frequency filtering by the pinnae that aids sound source localization out of the
azimuthal plane, the torso provides reflection and diffraction effects that also aid source localization.
For instance, the measurements of the KEMAR shown at right in Figure 79 include the torso wearing a polo
shirt. The influences of the torso are difficult to represent, in part because reflections are induced that
arrive at the ears in addition to the directly radiated sound. Depending on the distance traveled, these
reflections cause shifts in amplitude and phase relative to the direct line-of-sight wave.

The collective relation between a source of sound pressure p(θ,φ,ω) and the pressure received at the ear,
p_ear, is referred to as the head-related transfer function (HRTF). The HRTF is employed according to the
equation

[ p_ear,right ; p_ear,left ] = [ HRTF_right(θ,φ,ω) ; HRTF_left(θ,φ,ω) ] p(θ,φ,ω) (8.2.1.3.1)

Note that this is a transfer function and is valid in the frequency domain. Thus, the inverse Fourier
transform of the HRTF for a given ear must be taken so that the transformation may be utilized in the time
domain.

Why would we want to use the HRTF in the time domain? There are many applications where sound must be
played back in a way that provides virtual or enhanced localization capabilities. Examples include
transmitting messages in loud-noise environments, where speech whose arrival is distinctly and virtually
positioned in space can be more readily recognized by the receiver; the use of "surround sound" audio in
tandem with visual media (often film) to create more immersive environments for the audience; and other
contexts.

Figure 79. KEMAR photographs. A microphone opening is shown in the pinna of the left ear of a KEMAR. At far
right is a KEMAR with a headset on, to determine how the headset occludes sound reaching the ears. Images by
GRAS Sound & Vibration and by the author.


In some of these applications, the target sound is convolved with the time domain version of the HRTF (now
called the head-related impulse response, HRIR) for a given ear, in order for the played-back sound heard by
the ear to be perceived as if it arrives from another location. Thus, to create a virtual stereo sound field
that reproduces the hearing phenomena of sources in space, Figure 80, the HRTF equations

[ p_ear,right ; p_ear,left ] = [ HRTF_RR, HRTF_RL ; HRTF_LR, HRTF_LL ] [ p1(θ1,φ1,ω) ; p2(θ2,φ2,ω) ] (8.2.1.3.2)

are inverted to solve for the source pressures in the frequency domain, pi(θi,φi,ω). Then the inverse
Fourier transform is taken to determine the time domain pressure required from each source in order to
reproduce a desired pressure at each ear, p_ear. For speakers at constant (or anticipated) positions with
respect to an audience, such as in a theater or earphones for a personal music player, this procedure can be
performed in advance. This is how sound is recorded and then strategically mixed, such that when we play
that audio track back on our iPhone or iPod, or as it is played back in a movie theater, the sounds appear
to come from all around us rather than from "inside" our head or from the strict locations of speaker
mounting. Indeed, some recording studios measure the music from a band performance with microphones spaced
about 180 [mm] apart (i.e., the distance between an average human's auditory canal openings), compensate
with amplitude "shading" between microphones to better simulate the ILD, and use this weighted mixture in
the ultimate music recording put into the album.
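
A minimal MATLAB sketch of the HRIR playback idea, under loudly labeled assumptions: the HRTF here is a
placeholder unity (flat) response rather than measured data, and the source signal is synthetic:

% Sketch: time-domain rendering of a source through one ear's HRIR.
% HRTF_left below is a hypothetical flat placeholder, not measured data.
fs = 44100; % sample rate [Hz]
x = randn(fs,1); % placeholder 1-second source signal
HRTF_left = ones(256,1); % placeholder single-ear HRTF (unity filter)
hrir_left = real(ifft(HRTF_left)); % inverse Fourier transform gives the HRIR
y_left = conv(x, hrir_left); % convolve the HRIR with the incoming signal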

But for real-time applications that may require a more personalized reconstruction of sound fields for
subjects wearing ear-occluding headsets, such as to enhance message transmission in noisy environments
including hospitals and military applications, it is a severe computational burden to: (i) invert (8.2.1.3.2)
for the personalized HRTF, which changes as the subject moves head position, (ii) take the inverse Fourier
transform to determine the HRIR, (iii) convolve that HRIR with the incoming pressure signal, and (iv) play
that back through the headset (v) before enough time delay has accrued that the person detects a lag in the
audio that does not agree with visual cues.

For speech, intelligibility in communication can be maintained even if delays from visual cues approach time
differences around 80 [ms] [7]. Greater time delays will cause a conflict with the visual cue such that the
listener will have trouble understanding; consider that what we see at a speaker's mouth/lips will conflict
with what we hear. A conservative upper bound on the delay tolerable to human listeners without being
noticed is often taken as 20 [ms]. In other words, steps (i)-(v) must be computed for real-time audio
signals in no more than 20 [ms]! It is a true computational burden.

For greater understanding of the methods of digital signal processing described above, interested
individuals are encouraged to take a DSP course or consider one of many informative texts on the subject
[32] [38] [39].


Figure 80. Head-related transfer functions, exemplified for stereo sound.

Finally, head movement enhances the ability to localize sounds whose positions are out of the azimuthal
plane. By moving our heads, we enrich the amount of information our brain receives regarding phase and level
differences for sources of sound. Certain animals are more elegant in this procedure and move their ears
while keeping the head pointed in one direction, Figure 81, which is possible because their pinnae can be
more directly actuated and turned by muscles than the pinnae of humans. The motion of the head with respect
to a sound source can dramatically increase the intelligibility of speech even when it is drowned in noise
or other conversation. Thus, in a noisy restaurant or bar, moving one's head around slightly as a friend
speaks will enhance one's ability to intelligibly hear the message.

The reason that music played back with headphones often appears as if it is coming from "inside the head" is
that the sound source at our ear moves with our head motions. Advanced headphones that account for head
motions to keep the effective sound source at one position provide unique aural experiences. Representative
products in this spirit are made by Ossic https://www.ossic.com/ and 3D Sound Labs
https://www.3dsoundlabs.com/, while the processing technology is also being developed by groups such as
AM3D https://www.am3d.com/ and AuSim https://www.ausim3d.com/. These headphones and signal processing
technologies are working examples of the procedures (i)-(v) described above that convolve the HRIR with
incoming sound sources to synthesize a sound field similar to unoccluded hearing. "Occluded" hearing
indicates that one is prevented, blocked, or obstructed from hearing normally; this class of headphones thus
"restores" the natural hearing sense by virtue of digital signal processing.


Figure 81. Certain animals use significant motion of ears to assist in sound source localization. Footage from BBC
https://youtu.be/5ThQLGbt8bM

Finally, the examples given above all pertain to sounds of single frequencies, i.e., tones. But most sources
of sound, including speech, are not tonal, so the limitations of employing ITD and ILD are not as
challenging as the numbers may suggest: sound sources have frequency content spanning a wide band, which
introduces numerous means by ITD and ILD to detect source location and enhance speech intelligibility.

8.2.2 Masking
It can be difficult to differentiate a tonal sound in the presence of another tone that occurs
simultaneously. This phenomenon is called masking, and it is related to the way acoustic waves drive the
basilar membrane in the hearing sense, Figure 83. The basilar membrane is wrapped up within the cochlea: it
has a base, near the stapes, at the front of the cochlea at the oval window, and an apex at the back, at the
helicotrema. The basilar membrane is narrower at the base than at the apex, so higher frequencies more
readily excite traveling waves at the front than at the back of the membrane. But waves have to travel along
the entire membrane to reach the apex, where low frequencies are perceived.

Related to this spatial gradient of activation of the basilar membrane for tone recognition, masking is the
phenomenon whereby portions of the membrane are excited concurrently by different tones, making it more
difficult to distinguish the presence of two sounds. The common masking effects are that low frequency tones
mask high frequency tones, and two tones of similar frequencies will be heard as one tone if one is at a
greater level. Nearness in frequency is generally more influential for masking phenomena. For instance, a
550 [Hz] tone is more likely to be masked by a 500 [Hz] tone than by a 50 [Hz] tone, even though the 50 [Hz]
tone will excite more of the basilar membrane.


Figure 82. Schematic and representative frequency response of basilar membrane [41]. Schematic of the spatial
vibration and transmission of waves along the basilar membrane as dependent on the acoustic pressure frequency
[42].

The left schematic of Figure 83 gives an example of this, where the same parts of the membrane are excited
by multiple tones. The significance of the masking is associated with the overlapping region of the basilar
membrane excited by the multiple tones. The masking phenomena apply likewise to bandpass filtered noise,
which masks tones in accordance with the nearness of the bandpass center frequency to the tone of interest.

8.2.3 The cocktail party effect


It is fitting to conclude this introductory course on a celebratory note, and we will do so by discussing
the cocktail party effect. This is a phenomenon of the binaural human hearing sense demonstrating that we
are able to focus our auditory attention on a single stimulus, even amidst a noisy background "babble" of
conversation, music, and stochastic noise, and even when lacking visual cues for source localization. The
phenomenon is most often of interest for distinguishing speech signals from a background of babble, such as
one may be confronted with at a cocktail party. This was recognized as an interesting feature of human
hearing long before it was measured and characterized [40] [41] [42]. Much is now known about the
consequences of the cocktail party effect, including that it is truly a feature of binaural hearing, because
a message cannot be similarly extracted if it is presented to only one ear (monaural hearing), and that
humans can utilize the effect with up to about 7 pairs of extraneous talkers at a 0 [dB] signal-to-babble
ratio, while the signal-to-babble ratio can be as low as -10 [dB] (more babble!) if there is only one pair
of extraneous talkers [42]. On the other hand, the origin of this phenomenon is unknown. It appears to be
truly psychoacoustic in the sense that it is largely a psychological phenomenon and cannot be characterized
according to physiological features or properties of human hearing. Thus, this is where our interest in
investigating the cocktail party effect must diminish, since it begins to leave the realm of investigative
science and turns to a course in psychology!


Table 24. Airborne sound transmission loss values [dB] for common building structures and materials, with indicated
representative thickness values and area densities. Values assumed to be reported as field incidence. From [13]

Panel construction Thickness [mm] Surface density [kg/m2] TL [dB] at octave band center frequencies 63, 125, 250, 500, 1000, 2000, 4000, 8000 [Hz]

Panels of sheet materials

1.5 mm lead sheet 1.5 17 22 28 32 33 32 32 33 36

3 mm lead sheet 3 34 24 30 31 27 38 44 33 38

20 g aluminum sheet, stiffened 0.9 2.5 8 11 10 10 18 23 25 30

6 mm steel plate 6 50 – 27 35 41 39 39 46 –

22 g galvanized steel sheet 0.55 6 3 8 14 20 23 26 27 35

20 g galvanized steel sheet 0.9 7 3 8 14 20 26 32 38 45

18 g galvanized steel sheet 1.2 10 8 13 20 24 29 33 39 44

16 g galvanized steel sheet 1.6 13 9 14 21 27 32 37 43 42

18 g fluted steel panels stiffened at edges, joints 1.2 39 25 30 20 22 30 28 31 31

Corrugated asbestos sheet, stiffened and sealed 6 10 20 25 30 33 33 38 39 42

Chipboard sheets on wood framework 19 11 14 17 18 25 30 26 32 38

Fiberboard on wood framework 12 4 10 12 16 20 24 30 31 36

Plasterboard sheets on wood framework 9 7 9 15 20 24 29 32 35 38

2 layers 13 mm plaster board 26 22 – 24 29 31 32 30 35 –

Plywood sheets on wood framework 6 3.5 6 9 13 16 21 27 29 33

Plywood sheets on wood framework 12 7 – 10 15 17 19 20 26 –

Hardwood (mahogany) panels 50 25 15 19 23 25 30 37 42 46

Woodwool slabs, unplastered 25 19 0 0 2 6 6 8 8 10

Woodwool slabs, plastered (12 mm on each face) 50 75 18 23 27 30 32 36 39 43

Plywood 6 3.5 – 7 13 19 25 19 22 –

Plywood 9 5 – 17 15 20 24 28 27 –

Plywood 18 10 – 24 22 27 28 25 27 –

Lead vinyl curtains 3 7.3 – 22 23 25 31 35 42 –

Lead vinyl curtains 2 4.9 – 15 19 21 28 33 37 –

Panels of sandwich construction

Machine enclosure panels

16 g steel+damping 100 25 20 21 27 38 48 58 67 66

with 100 mm of glass-fiber, covered by 22 g perforated steel 100 – 25 27 31 41 51 60 65 66

As above, but 16 g steel replaced with 5 mm steel plate 100 50 31 34 35 44 54 63 62 68

1.5 mm lead between two sheets of 5 mm plywood 11.5 25 19 26 30 34 38 42 44 47

9 mm asbestos board between two sheets of 18 g steel 12 37 16 22 27 31 27 37 44 48

Compressed straw between two sheets of 3 mm hardboard 56 25 15 22 23 27 27 35 35 38


Single masonry walls

Single leaf brick, plastered on both sides 125 240 30 36 37 40 46 54 57 59

Single leaf brick, plastered on both sides 255 480 34 41 45 48 56 65 69 72

Single leaf brick, plastered on both sides 360 720 36 44 43 49 57 66 70 72

Solid breeze or clinker, plastered (12 mm both sides) 125 145 20 27 33 40 50 58 56 59

Solid breeze or clinker blocks, unplastered 75 85 12 17 18 20 24 30 38 41

Hollow cinder concrete blocks, painted 100 75 22 30 34 40 50 50 52 53

Hollow cinder concrete blocks, unpainted 100 75 22 27 32 32 40 41 45 48

Thermalite blocks 100 125 20 27 31 39 45 53 38 62

Glass bricks 200 510 25 30 35 40 49 49 43 45

Plain brick 100 200 – 30 36 37 37 37 43 –

Aerated concrete blocks 100 50 – 34 35 30 37 45 50 –

Aerated concrete blocks 150 75 – 31 35 37 44 50 55 –

Stud partitions

50 mm×100 mm studs, 12 mm insulating board both sides 125 19 12 16 22 28 38 50 52 55

50 mm×100 mm studs, 9 mm plasterboard and 12 mm plaster coat both sides 142 60 20 25 28 34 47 39 50 56

Cavity, 45 mm wide, filled with fiberglass 75 30 – 27 39 46 43 47 52 –

Empty cavity, 86 mm wide 117 26 – 19 30 39 44 40 43 –

Cavity, 86 mm wide, filled with fiberglass 117 30 – 28 41 48 49 47 52 –

Gypsum wall with 88 mm sound absorbing material 240 26 – 42 56 68 74 70 73 –

As above but staggered 4-inch studs 240 30 – 35 50 55 62 62 68 –

Single glazed windows

Single glass in heavy frame 6 15 17 11 24 28 32 27 35 39

Single glass in heavy frame 8 20 18 18 25 31 32 28 36 39

Single glass in heavy frame 9 22.5 18 22 26 31 30 32 39 43

Single glass in heavy frame 16 40 20 25 28 33 30 38 45 48

Single glass in heavy frame 25 62.5 25 27 31 30 33 43 48 53

Laminated glass 13 32 – 23 31 38 40 47 52 57

Double glazed windows

2.44 mm panes, 7 mm cavity 12 15 15 22 16 20 29 31 27 30

9 mm panes in separate frames, 50 mm cavity 62 34 18 25 29 34 41 45 53 50

6 mm glass panes in separate frames, 100 mm cavity 112 34 20 28 30 38 45 45 53 50

6 mm glass panes in separate frames, 188 mm cavity 200 34 25 30 35 41 48 50 56 56

3 mm plate glass, 55 mm cavity 63 25 – 13 25 35 44 49 43 –

6 mm plate glass, 55 mm cavity 70 35 – 27 32 36 43 38 51 –

6 mm and 5 mm glass, 100 mm cavity 112 34 – 27 37 45 56 56 60 –


6 mm and 8 mm glass, 100 mm cavity 115 40 – 35 47 53 55 50 55 –

Doors

Flush panel, hollow core, normal cracks as usually hung 43 9 1 12 13 14 16 18 24 26

Solid hardwood, normal cracks as usually hung 43 28 13 17 21 26 29 31 34 32

Plastic laminated flush wood door 44 20 – 14 18 17 23 18 19 –

Veneered surface, flush wood door 44 25 – 22 26 29 26 26 32 –

Hardwood door 54 20 – 20 25 22 27 31 35 –

Hardwood door 66 44 – 24 26 33 38 41 46 –

Floors

T&G boards, joints sealed 21 13 17 21 18 22 24 30 33 63

As above, with boards “floating” on glass-wool mat 240 35 20 25 33 38 45 56 61 64

Concrete, reinforced 100 230 32 37 36 45 52 59 62 63

Concrete, reinforced 200 460 36 42 41 50 57 60 65 70
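As a sketch of how such tabulated values are applied (assumptions: the field-incidence mass law TL ≈ 20 log10(f m) - 47 [dB], a standard approximation found in noise control texts such as [13], and the usual area-weighted combination of transmission coefficients for composite partitions), the code below compares the mass law against the 6 mm steel plate row above and estimates the 500 [Hz] composite TL of a brick wall containing a hardwood door.

```python
import math

def mass_law_tl(f_hz, m_kg_per_m2):
    """Field-incidence mass law, TL ~= 20*log10(f*m) - 47 [dB]; a rough
    estimate valid above the panel's first resonance and below its
    coincidence frequency."""
    return 20.0 * math.log10(f_hz * m_kg_per_m2) - 47.0

def composite_tl(areas_m2, tls_db):
    """Composite TL of a partition of several elements: convert each TL
    to a transmission coefficient tau = 10**(-TL/10), area-average the
    coefficients, and convert back."""
    taus = [10.0 ** (-tl / 10.0) for tl in tls_db]
    tau_bar = sum(s * t for s, t in zip(areas_m2, taus)) / sum(areas_m2)
    return -10.0 * math.log10(tau_bar)

# 6 mm steel plate, 50 [kg/m^2]; Table 24 lists 35, 41, 39 [dB] here.
for f in (250, 500, 1000):
    print(f"{f:5d} Hz: mass law ~ {mass_law_tl(f, 50.0):4.1f} dB")

# 10 m^2 of 125 mm plastered brick (TL = 40 dB at 500 Hz) with a 2 m^2
# solid hardwood door (TL = 26 dB at 500 Hz), per Table 24:
print(f"composite TL ~ {composite_tl([10.0, 2.0], [40.0, 26.0]):.1f} dB")
```

The mass law tracks the tabulated plate through the mid band (about 35 and 41 [dB] at 250 and 500 [Hz]) but overpredicts at 1000 [Hz] and above, where coincidence effects reduce the measured TL; the composite result of about 33 [dB] shows how a modest door undercuts a much heavier wall.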


References

[1] L.E. Kinsler, A.R. Frey, A.B. Coppens, J.V. Sanders, Fundamentals of Acoustics (John Wiley and
Sons, New York, 2000).

[2] N. Rott, Thermoacoustics, Advances in Applied Mechanics 20, 135-175 (1980).

[3] T. Colonius, S.K. Lele, Computational aeroacoustics: progress on nonlinear problems of sound
generation, Progress in Aerospace Sciences 40, 345-416 (2004).

[4] H. Bruus, Acoustofluidics 1: governing equations in microfluidics, Lab on a Chip 11, 3742-3751
(2011).

[5] B.W. Drinkwater, Dynamic-field devices for the ultrasonic manipulation of microparticles, Lab on a
Chip, doi: 10.1039/c6lc00502k (2016).

[6] W.H. Sumby, I. Pollack, Visual contribution to speech intelligibility in noise, The Journal of the
Acoustical Society of America 26, 212-215 (1954).

[7] J. Blauert, Communication Acoustics (Springer, Berlin, 2005).

[8] F. Fahy, Foundations of Engineering Acoustics (Academic Press, San Diego, California, 2001).

[9] D.T. Blackstock, Fundamentals of Physical Acoustics (Wiley, New York, 2000).

[10] J.O. Pickles, An Introduction to the Physiology of Hearing (Emerald Group Publishing Limited,
Bingley, 2012).

[11] Blausen.com staff, Blausen gallery 2014, Wikiversity Journal of Medicine,
doi:10.15347/wjm/2014.010 (2014).

[12] McGraw-Hill Dictionary of Scientific and Technical Terms (New York, New York, 2002).

[13] D.A. Bies, C.H. Hansen, Engineering Noise Control: Theory and Practice (Spon Press, London,
2006).

[14] K.D. Kryter, The Effects of Noise on Man (Academic Press, New York, 1970).

[15] J. Singer, N. Lerner, C. Baldwin, E. Traube, Auditory alerts in vehicles: effects of alert
characteristics and ambient noise conditions on perceived meaning and detectability. In Proceedings
of the 24th International Technical Conference on Enhanced Safety of Vehicles (ESV); Gothenburg,
Sweden; 2015. Paper No. 15-0455.


[16] B.W. Lawton. Damage to human hearing by airborne sound of very high frequency or ultrasonic
frequency. Contract Research Report. Southampton, UK: Institute of Sound and Vibration Research;
2001. Report No.: 343/2001.

[17] L.L. Beranek, Noise and Vibration Control (Institute of Noise Control Engineering, Washington,
1988).

[18] M.P. Norton, Fundamentals of Noise and Vibration Analysis for Engineers (Cambridge University
Press, Cambridge, 1989).

[19] F. Fahy, P. Gardonio, Sound and Structural Vibration: Radiation, Transmission and Response
(Academic Press, Oxford, 1987).

[20] A.D. Pierce, Acoustics: an Introduction to its Physical Principles and Applications (McGraw Hill,
New York, 1981).

[21] M.C. Junger, D. Feit, Sound, Structures, and Their Interaction (MIT Press, Cambridge,
Massachusetts, 1972).

[22] S. Temkin, Elements of Acoustics (Wiley, New York, 1981).

[23] K.U. Ingard, Notes on Acoustics (Infinity Science Press, Hingham, Massachusetts, 2008).

[24] P.M. Morse, K.U. Ingard, Theoretical Acoustics (McGraw-Hill, New York, 1968).

[25] K.F. Graff, Wave Motion in Elastic Solids (Dover Publications, New York, 1991).

[26] J.W. Strutt, The Theory of Sound (Macmillan and Co., London, 1894).

[27] D.A. Bies, Uses of anechoic and reverberant rooms for the investigation of noise sources, Noise
Control Engineering 7, 154-163 (1976).

[28] Z. Maekawa, Noise reduction by screens, Applied Acoustics 1, 157-173 (1968).

[29] Microphone handbook, Volume 1. Technical Documentation. Brüel & Kjær; 1996. Report No.: BE 1447-11.

[30] Measurement microphones. Booklet. Brüel & Kjær; 1994. Report No.: BR 0567-12.

[31] Measuring sound. Booklet. Brüel & Kjær; 1984. Report No.: BR 0047-13.

[32] A.V. Oppenheim, R.W. Schafer, J.R. Buck, Discrete-Time Signal Processing (Prentice-Hall, Upper
Saddle River, New Jersey, 1999).

[33] M.R. Haberman, M.D. Guild, Acoustic metamaterials, Physics Today 69, 42-48 (2016).

[34] M. Ermann, Architectural Acoustics Illustrated (Wiley, Hoboken, New Jersey, 2015).


[35] M. Harrison, Vehicle refinement: controlling noise and vibration in road vehicles (SAE International,
Warrendale, Pennsylvania, 2004).

[36] T.J. Schultz, Synthesis of social surveys on noise annoyance, The Journal of the Acoustical Society
of America 64, 377-405 (1978).

[37] D.M. Howard, J. Angus, Acoustics and Psychoacoustics (Focal Press, Oxford, 2006).

[38] J.H. McClellan, R.W. Schafer, M.A. Yoder, DSP First (Prentice-Hall, Upper Saddle River, NJ, 1998).

[39] F. Rumsey, Spatial Audio (Focal Press, Woburn, MA, 2001).

[40] P.B. Weston, J.D. Miller, I.J. Hirsh, Release from masking for speech, The Journal of the Acoustical
Society of America 42, 1053-1054 (1967).

[41] J.C.R. Licklider, The influence of interaural phase relations upon the masking of speech by white
noise, The Journal of the Acoustical Society of America 20, 150-159 (1948).

[42] I. Pollack, J.M. Pickett, Stereophonic listening and speech intelligibility against voice babble, The
Journal of the Acoustical Society of America 30, 131-133 (1958).

[43] E. Parizet, E. Guyader, V. Nosulenko, Analysis of car door closing sound quality, Applied Acoustics
69, 12-22 (2008).

[44] P. Susini, S. McAdams, S. Winsberg, I. Perry, S. Vieillard, X. Rodet, Characterizing the sound quality
of air-conditioning noise, Applied Acoustics 65, 763-790 (2004).

[45] Q. Plummer. Tech Times. [Online]; accessed May 1, 2016. Available from:
http://www.techtimes.com/articles/16661/20140928/2015-ford-mustang-ecoboost-fakes-engine-noise-via-stereo-hello-active-noise-control.htm.

[46] A. Sitbon. Bimmer File. [Online]; accessed May 1, 2016. Available from:
http://www.bimmerfile.com/2015/02/04/truth-bmws-active-sound/.

[47] K.C. Colwell. Car and Driver. [Online]; accessed May 1, 2016. Available from:
http://www.caranddriver.com/features/faking-it-engine-sound-enhancement-explained-tech-dept.

[48] P.W. Gillett. Head mounted microphone arrays. Ph.D. dissertation. Blacksburg, VA: Virginia Tech;
2009.

[49] H. Lord, W. Gatley, H. Evensen, Noise Control for Engineers (Robert Krieger Publishing Co.,
Malabar, FL, 1987).

[50] W. Yost, Fundamentals of Hearing: an Introduction (Academic Press, Oxford, 1993).
