chapter 12 (exam 4) sensation and perception vocab

auditory space

Perception of where sounds are located in space. Auditory space extends around a listener's head in all directions, existing wherever there is a sound.

auditory localization

the perception of the location of a sound source

the problem the auditory system faces in determining these locations can be appreciated by comparing the information for location that is available to vision and to hearing.

consider the tweeting bird and the meowing cat in Figure 12.1. Visual information for the relative locations of the bird and the cat is contained in the images of the bird and the cat on the surface of the retina. The ear, however, is different. The bird's tweet and the cat's meow both stimulate the cochlea according to their frequencies, not their locations, so the auditory system must calculate a sound's location from cues contained in the sound itself.

location cues

In hearing, characteristics of the sound reaching the listener that provide information regarding the location of a sound source.

two kinds of location cues

binaural cues, which depend on both ears, and monaural cues, which depend on just one ear.

three dimensions in which people can locate the position of a sound

azimuth, elevation, distance

azimuth

In hearing, specifies locations that vary from left to right relative to the listener.

elevation

In hearing, sound locations that are up and down relative to the listener.

distance

How far a stimulus is from the observer. In hearing, the distance coordinate specifies how far the sound source is from the listener.

binaural cues

Sound localization cue that involves both ears. Interaural time difference and interaural level difference are the primary binaural cues.

interaural level difference (ILD)

type of binaural cue. The difference in the sound pressure (or level) between the left and right ears. This difference creates an acoustic shadow for the far ear. The ILD provides a cue for sound localization for high-frequency sounds.

acoustic shadow

The shadow created by the head that decreases the level of high-frequency sounds on the opposite side of the head. The acoustic shadow is the basis of the localization cue of interaural level difference.

Why interaural level difference (ILD) occurs for high frequencies but not for low frequencies.

(a) Person listening to a high-frequency sound; (b) person listening to a low-frequency sound. (c) When the spacing between waves is smaller than the size of the object, illustrated here by water ripples that are smaller than the boat, the waves are stopped by the object; when the spacing between waves is larger than the object, the waves flow past it largely undisturbed.

an object has a large effect on the wave if it is larger than the distance between the waves (as occurs when short high-frequency sound waves hit the head),

but has a small effect if it is smaller than the distance between the waves (as occurs for longer low-frequency sound waves). For this reason, the ILD is an effective cue for location only for high-frequency sounds.
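
A quick calculation makes the size comparison concrete: wavelength is the speed of sound divided by frequency (lambda = c/f). Below is a minimal Python sketch, assuming a speed of sound of about 343 m/s and a head width of roughly 18 cm; both are round-number assumptions, not values from the text, and the real cutoff is gradual rather than sharp.

```python
# Compare sound wavelengths to head size (approximate values, for illustration).
SPEED_OF_SOUND = 343.0   # m/s in air at room temperature (assumed)
HEAD_WIDTH = 0.18        # meters, rough adult head diameter (assumed)

def wavelength(frequency_hz):
    """Wavelength in meters: lambda = c / f."""
    return SPEED_OF_SOUND / frequency_hz

for f in (200, 800, 2000, 6000):
    lam = wavelength(f)
    effect = "casts an acoustic shadow" if lam < HEAD_WIDTH else "bends around the head"
    print(f"{f:>5} Hz -> {lam:.3f} m; {effect}")
```

At 200 Hz the wavelength (about 1.7 m) is far larger than the head, so there is essentially no shadow; at 6000 Hz the wavelength (about 6 cm) is much smaller than the head, so a clear shadow, and thus an ILD, results.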

interaural time difference (ITD)

second binaural cue. When a sound is positioned closer to one ear than to the other, the sound reaches the close ear slightly before reaching the far ear, so there is a difference in the time of arrival at the two ears. The ITD provides a cue for sound localization for low-frequency sounds.

Because the ITD becomes larger as sound sources are located more to the side, the magnitude of the ITD can be used as a cue to determine a sound's location
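
The relation between azimuth and ITD is often approximated with the classic Woodworth formula, ITD = (r/c)(theta + sin theta), where r is the head radius, c the speed of sound, and theta the azimuth in radians. This formula is a standard textbook approximation, not something given in this chapter; a minimal sketch with assumed values:

```python
import math

def itd_seconds(azimuth_deg, head_radius=0.09, speed_of_sound=343.0):
    """Woodworth approximation: ITD = (r/c) * (theta + sin(theta))."""
    theta = math.radians(azimuth_deg)
    return (head_radius / speed_of_sound) * (theta + math.sin(theta))

# ITD grows as the source moves to the side: ~0 at 0 degrees (straight ahead),
# roughly 0.6-0.7 ms at 90 degrees (directly opposite one ear).
for az in (0, 30, 60, 90):
    print(f"{az:>2} deg -> {itd_seconds(az) * 1e6:.0f} microseconds")
```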

Behavioral experiments show that ITD is most effective for determining the locations of low-frequency sounds (Wightman & Kistler, 1992). Thus ITD (which works for low frequencies) and ILD (which works for high frequencies) together cover the frequency range for hearing.

Because the time and level differences can be the same at a number of different elevations

they cannot reliably indicate the elevation of the sound source

cone of confusion

A surface in the shape of a cone that extends out from the ear. Sounds originating from different locations on this surface all have the same interaural level difference and interaural time difference, so location information provided by these cues is ambiguous.

there are many locations in space

where two sounds could result in the same ILD and ITD

monaural cue

sound localization cue that involves one ear

spectral cue

the primary monaural cue for localization. In hearing, the distribution of frequencies reaching the ear that are associated with specific locations of a sound. The differences in frequencies are caused by the interaction of sound with the listener's head and pinnae.

paul hofman and coworkers (1998)

The idea that localization can be affected by using a mold to change the inside contours of the pinnae was demonstrated by Hofman and coworkers. They determined how localization changes when the mold is worn for several weeks, and then what happens when the mold is removed.

after measuring initial performance, hofman

fitted his listeners with molds that altered the shape of the pinnae and therefore changed the spectral cue. Figure 12.7b shows that localization performance is poor for the elevation coordinate immediately after the mold is inserted, although locations can still be judged along the azimuth coordinate; with continued wearing of the molds, elevation performance gradually returns as listeners adapt to the new spectral cues.

What do you think happened when the molds were removed in Hofman's experiment?

localization remained excellent immediately after removal of the ear molds. Apparently, training with the molds created a new set of correlations between spectral cues and location, but the old correlation was still there as well. One way this could occur would be if different sets of neurons were involved in responding to each set of spectral cues.

the importance of the pinnae for determining elevation

has been demonstrated by showing that smoothing out the nooks and crannies of the pinnae with molding compound makes it difficult to locate sounds along the elevation coordinate

ILDs and ITDs work for judging azimuth location,

with ILD best for high frequencies and ITD for low frequencies. Spectral cues work best for judging elevation, especially at higher frequencies. These cues work together to help us locate sounds. In real-world listening, we also move our heads, which provides additional ITD, ILD, and spectral information.

vision also plays a role in sound localization,

as when you hear talking and see a person making gestures and lip movements that match what you are hearing. Thus, the richness of the environment and our ability to actively search for information help us zero in on a sound's location.

jeffress model

The neural mechanism of auditory localization that proposes that neurons are wired to each receive signals from the two ears, so that different neurons fire to different interaural time differences (ITD)

how the Jeffress model works

If the sound source is directly in front of the listener, the sound reaches the left and right ears simultaneously, and signals from the left and right ears start out together, as shown in Figure 12.8a. As each signal travels along its axon, it stimulates each neuron it reaches in turn; because the two signals started together, they reach the middle neuron simultaneously, causing it to fire (Figure 12.8b).

coincidence detectors

Neurons in the Jeffress neural coincidence model, which was proposed to explain how neural firing can provide information regarding the location of a sound source. A neural coincidence detector fires when signals from the left and right ears reach the neuron simultaneously.

If the sound comes from the right, as in Figure 12.8c, the sound reaches the right ear first, so the ITD is not zero.

The signal from the right ear has a head start, as shown in Figure 12.8c, and both signals reach neuron 3 simultaneously (Figure 12.8d), so this neuron fires. This neuron, therefore, detects ITDs that occur when the sound is coming from a specific location.

ITD detectors

Interaural time difference detector. Neurons in the Jeffress neural coincidence model that fire when signals reach them from the left and right ears. Each ITD detector is tuned to respond to a specific time delay between the two signals, and so provides information about the location of the sound source.

the Jeffress model therefore proposes

a circuit that contains a series of ITD detectors, each tuned to respond best to a specific ITD. According to this idea, the ITD will be indicated by which ITD neuron is firing. This has been called a "place code" because ITD is indicated by the place (which neuron) where activity occurs.
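
A minimal toy sketch of the Jeffress idea, with the two delay lines modeled as arrival times at an array of detectors; the array size and delay values are illustrative assumptions, not parameters from the text.

```python
# Toy Jeffress circuit: two opposed delay lines feeding coincidence detectors.
N = 9        # number of coincidence detectors (illustrative)
STEP = 0.1   # ms of axonal travel time between neighboring detectors (assumed)

def coincidence_detector(itd_ms):
    """Return the index of the detector where the two signals meet.

    itd_ms > 0 means the sound reached the right ear first (source on the right).
    The left-ear signal enters at detector 0 and travels rightward; the
    right-ear signal enters at detector N-1 and travels leftward. The detector
    whose two arrival times match most closely is the one that fires.
    """
    best_i, best_gap = 0, float("inf")
    for i in range(N):
        left_arrival = itd_ms + i * STEP        # left ear hears the sound late
        right_arrival = (N - 1 - i) * STEP      # right ear hears it at time 0
        gap = abs(left_arrival - right_arrival)
        if gap < best_gap:
            best_i, best_gap = i, gap
    return best_i

print(coincidence_detector(0.0))  # 4: middle detector fires, sound straight ahead
print(coincidence_detector(0.4))  # 2: a different detector fires, signaling a
                                  #    sound located off to the right
```

Which detector fires is the "place" in the place code: every possible ITD maps onto a different position in the array.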

ITD tuning curves

a way to describe the properties of ITD neurons is to measure ITD tuning curves: plots of a neuron's firing rate against the ITD (interaural time difference).

ITD tuning curves for six neurons that each respond to a narrow range of ITDs

The neurons on the left respond when sound reaches the left ear first. The ones on the right respond when sound reaches the right ear first. Neurons such as these have been recorded from the barn owl and other animals.

The results of research in which ITD tuning curves are recorded from mammals may appear, at first glance, to support the Jeffress model

However, when we plot the owl curve on the same graph (dashed line), we can see that the gerbil curve is much broader than the owl curve. In fact, the gerbil curve is so broad that it extends far outside the range of ITDs that are actually involved in sound localization.

Because of the broadness of the ITD curves in mammals, it has been proposed that coding for localization is based on broadly tuned neurons

According to this idea, there are broadly tuned neurons in the right hemisphere that respond when sound is coming from the left and broadly tuned neurons in the left hemisphere that respond when sound is coming from the right. The location of a sound is indicated by the relative responses of these two types of broadly tuned neurons.

we can conclude that ITD coding is based on sharply tuned neurons in birds and broadly tuned neurons in mammals.

The code for birds is a place code because the ITD is indicated by firing of neurons at a specific place in the nervous system. The code for mammals is a population code because the ITD is determined by the firing of many broadly tuned neurons working together.
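
A minimal sketch of the population-code idea, under assumed tuning shapes (Gaussian curves with invented widths and preferred ITDs): instead of asking which narrowly tuned detector fires, the readout compares the activity of two broad hemispheric channels.

```python
import math

def channel(itd_us, preferred, sigma=600.0):
    """One broadly tuned channel; Gaussian shape and width are assumptions."""
    return math.exp(-((itd_us - preferred) ** 2) / (2 * sigma ** 2))

def population_readout(itd_us):
    """Location read from the relative activity of two broad hemispheric channels:
    the left hemisphere responds best to sound from the right (positive ITD),
    the right hemisphere to sound from the left (negative ITD)."""
    left_hemi = channel(itd_us, preferred=+600)
    right_hemi = channel(itd_us, preferred=-600)
    return left_hemi - right_hemi  # sign gives the side, magnitude the eccentricity

for itd in (-600, -300, 0, 300, 600):  # microseconds
    print(f"ITD {itd:>5} us -> readout {population_readout(itd):+.2f}")
```

No single neuron here signals a specific ITD; the location is carried only by the pattern of activity across the two channels, which is the defining feature of a population code.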

the neural basis of binaural localization begins

along the pathway from the cochlea to the brain, in the superior olivary nucleus (remember the acronym SONIC MG, which stands for superior olivary nucleus, inferior colliculus, and medial geniculate; see Figure 11.30), which is the first place that receives signals from both the left and right ears.

dewey neff and coworkers (1956) pioneering study

placed cats about 8 feet away from two food boxes, one about 3 feet to the left and one about 3 feet to the right. The cats were rewarded with food if they approached the sound of a buzzer located behind one of the boxes. Once the cats learned this localization task, their auditory cortex was removed; after the lesion, the cats could no longer localize the buzzer.

fernando nodal and coworkers (2010), studying localization more than 50 years after Neff's research,

showed that lesioning A1 in ferrets decreased, but did not totally eliminate, the ferrets' ability to localize sounds. Another demonstration that A1 is involved in localization was provided by Shveta Malhotra and Stephen Lomber (2007), who showed that deactivating A1 in cats also decreased localization ability.

evidence that the auditory cortex is involved in localization

1. reference: Neff et al. (1956)
what done: cat auditory areas destroyed
result: localization ability lost
2. reference: Nodal et al. (2010)
what done: ferret A1 destroyed
result: localization ability decreased
3. reference: Malhotra and Lomber (2007)
what done: cat A1 deactivated by cooling
result: localization ability decreased

posterior belt area

Posterior (toward the back of the brain) area of the belt, which is an area in the temporal lobe involved in auditory processing. Indicated by P.

anterior belt area

Anterior (toward the front of the brain) area of the belt in the temporal lobe, which is involved in perceiving sound. Indicated by A.

gregg recanzone (2000)

compared the spatial tuning of neurons in A1 and neurons in the posterior belt. He did this by recording from neurons in the monkey and determining how a neuron responded when a sound source was moved to different locations. He found that neurons in the posterior belt had narrower spatial tuning than neurons in A1, responding to sounds from a smaller range of locations.

stephen lomber and shveta malhotra (2008): additional evidence linking the posterior belt with sound localization

who showed that temporarily deactivating a cat's posterior auditory areas by cooling the cortex disrupts the cat's ability to localize sounds (Figure 12.13a). Cooling the posterior areas does not, however, affect the cat's ability to tell the difference between two patterns of sound.

evidence that the posterior belt is involved in localization

1. reference: Recanzone (2000)
what done: tuning curves of A1 and posterior belt neurons measured
result: posterior belt tuning curves are narrower than A1 tuning curves
2. reference: Lomber and Malhotra (2008)
what done: cat posterior belt cooled
result: localization ability disrupted; sound-pattern discrimination unaffected

josef rauschecker and biao tian (2000)

found that while monkey A1 neurons are activated by simple sounds such as pure tones, neurons in the anterior area of the belt respond to more complex sounds, such as monkey calls (vocalizations recorded from monkeys in the jungle). Thus anterior belt neurons appear to be specialized for perceiving complex sounds.

lomber and malhotra (2008)

used the cooling technique to show that deactivating the anterior belt disrupts the cat's ability to tell the difference between two timing patterns of sound (Figure 12.13b) but does not affect the cat's ability to localize sounds.

evidence that the anterior belt is involved in perceiving sound

1. reference: Rauschecker and Tian (2000)
what done: best stimuli for A1 and anterior belt neurons determined
result: anterior belt neurons respond best to more complex sounds
2. reference: Lomber and Malhotra (2008)
what done: cat anterior belt cooled
result: ability to discriminate sound patterns disrupted; localization unaffected

what auditory pathway

Pathway that extends from the anterior belt to the front of the temporal lobe and then to the frontal cortex. This pathway is responsible for perceiving complex sounds and patterns of sounds.

where auditory pathway

Pathway that extends from the posterior belt to the parietal lobe and then to the frontal cortex. This pathway is responsible for localizing sounds.

two pathways

The what pathway is associated with perceiving sounds and the where pathway with locating sounds.

direct sound

sound that is transmitted directly from a sound source to the ears

indirect sound

sound that reaches a listener's ears after being reflected from a surface such as a room's walls

The fact that sound can reach our ears directly from where the sound is originating and indirectly from other locations creates a potential problem

because even though the sound originates in one place, the sound reaches the listener from many directions and at slightly different times.

Research on sound reflections and the perception of location has usually simplified the problem by having people listen to sounds coming from two loudspeakers separated in space,

The speaker on the left is the lead speaker, and the one on the right is the lag speaker. If a sound is presented in the lead speaker followed by a long delay (tenths of a second), and then a sound is presented in the lag speaker, listeners typically hear two separate sounds, one from each speaker. But when the delay is reduced to a few milliseconds, listeners hear a single sound coming from the lead speaker.

precedence effect

When two identical or very similar sounds reach a listener's ears separated by a time interval of less than about 50 to 100 ms, the listener hears the first sound that reaches his or her ears.

the precedence effect

governs most of our indoor listening experience

architectural acoustics

The study of how sounds are reflected in rooms. An important concern of architectural acoustics is how these reflected sounds change the quality of the sounds we hear.

The major factors affecting indirect sound are

the size of the room and the amount of sound absorbed by the walls, ceiling, and floor

another factor affecting indirect sound is the shape of the room

this determines how sound hits surfaces and the direction in which it is reflected

reverberation time

The time it takes for a sound produced in an enclosed space to decrease to 1/1,000th of its original pressure.
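
The 1/1,000th pressure criterion is the familiar 60 dB decay, since 20 log10(1000) = 60, which is why this quantity is usually called RT60. Room size and absorption (the factors named above) enter through Sabine's classic formula, RT60 = 0.161 V / A, with V the room volume in cubic meters and A the total absorption in square-meter sabins. A minimal sketch with made-up room numbers (the hall dimensions below are hypothetical, not from the text):

```python
def rt60_sabine(volume_m3, absorption_sabins):
    """Sabine's formula: RT60 = 0.161 * V / A  (V in m^3, A in m^2 sabins)."""
    return 0.161 * volume_m3 / absorption_sabins

# Hypothetical 20,000 m^3 hall with 1,600 sabins of absorption: ~2.0 s,
# close to the "ideal" concert-hall value discussed below.
print(f"{rt60_sabine(20000, 1600):.2f} s")

# Adding absorption (e.g., a full audience in soft seats) shortens RT60.
print(f"{rt60_sabine(20000, 2150):.2f} s")   # ~1.5 s
```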

Because of the relationship between reverberation time and perception,

acoustical engineers have tried to design concert halls in which the reverberation time matches the reverberation time of halls that are renowned for their good acoustics, such as Symphony Hall in Boston and the Concertgebouw in Amsterdam, which have reverberation times of about 2 seconds.

however, an ideal reverberation time does not always predict good acoustics

This is illustrated by the problems associated with the design of New York's Philharmonic Hall. When it opened in 1962, Philharmonic Hall had a reverberation time close to the ideal of 2.0 seconds. Even so, the hall was criticized for sounding as though it had a much shorter reverberation time, and its acoustics were judged poor enough that the hall eventually had to be extensively renovated.

leo beranek (1996)

showed that the following physical measures are associated with how music is perceived in concert halls:
Intimacy time: The time between when sound arrives directly from the stage and when the first reflection arrives. This is related to reverberation time but focuses only on the first reflection.
Bass ratio: The ratio of low to middle frequencies reflected from surfaces.
Spaciousness factor: The fraction of all the sound received by a listener that is indirect sound.

To determine the optimal values for these physical measures, acoustical engineers measured them in 20 opera houses and 25 symphony halls in 14 countries.

By comparing their measurements with ratings of the halls by conductors and music critics, they confirmed that the best concert halls had reverberation times of about 2 seconds, but they found that 1.5 seconds was better for opera houses, with the shorter reverberation time making it easier for listeners to hear the singers' voices clearly.

Thus, a hall with good acoustics when full could echo when there are too many empty seats.

To deal with this problem, the seat cushions were designed to have the same absorption properties as an "average" person. This means that the hall has the same acoustics when empty or full. This is a great advantage to musicians, who usually rehearse in an empty hall.

auditory scene

The sound environment, which includes the locations and qualities of individual sound sources.

auditory scene analysis

The process by which the sound stimuli produced by different sources in an auditory scene become perceptually organized into sounds at different locations and into separated streams of sound.

auditory scene analysis poses a difficult problem because

the sounds from different sources are combined into a single acoustic signal, so it is difficult to tell which part of the signal is created by which source just by looking at the waveform of the sound stimulus.

how does the auditory system separate each of the frequencies in the 'combined' sound signal into information that enables us to hear the guitar, the vocalist, and the keyboard as separate sound sources?

A number of principles help us perceptually organize elements of an auditory scene, and these principles are based on how sounds are usually organized in the environment. For example, if two sounds start at different times, it is likely that they come from different sources.

one way to analyze an auditory scene into its separate components

would be to use information about where each source is located.
According to this idea, you can separate the sound of the vocalist from the sound of the guitar based on localization cues such as the ILD and ITD. Thus, when two sounds are separated in space, location cues help assign them to separate sources.

if two sounds start at slightly different times, it is likely that they came from different sources. (onset time)

This occurs often in the environment, because sounds from different sources rarely start at exactly the same time. When sound components do start together, it is likely that they are being created by the same source

This type of coding resembles the population coding we described in Chapter 3, in which information in the nervous system is based on the pattern of neural responding.

This is, in fact, how the visual system signals different wavelengths of light, as we saw when we discussed color vision in Chapter 9, in which wavelengths are signaled by the pattern of response of three different cone pigments (Figure 9.12).

If what and where pathways sound familiar, it is because we described what and where pathways for vision

Thus, the idea of pathways serving what and where functions is a general principle that occurs for both hearing and vision. It is also important to note that although the research we have described is on ferrets, cats, and monkeys, evidence for auditory what and where pathways has also been found in humans.

sounds that have the same timbre or pitch range

are often produced by the same source. ex: a flute doesn't suddenly start sounding like the timbre of a trombone

composers of the baroque period (1600-1750)

knew that when a single instrument plays notes that alternate rapidly between high and low tones, the listener perceives two separate melodies, with the high notes perceived as a single melodic line, and the low notes as another. An excerpt from a composition by Bach illustrates this effect.

auditory stream segregation

The effect that occurs when a series of sounds that differ in pitch or timbre are played so that the tones become perceptually separated into simultaneously occurring independent streams of sound.

albert bregman and jeffrey campbell (1971)

demonstrated auditory stream segregation based on pitch by alternating high and low tones, as shown in the sequence in Figure 12.19. When the high-pitched tones were slowly alternated with the low-pitched tones, as in Figure 12.19a, the tones were heard as a single stream of alternating high and low notes. But when the alternation was speeded up, the high and low tones became perceptually separated into two streams, one high-pitched and one low-pitched.

bregman and campbell also showed

that stream segregation depends not only on pitch but also on the rate at which tones are presented. Thus, returning to the Bach composition, the high and low streams are perceived to be separate if they are played rapidly, but not if they are played slowly.
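
A minimal sketch of a Bregman-and-Campbell-style stimulus, assuming numpy is available; the frequencies and rates below are illustrative choices, not values from the study. At the slow rate the alternation tends to be heard as one stream; at the fast rate the high and low tones split into two streams.

```python
import numpy as np

SR = 44100  # sample rate in Hz

def tone(freq_hz, dur_s):
    """Pure sine tone at freq_hz lasting dur_s seconds."""
    t = np.arange(int(SR * dur_s)) / SR
    return np.sin(2 * np.pi * freq_hz * t)

def alternating_sequence(high=2000.0, low=400.0, tone_dur=0.4, repeats=10):
    """Alternating high/low tones; shrinking tone_dur speeds the rate."""
    return np.concatenate(
        [tone(f, tone_dur) for _ in range(repeats) for f in (high, low)]
    )

slow = alternating_sequence(tone_dur=0.4)    # tends to be heard as one stream
fast = alternating_sequence(tone_dur=0.08)   # tends to split into two streams

# To listen: scipy.io.wavfile.write("fast.wav", SR, (fast * 32767).astype("int16"))
```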

However, when the frequencies of the two stimuli become similar, something interesting happens

Grouping by similarity of pitch occurs, and perception changes to a back-and-forth "galloping" between the tones of the two streams. Then, as the scale continues upward so the frequencies become more separated, the two sequences are again perceived as separate streams.

scale illusion/melodic channeling

An illusion that occurs when successive notes of a scale are presented alternately to the left and right ears. Even though each ear receives notes that jump up and down in frequency, smoothly ascending or descending scales are heard in each ear. Also called melodic channeling.

diana deutsch (1975,1996)

demonstrated this effect by presenting two sequences of notes simultaneously through earphones, one to the right ear and one to the left (Figure 12.21a). Notice that the notes presented to each ear jump up and down and do not create a scale. However, Deutsch's listeners heard smooth ascending and descending scales, one in each ear (Figure 12.21b); the notes were perceptually grouped by similarity of pitch rather than by the ear in which they were presented.

the scale illusion highlights an important property of perceptual grouping

Most of the time, the principles of auditory grouping help us accurately interpret what is happening in the environment. It is most effective to perceive similar sounds as coming from the same source because this is what usually happens in the environment

sounds that stay constant or that change smoothly

are often produced by the same source
sound stimuli with the same frequency or smoothly changing frequencies are perceived as continuous even when they are interrupted by another stimulus

richard warren and coworkers (1972) demonstrated auditory continuity

by presenting bursts of tone interrupted by gaps of silence (Figure 12.22a). Listeners perceived these tones as stopping during the silence. But when Warren filled in the gaps with noise (Figure 12.22b), listeners perceived the tone as continuing behind the noise.

the effect of past experience on the perceptual grouping of auditory stimuli can be demonstrated by

presenting the melody of a familiar song, as in Figure 12.23a. These are the notes for the song "Three Blind Mice," but with the notes jumping from one octave to another. When people first hear these notes, they find it difficult to identify the song. But after they hear the normal melody, they can use their knowledge of how the song goes to group the jumping notes and recognize the tune.

melody schema

example: Three Blind Mice. A representation of a familiar melody that is stored in a person's memory. Existence of a melody schema makes it more likely that the tones associated with a melody will be perceptually grouped.

each of the principles of auditory grouping that we have described

provides information about the number and identity of sources in the auditory environment. But each principle alone is not foolproof, and basing our perceptions on just one principle can lead to error, as in the case of the scale illusion, which is purposely constructed to create errors. In everyday listening, however, multiple principles work together, so grouping is usually accurate.

music

sound organized in a way that, in traditional Western music, creates a melody

what does it mean to say that music is organized?

Music has been described as "organized sound" (Goldman, 1961), which may be true but is perhaps too general for a definition; for example, we might consider the sound of a lawn mower to be "organized," but most people would not consider it to be music. But thinking about how sound is organized is a useful way to begin describing music perception.

one way to appreciate musical organization

is by looking at music notation on a page (Figures 12.21, 12.23). Notes are organized vertically so they are high or low on the musical staff. They are organized horizontally as the notes follow one another in time.

Describing organization in terms of how notes are represented by musical notation is a description of physical organization

But we are interested in perceptual organization: how music is perceived. One approach, which focuses on sequences of notes, considers how we perceive melody and harmony and poses questions about organization such as: How are notes perceived as belonging together in a melody?

melody

The experience of a sequence of pitches as belonging together. Usually refers to the way notes follow one another in a song or musical composition.

When you think of the way notes follow one after another in a song or musical composition, you are thinking about its melody.

So melodies are more than just sequences of notes: they are sequences of notes that belong together and sound musical.

phrases

In music, short segments of melodies, similar to phrases in language. ex: "Twinkle, twinkle, little star" / "How I wonder what you are"

When people are asked to listen to melodies and indicate the end of one unit and the beginning of the next, they are able to segment the melodies into phrases

The most powerful cue for the perception of phrase boundaries is pauses, with longer time intervals separating one phrase from another

semitones

The smallest interval in Western music, roughly the difference between two notes in a musical scale, such as between C and C#. There are 12 semitones in an octave.
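
In equal temperament, each semitone multiplies frequency by 2^(1/12), so 12 semitones double the frequency, giving the octave. A quick check, assuming standard A4 = 440 Hz tuning:

```python
SEMITONE = 2 ** (1 / 12)   # equal-temperament semitone ratio, ~1.0595

A4 = 440.0                 # Hz, standard concert tuning (assumed)
print(A4 * SEMITONE)       # ~466.16 Hz: A#4, one semitone above A4
print(A4 * SEMITONE ** 12) # ~880.0 Hz: A5, 12 semitones = one octave
```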

david huron (2006)

measured intervals between notes in 4,600 folk songs and found that the average interval within phrases was 2.0 semitones, whereas the average interval between the end of one phrase and the beginning of the next was 2.9 semitones. There is also evidence that pitch changes between phrases tend to be larger than pitch changes within phrases.

Returning to your imagined song, focus on the progression of notes. Do they jump wildly from high to low pitches and back again, with large and small gaps between them, or are they closely spaced and seem to follow one from the other?

The grouping of notes, first into phrases and then into longer sequences, creates melodies (Deutsch, 2013a). Creating a melody, whether it is "Twinkle, Twinkle, Little Star" or the opening of Beethoven's Fifth Symphony ("Da Da Da Dah..."), involves arranging notes into sequences that are perceived as belonging together.

auditory stream integration

a term used to indicate that the emphasis is on integrating notes within a single stream

one characteristic that favors grouping notes in western music is the interval between notes

Small intervals are common in musical sequences, in accordance with the Gestalt principle of proximity we described in Chapter 5, which states that elements near each other tend to be perceived as grouped together (Bharucha & Krumhansl, 1983).

gap fill

In music, when after a large jump from one note to another, the next notes of the melody turn around, progressing in the opposite direction, to fill the gap.

Finally, certain trajectories of notes are commonly found in music.

Although there are fewer large changes in pitch than small changes, when large changes do occur they tend to go up (as in the first two notes of "Over the Rainbow"); small changes are likely to descend, which is how melodies fill the gap after a large upward jump.

tonality

Organizing pitches around the note associated with a composition's key.

tonic

the pitch associated with a composition's key

return to the tonic

Occurs when a song begins with the tonic and ends with the tonic, where the tonic is the pitch associated with a composition's key.

carol krumhansl and edward kessler (1982)

who measured perceptions of tonality by presenting a scale that established a major or minor key and then followed the scale with a probe tone. Listeners assigned a rating of 1-7 to the probe to indicate how well it fit with the scale presented previously. The tonic received the highest ratings, followed by the other notes of the scale, revealing a tonal hierarchy.

krumhansl (1985) considered

the possibility that there is a relationship between the tonal hierarchy and the way notes are used in a melody by referring to statistical analyses of the frequency or duration of notes in compositions by composers such as Mozart, Schubert, and Mendelssohn. These analyses showed that notes that stand higher in the tonal hierarchy also occur more often in actual compositions.

musical syntax

rules that specify how notes and chords are combined in music

syntax

in language, grammatical rules that specify correct sentence construction

event related potential (ERP)

The brain's response to a specific event, such as flashing an image or presenting a tone, as measured with small disc electrodes placed on a person's scalp.

ERP in language

Each electrode picks up signals from groups of neurons that fire together. A characteristic of the ERP that makes it useful for studying language (or music) is that it is a rapid response, occurring on a time scale of fractions of a second. One ERP component, the P600, occurs about 600 ms after a word is presented and becomes larger when the word violates the syntax of a sentence.

The reason for introducing the idea that the P600 response indicates violations of syntax in language is that it has been proposed that

the ERP can be used in a similar way to determine how the brain responds to violations of musical syntax. One possible violation of musical syntax would be not returning to the tonic (see Tonality), because compositions usually return to the tonic and listeners expect this to happen.

another violation of music syntax occurs when an unlikely note or chord is

inserted that doesn't seem to 'fit' the tonality of the melody

aniruddh patel and coworkers (1998) used this type of fit violation of musical syntax to see if the P600 response occurred in music

Their listeners heard a musical phrase like the one in Figure 12.28a, which contained a target chord, indicated by the arrow above the music. There were three different targets:
(1) an "in key" chord that fit the piece, shown on the musical staff;
(2) a "nearby key" chord from a musically related key;
(3) a "distant key" chord from a musically unrelated key.

three part experiment with aniruddh patel and coworkers

In the first part of the experiment, listeners judged the phrase as acceptable 80 percent of the time when it contained the in-key chord; 49 percent when it contained the nearby-key chord; and 28 percent when it contained the distant-key chord. One way of interpreting this result is that the out-of-key chords violated listeners' expectations, with the distant-key chord producing the larger violation. Consistent with this, the out-of-key chords also elicited a P600 response, and the response was larger for the distant-key chord.

These properties are not absolute�that is, they don't always occur in every phrase or melody

However, taken together they describe things that occur often in music, and thus are similar to the idea of regularities in the environment, which we introduced in our discussion of perceiving visual scenes in Chapter 5. Just as observers use their knowledge of regularities to perceive scenes, listeners use their knowledge of musical regularities to form expectations about what is coming next.

commonly occurring properties of phrases and melodies

1. grouping: phrases
common characteristics: Large time intervals between the end of one phrase and the beginning of the next; large pitch intervals between phrases, compared to within phrases
2. grouping: melodies
common characteristics: Melodies contain mostly small intervals between notes; large jumps tend to be followed by notes that change direction to fill the gap

listeners using their knowledge of regularities

this does not mean that what we listen to contains no surprises.
Just as our perception of visual scenes we are seeing for the first time is influenced by our past experiences in perceiving the environment, our perception of music we are hearing for the first time can be influenced by our past experiences in listening to music.

carlos silva pereira and coworkers (2011)

listeners rated specific pop/rock songs for familiarity and likeability. When they listened to the songs while in an fMRI scanner, familiar songs caused activation in areas of the brain associated with emotion, both for songs they liked and songs they didn't like, suggesting that familiarity is especially important for emotional responses to music.

The idea of a link between expectation and the response to music has been the basis of proposals that composers can purposely violate a listener's expectations to create emotion, tension, or a dramatic effect.

Leonard Meyer suggested this in his book Emotion and Meaning in Music (1956), in which he argued that the principal emotional component of music is created by the composer's choreographing of expectation. Later researchers have expanded on this idea

rhythm

In music, the series of changes across time (a mixture of shorter and longer notes) in a temporal pattern.

important in rhythm

inter-onset interval: the time between the onset of each note. In the figure showing the first measures of "The Star Spangled Banner," note onsets are indicated by the blue dots above the music, and the spaces between these dots define the song's rhythm. Because notes have different durations, the intervals between onsets vary, and this mixture of longer and shorter intervals is what creates rhythm.
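
Rhythm as a list of inter-onset intervals is easy to compute: given note onset times, the IOIs are just successive differences. A sketch with made-up onset times (hypothetical values, not the actual song's timings):

```python
import numpy as np

# Hypothetical note-onset times in seconds; the unequal spacing is the rhythm.
onsets = np.array([0.00, 0.30, 0.45, 0.90, 1.20, 1.35, 1.80])
iois = np.diff(onsets)   # inter-onset intervals between successive notes
print(iois)              # [0.3  0.15 0.45 0.3  0.15 0.45]
```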

beat

In music, equally spaced intervals of time, which occur even if there are no notes. When you tap your feet to music, you are tapping on the beat.

every culture has some form of music with a beat

The beat can be up front and obvious, as in rock music, or more subtle, as in a quiet lullaby, but it is always there, creating a framework over which the melody creates its rhythmic pattern.

the beat can be likened to the pulse of music

both because of its regularity and because it can result in movement. The link between the beat and movement is expressed not only by behaviors such as tapping or swinging in time to the beat, but also by responses of motor areas in the brain

jessica grahn and james rowe (2009)

demonstrated a connection between the beat and a group of subcortical structures at the base of the brain called the basal ganglia, which had been associated with movement in previous research. Their subjects listened to "beat" and "non-beat" rhythmic patterns while brain activity was measured in an fMRI scanner.

grahn and rowe found that

the basal ganglia response was greater to the beat stimuli than to the non-beat stimuli. In addition, they calculated neural connectivity between subcortical structures, indicated by red in Figure 12.30, and cortical motor areas, indicated by blue, by determining how well activity in the two sets of areas rose and fell together; this connectivity was greater for the beat stimuli.

in joyce chen and coworkers' (2008) experiment, they measured activity in the premotor cortex in three conditions:

(1) Tapping: Subjects tapped along with the sequence.
(2) Listening with anticipation: Subjects listened to the sequence, but they knew they would be asked to tap to it later.
(3) Passive listening: Subjects listened passively to a rhythmic sequence.

It isn't surprising that tapping caused the greatest response, because the premotor cortex is involved in creating movements.

But a response also occurred in the listening with anticipation condition (70 percent of the response to tapping) and in the passive listening condition (55 percent of the response to tapping), even though subjects were just listening, without moving. Thus, simply listening to a rhythm activates motor areas of the brain.

meter

In music, organization of beats into bars or measures, with the first beat in each bar often being accented. There are two basic kinds of meter in Western music: duple meter, in which accents are in multiples of two, such as 12 12 12 or 1234 1234 1234, like a march; and triple meter, in which accents are in multiples of three, such as 123 123 123, like a waltz.

Metrical structure is typically achieved when musicians accentuate some notes by using a stronger attack or by playing them louder or longer.

In doing this, musicians bring an expressiveness to music beyond what is heard by simply playing a string of notes. Thus, although the musical score may be the starting point for a performance, the musicians' interpretation of the score is what listeners hear.

how is metrical structure created by the mind?

Even though the ticking of a metronome creates a series of identical beats with regular spacing, it is possible to transform this series of beats into perceptual groups. We can, for example, imagine the beats of a metronome in duple meter (TICK-tock) or, with a little effort, in triple meter (TICK-tock-tock).

john iversen and coworkers (2009)

studied the mental creation of meter using magnetoencephalography (MEG) to measure subjects' brain responses as they listened to rhythmic sequences. MEG measures brain responses by recording magnetic fields caused by brain activity. A feature of MEG is that, like the ERP, it can track rapid changes in the brain's response.

results of iversen and coworkers' experiment

the MEG response depended on which beat was accented in the subject's mind. The blue curve indicates a large peak when the imagined beat was on the first note; the red curve indicates a later peak when the imagined beat was on the second note. Thus, the metrical structure we impose in our minds is reflected in the brain's response, even when the physical stimulus stays the same.

The idea that how we move may influence our perception of meter was first demonstrated by Jessica Phillips-Silver and Laurel Trainor (2005) in 7-month-old infants.

While these infants listened to a regular repeating ambiguous rhythm that had no accents, they were bounced up and down in the arms of the experimenter. These bounces occurred either in a duple pattern (a bounce on every second beat) or in a triple pattern (a bounce on every third beat).

phillips silver and trainor

found that infants listened to the pattern they had been bounced to for an average of 8 seconds but only listened to the other pattern for an average of 6 seconds. The infants therefore preferred the pattern they had been bounced to. To determine whether movement has the same effect in adults, Phillips-Silver and Trainor carried out a second experiment.

second phillips-silver and trainor (2007) experiment, testing adults

In this case, the experimenter didn't hold the subject, but the experimenter and subject held hands and bounced together. After bouncing with the experimenter, the adults were tested by listening to duple and triple patterns and indicating which pattern matched the rhythm they had bounced to; like the infants, they chose the pattern that matched their bouncing.

phillips silver and trainor adult experiment results

concluded that the crucial factor that causes movement to influence the perception of metrical structure is stimulation of the vestibular system, the system that is responsible for balance and sensing the position of the body. To check this idea, Trainor and coworkers (2009) stimulated the vestibular system directly while subjects listened to an ambiguous rhythm and found that the stimulation influenced whether the rhythm was perceived as duple or triple.

vestibular system

The mechanism in the inner ear that is responsible for balance and sensing the position of the body.

perception of meter is influenced not only by moving

but also by longer-term experience: the stress patterns of a person's language

john iversen and aniruddh patel (2008)

had subjects listen to a sequence of alternating long and short tones (Figure 12.32a) and then indicate whether they perceived the tones' grouping as long-short or short-long. The results indicated that English speakers were more likely to perceive the grouping as short-long, whereas native Japanese speakers were more likely to perceive it as long-short, a difference that mirrors the typical word patterns of the two languages.

multisensory interactions

Use of a combination of senses. An example for vision and hearing is seeing a person's lips move while listening to the person speak.

ventriloquism effect/visual capture

When sound is heard coming from a seen location, even though it is actually originating somewhere else. ex: in movie theaters before the introduction of digital surround sound, the sound came from speakers next to the screen but seemed to come from the actors' mouths.

two flash illusion

An illusion that occurs when one flash of light is presented, accompanied by two rapidly presented tones. Presentation of the two tones causes the observer to perceive two flashes of light.

Visual capture and the two-flash illusion, although both impressive examples of auditory-visual interaction, result in perceptions that don't match reality.

But sound and vision occur together all the time in real-life situations, and when they do, they often complement each other. For example, when you are having a conversation with someone, you are not only hearing what the person is saying, but you may also be watching the person's lips and face, which provide additional information about what is being said.

speechreading

Process by which deaf people determine what people are saying by observing their lip and facial movements.

The idea that there are connections between vision and hearing is also

reflected in the interconnection of the different sensory areas of the brain

echolocation

Locating objects by sending out high-frequency pulses and sensing the echo created when these pulses are reflected from objects in the environment. Echolocation is used by bats and dolphins. ex: some blind people make a clicking sound with their tongue and mouth and use the echoes to sense objects around them.
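
The geometry behind echolocation is a round trip: an echo from an object at distance d returns after t = 2d/c. A sketch with assumed values:

```python
SPEED_OF_SOUND = 343.0  # m/s in air (assumed)

def echo_delay_s(distance_m):
    """Round-trip travel time for a click to reach an object and return."""
    return 2 * distance_m / SPEED_OF_SOUND

print(f"{echo_delay_s(2.0) * 1000:.1f} ms")  # object 2 m away -> ~11.7 ms delay
```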

lore thaler and coworkers (2011)

had two expert echolocators create their clicking sounds as they stood near objects, and recorded the sounds and resulting echoes with small microphones placed in the ears. The question Thaler and coworkers were interested in is how these sounds would activate the brain. When the echolocators later listened to the recordings in an fMRI scanner, the clicks and echoes activated not only auditory areas but also the visual cortex, an activation not found in sighted control subjects.

mor regev and coworkers (2013)

who recorded the fMRI response of subjects as they either listened to a 7-minute spoken story or read the words of the story presented at exactly the same rate that the words had been spoken. Not surprisingly, they found that listening to the story activa