Audiovisual Speech Recognition: Correspondence between Brain and Behavior

Bibliographic Details
Main Author: Nicholas Altieri (auth)
Format: Electronic Book Chapter
Language: English
Published: Frontiers Media SA 2014
Series: Frontiers Research Topics
Subjects:
Online Access: DOAB: download the publication
DOAB: description of the publication

MARC

LEADER 00000naaaa2200000uu 4500
001 doab_20_500_12854_41553
005 20210211
003 oapen
006 m o d
007 cr|mn|---annan
008 20210211s2014 xx |||||o ||| 0|eng d
020 |a 978-2-88919-251-9 
020 |a 9782889192519 
040 |a oapen  |c oapen 
024 7 |a 10.3389/978-2-88919-251-9  |c doi 
041 0 |a eng 
042 |a dc 
072 7 |a JM  |2 bicssc 
100 1 |a Nicholas Altieri  |4 auth 
245 1 0 |a Audiovisual Speech Recognition: Correspondence between Brain and Behavior 
260 |b Frontiers Media SA  |c 2014 
300 |a 1 electronic resource (101 p.) 
336 |a text  |b txt  |2 rdacontent 
337 |a computer  |b c  |2 rdamedia 
338 |a online resource  |b cr  |2 rdacarrier 
490 1 |a Frontiers Research Topics 
506 0 |a Open Access  |2 star  |f Unrestricted online access 
520 |a Perceptual processes mediating recognition, including the recognition of objects and spoken words, are inherently multisensory. This is true even though sensory inputs are segregated in the early stages of neuro-sensory encoding. In face-to-face communication, for example, auditory information is transduced in the cochlea, encoded in the auditory nerve, and processed in lower cortical areas. Eventually, these "sounds" reach higher cortical pathways such as the auditory cortex, where they are perceived as speech. Likewise, visual information obtained from observing a talker's articulators is encoded in lower visual pathways and then processed in the visual cortex before articulatory gestures are extracted in higher cortical areas associated with speech and language. As language perception unfolds, information garnered from the visual articulators interacts with language processing in multiple brain regions, via visual projections to auditory, language, and multisensory areas. This association of auditory and visual speech signals makes speech a highly "configural" percept. An important direction for the field is thus to develop ways to measure the extent to which visual speech information influences auditory processing and, likewise, to assess how the unisensory components of the signal combine to form a configural, integrated percept. Numerous behavioral measures, such as accuracy (e.g., percent correct, susceptibility to the "McGurk effect") and reaction time (RT), have been employed to assess multisensory integration ability in speech perception. Neural measures obtained from fMRI, EEG, and MEG, on the other hand, have been used to examine the locus and/or time-course of integration. The purpose of this Research Topic is to find converging behavioral and neural assessments of audiovisual integration in speech perception. A further aim is to investigate speech recognition ability in normal-hearing, hearing-impaired, and aging populations. To this end, the goal is to obtain neural measures from EEG and fMRI that shed light on the neural bases of multisensory processes, while connecting them to model-based measures of reaction time and accuracy in the behavioral domain. In doing so, we endeavor to gain a more thorough description of the neural bases and mechanisms underlying integration in higher-order processes such as speech and language recognition. 
540 |a Creative Commons  |f https://creativecommons.org/licenses/by/4.0/  |2 cc  |4 https://creativecommons.org/licenses/by/4.0/ 
546 |a English 
650 7 |a Psychology  |2 bicssc 
653 |a Models of Integration 
653 |a Audiovisual speech and aging 
653 |a Integration Efficiency 
653 |a Multisensory language development 
653 |a Visual prediction 
653 |a Audiovisual integration 
653 |a Imaging 
856 4 0 |a www.oapen.org  |u http://journal.frontiersin.org/researchtopic/1120/audiovisual-speech-recognition-correspondence-between-brain-and-behavior  |7 0  |z DOAB: download the publication 
856 4 0 |a www.oapen.org  |u https://directory.doabooks.org/handle/20.500.12854/41553  |7 0  |z DOAB: description of the publication
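
For readers who want to work with the record above programmatically, here is a minimal sketch using the pymarc library. It assumes the record has been exported as binary MARC to a file; the filename doab_41553.mrc is a hypothetical placeholder.

```python
# Minimal sketch: reading the DOAB record above with pymarc.
# Assumes a binary MARC export named 'doab_41553.mrc' (hypothetical name).
from pymarc import MARCReader

with open('doab_41553.mrc', 'rb') as fh:
    for record in MARCReader(fh):
        print('Title:  ', record['245']['a'])   # 245 $a: title proper
        print('Author: ', record['100']['a'])   # 100 $a: main entry
        print('DOI:    ', record['024']['a'])   # 024 $a: DOI identifier
        # 653 is repeatable: one field per uncontrolled keyword.
        for field in record.get_fields('653'):
            print('Keyword:', field['a'])
        # 856 $u holds the online access URLs (download and description).
        for field in record.get_fields('856'):
            print('URL:    ', field['u'])
```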
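
The 520 abstract mentions model-based measures of reaction time for quantifying audiovisual integration. One widely used measure of this kind in the multisensory literature, though not necessarily the one adopted by the contributors, is Miller's (1982) race-model inequality: if the audiovisual response-time CDF exceeds the sum of the unisensory CDFs at any time t, responses are faster than any independent parallel race of the two channels allows, which is taken as evidence of integration. A sketch with made-up RT data:

```python
# Illustrative sketch of Miller's race-model inequality with hypothetical
# response times (ms); none of these numbers come from the book.
import numpy as np

def ecdf(rts, t):
    """Empirical CDF: proportion of response times at or below each t."""
    return np.mean(np.asarray(rts)[:, None] <= t, axis=0)

rt_audio  = np.array([512, 498, 530, 545, 560, 505, 520])  # auditory-only
rt_visual = np.array([601, 590, 615, 570, 588, 605, 596])  # visual-only
rt_av     = np.array([455, 470, 462, 480, 490, 458, 475])  # audiovisual

t = np.arange(400, 700, 10)                      # evaluation grid (ms)
bound = np.minimum(ecdf(rt_audio, t) + ecdf(rt_visual, t), 1.0)
violation = ecdf(rt_av, t) - bound               # > 0: race model violated

print('Maximum race-model violation:', violation.max())
```

A related measure from the same literature is the capacity coefficient C(t), which compares the cumulative hazard function of the audiovisual condition against the sum of the unisensory hazards in a similar spirit.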