Show simple item record

dc.contributor.author	Grossman, Ruth B.
dc.contributor.author	Steinhart, Erin
dc.contributor.author	Mitchell, Teresa V.
dc.contributor.author	McIlvane, William J.
dc.date	2022-08-11T08:08:32.000
dc.date.accessioned	2022-08-23T15:58:06Z
dc.date.available	2022-08-23T15:58:06Z
dc.date.issued	2015-06-01
dc.date.submitted	2015-05-18
dc.identifier.citation	Grossman RB, Steinhart E, Mitchell T, McIlvane W. "Look who's talking!" Gaze Patterns for Implicit and Explicit Audio-Visual Speech Synchrony Detection in Children With High-Functioning Autism. Autism Res. 2015 Jun;8(3):307-16. doi: 10.1002/aur.1447. Epub 2015 Jan 24. PubMed PMID: 25620208; PubMed Central PMCID: PMC4474762. Link to article on publisher's site: http://dx.doi.org/10.1002/aur.1447
dc.identifier.issn	1939-3806 (Linking)
dc.identifier.doi	10.1002/aur.1447
dc.identifier.pmid	25620208
dc.identifier.uri	http://hdl.handle.net/20.500.14038/30356
dc.description.abstract	Conversation requires integration of information from faces and voices to fully understand the speaker's message. To detect auditory-visual asynchrony of speech, listeners must integrate visual movements of the face, particularly the mouth, with auditory speech information. Individuals with autism spectrum disorder may be less successful at such multisensory integration, despite their demonstrated preference for looking at the mouth region of a speaker. We showed participants (individuals with and without high-functioning autism (HFA) aged 8-19) a split-screen video of two identical individuals speaking side by side. Only one of the speakers was in synchrony with the corresponding audio track and synchrony switched between the two speakers every few seconds. Participants were asked to watch the video without further instructions (implicit condition) or to specifically watch the in-synch speaker (explicit condition). We recorded which part of the screen and face their eyes targeted. Both groups looked at the in-synch video significantly more with explicit instructions. However, participants with HFA looked at the in-synch video less than typically developing (TD) peers and did not increase their gaze time as much as TD participants in the explicit task. Importantly, the HFA group looked significantly less at the mouth than their TD peers, and significantly more at non-face regions of the image. There were no between-group differences for eye-directed gaze. Overall, individuals with HFA spend less time looking at the crucially important mouth region of the face during auditory-visual speech integration, which is maladaptive gaze behavior for this type of task.
dc.language.iso	en_US
dc.relation	Link to Article in PubMed: http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?db=pubmed&cmd=Retrieve&list_uids=25620208&dopt=Abstract
dc.relation.url	http://dx.doi.org/10.1002/aur.1447
dc.subject	Cognition and Perception
dc.subject	Mental Disorders
dc.subject	Psychiatry and Psychology
dc.title	"Look who's talking!" Gaze Patterns for Implicit and Explicit Audio-Visual Speech Synchrony Detection in Children With High-Functioning Autism
dc.type	Journal Article
dc.source.journaltitle	Autism research : official journal of the International Society for Autism Research
dc.source.volume	8
dc.source.issue	3
dc.identifier.legacycoverpage	https://escholarship.umassmed.edu/faculty_pubs/626
dc.identifier.contextkey	7111917
dc.identifier.submissionpath	faculty_pubs/626
dc.contributor.department	Intellectual and Developmental Disabilities Research Center
dc.contributor.department	Shriver Center
dc.source.pages	307-16

