Selective attention for audiovisual integration of speech

dc.contributor.author: Boutros, Sylvain John
dc.contributor.author: University of Lethbridge. Faculty of Arts and Science
dc.contributor.supervisor: Tata, Matthew S.
dc.date.accessioned: 2023-03-02T19:09:09Z
dc.date.available: 2023-03-02T19:09:09Z
dc.date.issued: 2022
dc.degree.level: Masters
dc.description.abstract: The perceptual brain decomposes the audiovisual world into a set of features such as color, shape, and pitch. An important question in perception science is whether selective attention is required to bind audiovisual features back into unified perceptual objects. In visual displays, targets defined as conjunctions of bound features typically cannot be searched for in parallel across a complex scene; instead, attention must be scanned through the scene to find such targets, which means that the conjunction of features requires selective attention. Only a few prior studies have investigated this process in the crossmodal audiovisual case. These studies left some ambiguity as to how temporal dynamics interact with selective attention, and none used dynamic audiovisual speech as stimuli. In two experiments, this thesis explored whether the crossmodal conjunction of auditory and visual speech features requires selective attention. In Chapter 2, we presented observers with displays of multiple visual faces and one audible voice; the task was to determine whether one of the faces was saying a sentence that matched the voice. In Chapter 3, we presented observers with displays of multiple audible voices and one visual face; similarly, the task was to determine whether one of the voices was saying the sentence that matched the lip movements of the visual face. We compared response times to complete the task as the number of distracting items was increased and found that the task took longer with an increasing number of distractors. This means that audiovisual speech targets cannot be registered in parallel across the scene and strongly suggests that selective attention is required to bind speech features across modalities.
dc.identifier.uri: https://hdl.handle.net/10133/6445
dc.language.iso: en_US
dc.proquest.subject: 0317
dc.proquest.subject: 0633
dc.proquest.subject: 0623
dc.proquestyes: Yes
dc.publisher: Lethbridge, Alta. : University of Lethbridge, Dept. of Neuroscience
dc.publisher.department: Department of Neuroscience
dc.publisher.faculty: Arts and Science
dc.relation.ispartofseries: Thesis (University of Lethbridge. Faculty of Arts and Science)
dc.subject: neuroscience
dc.subject: audiovisual
dc.subject: psychology
dc.subject: cognitive science
dc.subject: auditory
dc.subject: visual
dc.subject: visual perception
dc.subject: auditory perception
dc.subject: speech integration
dc.subject: Neurosciences
dc.subject: Psychology
dc.subject: Cognitive neuroscience
dc.subject: Auditory perception
dc.subject: Visual perception
dc.subject: Speech perception
dc.subject: Selectivity (Psychology)
dc.subject: Human information processing
dc.subject: Attention
dc.subject: Distraction (Psychology)
dc.subject: Dissertations, Academic
dc.title: Selective attention for audiovisual integration of speech
dc.type: Thesis
Files

Original bundle
Name: BOUTROS_SYLVAIN_MSC_2022.pdf
Size: 1.36 MB
Format: Adobe Portable Document Format
License bundle
Name: license.txt
Size: 3.25 KB
Format: Item-specific license agreed upon to submission