Temporal constraints on human and artificial multi-sensory speech recognition
dc.contributor.author | Perlette, Christopher S. | |
dc.contributor.author | University of Lethbridge. Faculty of Arts and Science | |
dc.contributor.supervisor | Tata, Matthew S. | |
dc.contributor.supervisor | Zhang, John Z. | |
dc.date.accessioned | 2023-04-18T19:37:57Z | |
dc.date.available | 2023-04-18T19:37:57Z | |
dc.date.issued | 2023 | |
dc.degree.level | Masters | en_US |
dc.description.abstract | Audio-Visual Speech Recognition (AVSR) is the process of perceiving and understanding speech using both auditory and visual information. Combining visual information with auditory stimuli has been shown to improve AVSR performance over purely auditory speech recognition when the task is performed in adverse conditions with large amounts of distracting noise. This work examines the relationship between auditory and visual speech information and the effect that audio-visual temporal desynchronization has on AVSR performance. Using a whole-report task, we show that (1) consistent with prior similar work, performance declines asymmetrically depending on the direction and magnitude of the temporal lag, and (2) a common, modern architecture for computational AVSR does not show this asymmetry, indicating a fundamental difference between biological and computational AVSR methods. | en_US |
dc.description.sponsorship | Mitacs ASI NSERC | en_US |
dc.identifier.uri | https://hdl.handle.net/10133/6465 | |
dc.language.iso | en | |
dc.proquest.subject | 0984 | en_US |
dc.proquest.subject | 0317 | en_US |
dc.proquest.subject | 0384 | en_US |
dc.proquestyes | Yes | en_US |
dc.publisher | Lethbridge, Alta. : University of Lethbridge, Dept. of Neuroscience | |
dc.publisher.department | Department of Neuroscience | en_US |
dc.publisher.faculty | Arts and Science | en_US |
dc.relation.ispartofseries | Thesis (University of Lethbridge. Faculty of Arts and Science) | |
dc.subject | machine learning | |
dc.subject | speech recognition | |
dc.subject | behavioural | |
dc.subject | neuroscience | |
dc.subject | Speech processing systems--Research | |
dc.subject | Speech perception--Research | |
dc.subject | Visual perception--Research | |
dc.subject | Lipreading | |
dc.subject | Lipreading--Computer simulation | |
dc.subject | Machine learning | |
dc.subject | Neurosciences | |
dc.subject | Dissertations, Academic | |
dc.title | Temporal constraints on human and artificial multi-sensory speech recognition | en_US |
dc.type | Thesis | en_US |