A Bayesian computational basis for auditory selective attention using head rotation and the interaural time-difference cue

dc.contributor.author: Hambrook, Dillon A.
dc.contributor.author: Ilievski, Marko
dc.contributor.author: Mosadeghzad, Mohamad
dc.contributor.author: Tata, Matthew S.
dc.date.accessioned: 2018-11-27T05:04:09Z
dc.date.available: 2018-11-27T05:04:09Z
dc.date.issued: 2017
dc.description: Open access; distributed under the terms of the Creative Commons Attribution License
dc.description.abstract: The process of resolving mixtures of several sounds into their separate individual streams is known as auditory scene analysis, and it remains a challenging task for computational systems. It is well known that animals use binaural differences in arrival time and intensity at the two ears to find the arrival angle of sounds in the azimuthal plane, and this localization function has sometimes been considered sufficient to enable the un-mixing of complex scenes. However, the ability of such systems to resolve distinct sound sources in both space and frequency remains limited. The neural computations for detecting interaural time difference (ITD) have been well studied and have served as the inspiration for computational auditory scene analysis systems; however, a crucial limitation of ITD models is that they produce ambiguous or "phantom" images in the scene. This has been thought to limit their usefulness at frequencies above about 1 kHz in humans. We present a simple Bayesian model, and an implementation on a robot, that uses ITD information recursively. The model makes use of head rotations to show that ITD information is sufficient to unambiguously resolve sound sources in both space and frequency. Contrary to commonly held assumptions about sound localization, we show that the ITD cue used with high-frequency sound can provide accurate and unambiguous localization and resolution of competing sounds. Our findings suggest that an "active hearing" approach could be useful in robotic systems that operate in natural, noisy settings. We also suggest that neurophysiological models of sound localization in animals could benefit from revision to include the influence of top-down memory and sensorimotor integration across head rotations.
dc.description.peer-review: Yes
dc.identifier.citation: Hambrook, D. A., Ilievski, M., Mosadeghzad, M., & Tata, M. S. (2017). A Bayesian computational basis for auditory selective attention using head rotation and the interaural time-difference cue. PLoS ONE, 12(10), e0186104. https://doi.org/10.1371/journal.pone.0186104
dc.identifier.uri: https://hdl.handle.net/10133/5249
dc.language.iso: en_US
dc.publisher: Public Library of Science
dc.publisher.department: Department of Neuroscience
dc.publisher.faculty: Arts and Science
dc.publisher.institution: University of Lethbridge
dc.subject: Bayesian
dc.subject: Selective attention
dc.subject: Head rotation
dc.subject: Computational systems
dc.subject: Interaural time difference
dc.subject: Sound localization
dc.subject.lcsh: Auditory selective attention
dc.subject.lcsh: Computational auditory scene analysis
dc.subject.lcsh: Acoustic localization
dc.title: A Bayesian computational basis for auditory selective attention using head rotation and the interaural time-difference cue
dc.type: Article
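The abstract above describes a recursive Bayesian update over azimuth hypotheses in which head rotations disambiguate the front-back "phantom" images inherent to the ITD cue. The following Python sketch illustrates that idea only; it is not the paper's implementation, and the ear spacing, noise width, and sinusoidal ITD model are assumptions made for the example.

import numpy as np

# Illustrative parameters (assumptions, not values from the paper).
D = 0.18     # distance between the two microphones/ears, in metres
C = 343.0    # speed of sound, in m/s

# World-frame azimuth hypotheses and a flat prior over them.
azimuths = np.deg2rad(np.arange(-180.0, 180.0, 2.0))
posterior = np.full(azimuths.size, 1.0 / azimuths.size)

def itd_model(theta_rel):
    # Simple free-field ITD model: tau = (D/C) * sin(angle relative to head).
    # Front-back pairs give identical ITDs -- the source of "phantom" images.
    return (D / C) * np.sin(theta_rel)

def update(posterior, observed_itd, head_yaw, sigma=20e-6):
    # One recursive Bayesian step: posterior proportional to likelihood * prior.
    # The likelihood is Gaussian in ITD, evaluated for each world-frame
    # hypothesis after subtracting the current head orientation.
    lik = np.exp(-0.5 * ((observed_itd - itd_model(azimuths - head_yaw)) / sigma) ** 2)
    post = posterior * lik
    return post / post.sum()

# Simulated source fixed at +40 degrees in the world frame; the head rotates
# between observations, so the phantom image moves inconsistently across
# looks while the true source stays put, and the posterior converges.
true_az = np.deg2rad(40.0)
for head_yaw in np.deg2rad([0.0, 15.0, 30.0]):
    observed = itd_model(true_az - head_yaw)   # noiseless ITD observation
    posterior = update(posterior, observed, head_yaw)

print("MAP azimuth (deg):", np.rad2deg(azimuths[posterior.argmax()]))

With a single look (head at 0 degrees), the posterior has equal peaks at +40 and +140 degrees, since both produce the same ITD; after the rotated looks, only +40 degrees remains consistent, which is the disambiguation effect the abstract attributes to sensorimotor integration across head rotations.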
Files
Original bundle (1 of 1)
Name: Tata bayesian computational basis.pdf
Size: 3.41 MB
Format: Adobe Portable Document Format
License bundle (1 of 1)
Name: license.txt
Size: 1.75 KB
Description: Item-specific license agreed upon to submission