Tata, Matthew

Recent Submissions

  • Item
    AI-powered speech device as a tool for neuropsychological assessment of an older adult population: a preliminary study
    (Elsevier, 2025) Aguilar Ramirez, Daniela E.; Grasse, Lukas; Stone, Scott; Tata, Matthew; Gonzalez, Claudia L. R.
    As the older adult population continues to expand, demands on the healthcare system intensify, necessitating the development of technologies that effectively accommodate the requirements of older adults. While Artificial Intelligence (AI) systems hold promise as a solution, they have not been designed to accommodate the sensory and cognitive changes typical of aging individuals. The current study investigates the use of an AI-powered communication device to administer neuropsychological tests to an older adult population. Twenty-four older adult participants (twelve female) completed three memory tasks using the AI device: logical memory, poem recall, and the backward and sequencing digit span tests. Significant negative correlations were found between participants' age and performance on the logical memory and digit span tests. The AI device identified age-related memory changes comparable to those observed with human administrators. Implementing this technology in healthcare offers several advantages: alleviating healthcare professionals' workload, improving the standard of care by reaching underserved populations, and facilitating continuous screening for early identification of prodromal stages of neurodegenerative diseases.
  • Item
    A Bayesian computational basis for auditory selective attention using head rotation and the interaural time-difference cue
    (Public Library of Science, 2017) Hambrook, Dillon A.; Ilievski, Marko; Mosadeghzad, Mohamad; Tata, Matthew S.
    The process of resolving mixtures of several sounds into their separate individual streams is known as auditory scene analysis, and it remains a challenging task for computational systems. It is well known that animals use binaural differences in arrival time and intensity at the two ears to find the arrival angle of sounds in the azimuthal plane, and this localization function has sometimes been considered sufficient to enable the un-mixing of complex scenes. However, the ability of such systems to resolve distinct sound sources in both space and frequency remains limited. The neural computations for detecting interaural time difference (ITD) have been well studied and have served as the inspiration for computational auditory scene analysis systems; however, a crucial limitation of ITD models is that they produce ambiguous or “phantom” images in the scene. This has been thought to limit their usefulness at frequencies above about 1 kHz in humans. We present a simple Bayesian model, and an implementation on a robot, that uses ITD information recursively. The model makes use of head rotations to show that ITD information is sufficient to unambiguously resolve sound sources in both space and frequency. Contrary to commonly held assumptions about sound localization, we show that the ITD cue used with high-frequency sound can provide accurate and unambiguous localization and resolution of competing sounds. Our findings suggest that an “active hearing” approach could be useful in robotic systems that operate in natural, noisy settings. We also suggest that neurophysiological models of sound localization in animals could benefit from revision to include the influence of top-down memory and sensorimotor integration across head rotations.
  • Item
    Correction: Dynamics of distraction: competition among auditory streams modulates gain and disrupts inter-trial phase coherence in the human electroencephalogram
    (Public Library of Science, 2013) Ponjavic-Conte, Karla D.; Hambrook, Dillon A.; Pavlovic, Sebastian; Tata, Matthew S.
  • Item
    Dynamics of distraction: competition among auditory streams modulates gain and disrupts inter-trial phase coherence in the human electroencephalogram
    (Public Library of Science, 2013) Ponjavic-Conte, Karla D.; Hambrook, Dillon A.; Pavlovic, Sebastian; Tata, Matthew S.
    Auditory distraction is a failure to maintain focus on a stream of sounds. We investigated the neural correlates of distraction in a selective-listening pitch-discrimination task with high (competing speech) or low (white noise) distraction. High distraction impaired performance and reduced the N1 peak of the auditory event-related potential evoked by probe tones. In a series of simulations, we explored two theories to account for this effect: a disruption of sensory gain or a disruption of inter-trial phase consistency. When compared to these simulations, our data were consistent with both effects of distraction. Distraction reduced the gain of the auditory evoked potential and disrupted the inter-trial phase consistency with which the brain responds to stimulus events. Tones at a non-target, unattended frequency were more susceptible to the effects of distraction than tones within an attended frequency band.
  • Item
    Rendering visual events as sounds: spatial attention capture by auditory augmented reality
    (Public Library of Science, 2017) Stone, Scott A.; Tata, Matthew S.
    Many salient visual events tend to coincide with auditory events, such as seeing and hearing a car pass by. Information from the visual and auditory senses can be combined to create a stable percept of a stimulus, and access to coincident visual and auditory information can help with spatial tasks such as localization. However, not all visual information has an analogous auditory percept; consider, for example, viewing a computer monitor. Here, we describe a system capable of detecting salient visual events and rendering them as localizable auditory events. The system uses a neuromorphic camera (DAVIS 240B) to detect logarithmic changes of brightness intensity in the scene, which can be interpreted as salient visual events. Participants were blindfolded and asked to use the device to detect new objects in the scene, as well as to determine the direction of motion of a moving visual object. Results suggest the system is robust enough to allow for the simple detection of new salient stimuli, as well as accurate encoding of the direction of visual motion. Future successes are probable, as neuromorphic devices are likely to become faster and smaller, making this system more feasible.
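
The recursive use of the ITD cue across head rotations, summarized in the abstract of "A Bayesian computational basis for auditory selective attention," can be illustrated with a toy Bayesian update. This is a minimal sketch with assumed parameters (microphone spacing, ITD noise, one-degree grid), not the authors' implementation: a single ITD reading is front-back ambiguous because sin(θ) is the same for θ and 180° − θ, but a second reading taken after a head rotation lets the posterior single out the true world-frame angle.

```python
import numpy as np

# Assumed parameters for illustration only.
EAR_DISTANCE = 0.18      # metres between microphones
SPEED_OF_SOUND = 343.0   # m/s
SIGMA_ITD = 2e-5         # assumed ITD measurement noise (seconds)

def expected_itd(azimuth_deg):
    """Idealized ITD for a source at a head-relative azimuth (degrees)."""
    return (EAR_DISTANCE / SPEED_OF_SOUND) * np.sin(np.deg2rad(azimuth_deg))

def bayes_update(prior, measured_itd, head_deg, world_angles):
    """Multiply a prior over world-frame angles by the ITD likelihood."""
    relative = world_angles - head_deg  # each world angle as seen from the head
    likelihood = np.exp(
        -0.5 * ((measured_itd - expected_itd(relative)) / SIGMA_ITD) ** 2
    )
    posterior = prior * likelihood
    return posterior / posterior.sum()

# Simulated source fixed at 40 degrees in the world frame; the robot
# measures ITD, rotates its head 30 degrees, and measures again.
world_angles = np.arange(-180, 180)
belief = np.full(world_angles.size, 1.0 / world_angles.size)
true_angle = 40.0
for head in (0.0, 30.0):
    measured = expected_itd(true_angle - head)  # noiseless reading for clarity
    belief = bayes_update(belief, measured, head, world_angles)

print(world_angles[np.argmax(belief)])  # posterior peaks at the true angle, 40
```

After the first measurement the belief has two peaks (40° and its front-back mirror); the rotated second measurement is inconsistent with the mirror location, so the recursive update keeps only the true one.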
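The inter-trial phase coherence measure discussed in "Dynamics of distraction" is conventionally computed as the magnitude of the mean unit phase vector across trials: 1 means perfectly consistent phase at a given time and frequency, values near 0 mean random phase. A minimal sketch on synthetic phase data (not the study's EEG pipeline):

```python
import numpy as np

def itpc(phases):
    """Inter-trial phase coherence for per-trial phase angles in radians."""
    return np.abs(np.mean(np.exp(1j * np.asarray(phases))))

rng = np.random.default_rng(0)
consistent = rng.normal(loc=0.5, scale=0.2, size=200)  # tightly clustered phases
scattered = rng.uniform(-np.pi, np.pi, size=200)       # uniformly random phases

print(itpc(consistent))  # close to 1
print(itpc(scattered))   # close to 0
```

In this picture, the paper's two candidate accounts are distinguishable: a pure gain reduction shrinks the evoked amplitude while leaving ITPC high, whereas disrupted phase consistency drives ITPC toward zero even if single-trial amplitudes are unchanged.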
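The logarithmic brightness-change detection described in "Rendering visual events as sounds" can be sketched frame-wise. The contrast threshold and reset logic below are assumptions for illustration, not the DAVIS 240B's actual circuitry, which fires events asynchronously per pixel rather than on whole frames:

```python
import numpy as np

CONTRAST_THRESHOLD = 0.2  # assumed log-intensity change needed to fire an event

def detect_events(ref_log, frame):
    """Compare a new frame against per-pixel log-intensity references.

    Returns (events, updated_ref): events is +1 / -1 / 0 per pixel for a
    brightness increase / decrease / no change beyond the threshold.
    """
    log_frame = np.log(frame + 1e-6)           # avoid log(0)
    diff = log_frame - ref_log
    events = np.zeros_like(diff, dtype=int)
    events[diff > CONTRAST_THRESHOLD] = 1      # ON events
    events[diff < -CONTRAST_THRESHOLD] = -1    # OFF events
    fired = events != 0
    ref_log = np.where(fired, log_frame, ref_log)  # reset reference where fired
    return events, ref_log

# A bright object appears in the right half of a uniform scene.
ref = np.log(np.full((4, 4), 10.0))
frame = np.full((4, 4), 10.0)
frame[:, 2:] = 40.0
events, ref = detect_events(ref, frame)
print(int(events.sum()))  # 8: one ON event per brightened pixel
```

Clusters of such events mark where something salient just happened, which is what the system then renders as a spatialized sound for the blindfolded listener.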