Speech-based tests provide one way to assess hearing loss. An advantage of this approach is that it potentially provides direct information about a listener's ability to understand speech, one of the primary reasons for using an assistive device. Ideally, a speech test could be used to fit a hearing aid if needed. However, these tests have had limited success (Haskell et al., 2002).
Figure: Listeners' error functions for individual speech sounds, centered at each token's 50% point. Most tokens collapse onto similar "z"-shaped functions once token-level error thresholds are accounted for, suggesting that adjusting for token-level differences can explain the majority of the errors listeners make.
One reason that speech tests have been unsuccessful is that they typically average across a range of different talkers, consonants, and contexts. As a result, they are less sensitive to the subtle effects of fine-grained acoustic cues in speech. We have recently shown (Toscano & Allen, 2014) that normal-hearing listeners are highly accurate at identifying specific speech sounds above a critical signal-to-noise ratio (SNR) threshold defined for each individual token.
Indeed, differences in the acoustic properties of specific sounds (e.g., differences between consonants [/b/ vs. /p/], talkers [female vs. male], and coarticulatory contexts [/bI/ vs. /ba/]) account for the majority of errors made by normal-hearing listeners. Thus, to measure a listener's ability to understand speech, we must investigate their perception of specific tokens. An advantage of this approach is that it provides much more detailed information about deficits in a listener's ability to recognize speech.
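The idea of token-level thresholds can be sketched in code. The snippet below is a minimal illustration, not the actual test: it assumes a logistic ("z"-shaped) error function with a shared slope, and the token names and threshold values are hypothetical stand-ins for thresholds that would, in practice, be estimated from listener responses. Expressing SNR relative to each token's own 50% point makes every token trace the same curve, which is the clustering described above.

```python
import math

def p_error(snr_db, threshold_db, slope=1.2):
    # Logistic ("z"-shaped) error function: the probability of
    # misidentifying a token falls from 1 toward 0 as SNR rises
    # past that token's critical threshold. The slope value is an
    # illustrative assumption, shared across tokens.
    return 1.0 / (1.0 + math.exp(slope * (snr_db - threshold_db)))

# Hypothetical per-token thresholds (dB SNR at the 50% error point);
# a real test would estimate these from listeners' responses.
thresholds = {
    "/ba/ talker F1": -16.0,
    "/pa/ talker F1": -10.0,
    "/bi/ talker M2": -13.0,
}

# SNR expressed relative to each token's own threshold, -10 to +10 dB.
snr_rel = [x * 0.5 for x in range(-20, 21)]

# After centering at each token's 50% point, the error curves for all
# tokens coincide: token-level thresholds absorb the differences.
curves = {tok: [p_error(s + thr, thr) for s in snr_rel]
          for tok, thr in thresholds.items()}
```

Because the curves are identical after centering, residual differences between listeners (rather than between tokens) become visible, which is what makes a token-level test more diagnostic than an averaged score.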
In collaboration with our colleagues at Illinois (Dr. Jont Allen's group), we are developing a speech test based on these principles and comparing it with listeners' audiograms and self-report measures of hearing difficulty. We expect that these tests will be able to identify hearing loss, but that the speech test will provide more information about specific deficits in hearing in a shorter amount of time. In the future, this may allow audiologists to fit hearing aids based on a patient's responses to the speech test, providing better outcomes for adults who experience hearing loss.
We are also using neurophysiological measures to develop more accurate speech tests, in collaboration with our colleagues at Nemours A.I. duPont Hospital for Children (Dr. Thierry Morlet's group). This work follows from our studies using the event-related brain potential (ERP) technique to measure cortical responses to specific speech sounds (Toscano et al., 2010). By measuring how listeners process certain acoustic cues and phonetic distinctions in speech, we hope to develop tests that can detect early stages of hearing loss, as well as cases of auditory neuropathy in infants and children, which is difficult to detect using current measurement techniques.