Speech Perception and Language Lab at Villanova University
Welcome to the Word Recognition & Auditory Perception Lab (WRAP Lab)! Our group studies how human listeners recognize speech and understand spoken language. Investigating language processing as it happens is central to our approach, and we use a combination of computational, cognitive neuroscience, and behavioral techniques to study these processes.
Find out more about our research on this site. Thanks for stopping by! — Joe Toscano
Here's what we've been up to lately
Undergraduate student Anne Marie Crinnion gave a talk on her work using tools from graph theory, specifically Steiner trees, to identify networks of acoustic cues relevant for fricatives.
Grad student Abby Benecke presented a poster on her computational modeling research investigating which cues are necessary for categorizing voiced versus voiceless stop consonants. Specifically, she tested (1) whether VOT alone is sufficient and (2) how well the model performs without VOT. Click here to read more.
In a paper in press at Language and Speech, Dr. Toscano and Dr. Charissa Lansing from the University of Illinois investigated how cue weights in speech perception change with age. Young adults (18-30 years old) use both voice onset time (VOT) and f0 as cues to voicing. Older adults (approx. 30-50 years old) do as well, but they rely more on f0, even though it is a less reliable cue than VOT. This shows that listeners continue to reweight acoustic cues in speech across the lifespan. Read the article here.
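The cue-reweighting idea above can be illustrated with a toy model in which each cue contributes a weighted log-likelihood to each voicing category. All numbers here (category means, standard deviations, and the weight settings for the two hypothetical listeners) are illustrative assumptions for the sketch, not values from the paper:

```python
import math

# Hypothetical category statistics (mean, SD) for a /b/-/p/ voicing contrast.
# These values are illustrative only.
CUES = {
    "vot": {"b": (5.0, 8.0), "p": (50.0, 12.0)},      # voice onset time, ms
    "f0":  {"b": (180.0, 30.0), "p": (220.0, 30.0)},  # f0, Hz (more overlap -> less reliable)
}

def log_likelihood(x, mean, sd):
    """Log of a Gaussian density for a cue value under one category."""
    return -0.5 * ((x - mean) / sd) ** 2 - math.log(sd * math.sqrt(2 * math.pi))

def classify(token, weights):
    """Weighted cue combination: sum the weighted log-likelihoods per category
    and pick the category with the higher score."""
    scores = {}
    for category in ("b", "p"):
        scores[category] = sum(
            weights[cue] * log_likelihood(token[cue], *CUES[cue][category])
            for cue in token
        )
    return max(scores, key=scores.get)

# A listener weighting VOT heavily vs. one relying more on f0 (hypothetical weights)
vot_weighted = {"vot": 0.8, "f0": 0.2}
f0_weighted = {"vot": 0.4, "f0": 0.6}

# An ambiguous VOT (20 ms) paired with a clearly voiceless-like f0 (240 Hz)
token = {"vot": 20.0, "f0": 240.0}
print(classify(token, vot_weighted))  # "b": VOT dominates
print(classify(token, f0_weighted))   # "p": f0 dominates
```

With the same acoustic input, the two weight settings yield different category decisions, which is the sense in which shifting cue weights changes perception.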
November 9, 2017
Reprints of conference presentations by lab members from the Fall 2017 semester are below:
Earlier this summer, graduate student Abby Benecke was among the 2017 recipients of the Graduate Travel Award from the Psychonomic Society for her research, "Classification of English Stop Consonants: A Comparison of Multiple Models of Speech Perception." Click here to read more. Way to go, Abby!
Dr. Toscano recently published a paper entitled "Gradient acoustic information induces long-lasting referential uncertainty in short discourses" with Dr. Sarah Brown-Schmidt of Vanderbilt University in Language, Cognition, and Neuroscience. The study uses the visual-world eye-tracking paradigm to show that listeners maintain activation of referents in a scene based on fine-grained acoustic information in pronouns over a multi-word delay. Check out the article here.
March 23, 2017
Postdoctoral fellow Laura Getz, along with grad students Elke Nordeen and Sarah Vrabaic, recently published a paper entitled "Modeling the Development of Audiovisual Cue Integration in Speech Perception."
We know that adult speech perception is generally enhanced when information is provided from multiple modalities (i.e., when you can both hear the speaker and see their lip movements). In contrast, infants do not appear to benefit from combining auditory and visual speech information early in development. How, then, do listeners learn to process auditory and visual information as part of a unified signal? In this paper, we used a computational modeling approach to simulate the developmental time course of audiovisual speech integration. We find that a domain-general statistical learning mechanism provides a developmentally plausible account of how audiovisual speech integration emerges.
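To give a flavor of what "statistical learning" means here, the sketch below uses simple unsupervised 2-means clustering as a stand-in: categories emerge from the distribution of unlabeled cue values alone, with no category labels provided. This is a generic illustration of the principle, not the model from the paper, and the cue distributions are made-up numbers:

```python
import random

def learn_categories(samples, n_iter=25):
    """Discover two categories from unlabeled cue values via 2-means
    clustering -- a minimal stand-in for statistical learning."""
    centers = [min(samples), max(samples)]  # deterministic initialization
    for _ in range(n_iter):
        groups = ([], [])
        for x in samples:
            # assign each token to the nearer center
            groups[abs(x - centers[0]) > abs(x - centers[1])].append(x)
        # move each center to the mean of its assigned tokens
        centers = [sum(g) / len(g) if g else c for g, c in zip(groups, centers)]
    return sorted(centers)

# Unlabeled "tokens" sampled from two hypothetical cue distributions
rng = random.Random(0)
data = [rng.gauss(5, 3) for _ in range(200)] + [rng.gauss(50, 5) for _ in range(200)]

centers = learn_categories(data)
print(centers)  # two cluster centers, near the true distribution means
```

The learner never sees labels, yet the recovered centers land near the means of the two generating distributions; distributional structure in the input is enough to induce the categories.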
Please contact us if you would like to learn more about our research, request a copy of a paper, are interested in joining the lab, or have any other questions. Our email address is firstname.lastname@example.org.
Scheduled to participate in a study? The main lab is located in Tolentine Hall, Room 231. Some of our experiments also take place in the eye-tracking lab in Tolentine 18A. If you're scheduled to participate in an experiment but aren't sure where to go, please come to the main lab and a research assistant will meet you there!