Speech Perception and Language Lab at Villanova University
Welcome to the Word Recognition & Auditory Perception Lab (WRAP Lab)! Our group studies how human listeners recognize speech and understand spoken language. Investigating language processing as it happens is central to our approach, and we use a combination of computational, cognitive neuroscience, and behavioral techniques to study these processes.
Find out more about our research on this site. Thanks for stopping by! — Joe Toscano
Here's what we've been up to lately
A new paper by WRAP Lab grad student Lexie Tabachnick, published in the Journal of Speech, Language, and Hearing Research, investigates how auditory brainstem responses (ABRs) to tones vary as a function of stimulus frequency. This study demonstrates that the ABR can provide a useful index of perceptual encoding across a wide range of sound frequencies, with the amplitude of the ABR precisely tracking tone frequency from 500 to 8000 Hz. This provides a crucial counterpart to our work on perceptual encoding in cortical responses and offers new insights into how the ABR can be used to study auditory perception in normal-hearing listeners and to measure effects of hearing loss.
Click here to see the paper.
In a new paper published in Brain & Language, we used the fast optical imaging technique to study the time-course of speech perception. We show that the brain encodes sounds in terms of continuous acoustic cues at early stages of perception and rapidly begins to categorize them based on phonological differences. This technique allows us to study these responses non-invasively in human subjects. Check out the paper here.
Is it "yanny" or is it "laurel"? Click here to read our lab's explantion of this illusion!
How do talkers indicate information about discourse status through differences in specific acoustic cues, and how is this affected by communicative context? In a new paper published in Discourse Processes with our colleagues Andrés Buxó-Lugo and Duane Watson, we show that game-based approaches (specifically using Minecraft) allow us to create naturalistic experiments for studying speech communication in the lab, revealing differences in the reliability of cues across communicative contexts.
Undergraduate student Anne Marie Crinnion gave a talk on how her work uses tools from graph theory, namely Steiner trees, to find networks of relevant acoustic cues for fricatives.
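To give a rough sense of the approach, here is a minimal sketch of finding a Steiner tree over a cue graph. The graph, the cue names, and the edge weights below are hypothetical illustrations, not Anne Marie's actual data or model; the sketch simply uses the Steiner tree approximation available in the networkx library.

```python
import networkx as nx
from networkx.algorithms.approximation import steiner_tree

# Toy graph: nodes are acoustic cues, edge weights are
# hypothetical dissimilarities between cues (illustrative values only).
G = nx.Graph()
G.add_weighted_edges_from([
    ("spectral_peak", "spectral_mean", 1.0),
    ("spectral_mean", "duration", 2.0),
    ("duration", "amplitude", 1.5),
    ("spectral_peak", "amplitude", 3.0),
    ("spectral_mean", "amplitude", 2.5),
])

# Terminal nodes: cues that must all be connected, e.g. cues known
# to be relevant for a given fricative contrast.
terminals = ["spectral_peak", "duration", "amplitude"]

# Approximate minimum-weight Steiner tree connecting the terminals;
# non-terminal ("Steiner") nodes are included if they shorten the tree.
T = steiner_tree(G, terminals, weight="weight")
print(sorted(T.edges(data="weight")))
```

The resulting subgraph can be read as a compact network of cues that jointly support the contrast, which is the intuition behind using Steiner trees here.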
Grad student Abby Benecke presented a poster on her computational modeling research investigating which cues are necessary for categorizing voiced versus voiceless stop consonants. Specifically, she tested (1) whether VOT alone is sufficient for accurate categorization and (2) how well the model performs without VOT. Click here to read more.
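As a minimal sketch of what a single-cue categorization test can look like, the toy model below fits a logistic classifier to simulated VOT values. The distributions are rough textbook approximations for English stops, not Abby's model or stimuli.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Simulated VOT distributions (ms); means and spreads are
# illustrative values, not data from the poster.
voiced = rng.normal(loc=10, scale=5, size=200)      # e.g., /b, d, g/
voiceless = rng.normal(loc=60, scale=15, size=200)  # e.g., /p, t, k/

X = np.concatenate([voiced, voiceless]).reshape(-1, 1)
y = np.concatenate([np.zeros(200), np.ones(200)])   # 0 = voiced, 1 = voiceless

# A single-cue classifier: can VOT alone separate the categories?
model = LogisticRegression().fit(X, y)
print("accuracy with VOT alone:", model.score(X, y))

# The category boundary falls where P(voiceless) = 0.5,
# i.e., where the linear predictor crosses zero.
boundary = -model.intercept_[0] / model.coef_[0, 0]
print("estimated boundary: %.1f ms" % boundary)
```

Dropping VOT and refitting with other cues would address the second question: how much categorization accuracy survives without it.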
In a paper in press at Language and Speech, Dr. Toscano and Dr. Charissa Lansing from the University of Illinois investigated how cue weights in speech perception change with age. Young adults (18-30 years old) use both voice onset time (VOT) and f0 as cues to voicing. Older adults (approx. 30-50 years old) do as well, but they rely more on f0, even though it is a less reliable cue than VOT. This shows that listeners continue to reweight acoustic cues in speech across the lifespan. Read the article here.
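One way to make the idea of cue weights concrete is a two-cue logistic model, where the standardized coefficients for VOT and f0 can be read as relative weights; a listener who relies more on f0 corresponds to a larger f0 coefficient. The sketch below uses simulated stimuli with illustrative values, not the data or analysis from the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 400

# Simulated stimuli: both VOT (ms) and onset f0 (Hz) covary with
# voicing, but f0 overlaps more across categories (less reliable).
voicing = rng.integers(0, 2, size=n)              # 0 = voiced, 1 = voiceless
vot = np.where(voicing, rng.normal(60, 15, n), rng.normal(10, 5, n))
f0 = np.where(voicing, rng.normal(110, 20, n), rng.normal(95, 20, n))

# Standardizing puts the two cues on a common scale, so the fitted
# coefficients are directly comparable as cue weights.
X = StandardScaler().fit_transform(np.column_stack([vot, f0]))
model = LogisticRegression().fit(X, voicing)

w_vot, w_f0 = model.coef_[0]
print("VOT weight: %.2f, f0 weight: %.2f" % (w_vot, w_f0))
```

In this framing, the age difference reported in the paper amounts to a shift in the ratio of these weights: older listeners' behavior looks like a model with a relatively larger f0 coefficient, even though f0 is the noisier cue.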
Please contact us if you would like to learn more about our research, request a copy of a paper, are interested in joining the lab, or have any other questions. Our email address is firstname.lastname@example.org.
Scheduled to participate in a study? The main lab is located in Tolentine Hall, Room 231. Some of our experiments also take place in the eye-tracking lab in Tolentine 18A. If you're scheduled to participate in an experiment but aren't sure where to go, please come to the main lab and a research assistant will meet you there!