Speech Perception and Language Lab at Villanova University
Welcome to the Word Recognition & Auditory Perception Lab (WRAP Lab)! Our group studies how human listeners recognize speech and understand spoken language. Investigating language processing as it happens is central to our approach, and we use a combination of computational, cognitive neuroscience, and behavioral techniques to study these processes.
Find out more about our research on this site. Thanks for stopping by! — Joe Toscano
Here's what we've been up to lately
Dr. Laura Getz and Joe Toscano published a paper in Attention, Perception, & Psychophysics examining the McGurk effect, an audiovisual speech illusion in which perception of speech differs from both the audio and visual information (e.g., perceiving /d/ when you see /g/ and hear /b/). They found that the effect depends on a number of task and subject variables, challenging the idea that it can be explained as a low-level perceptual illusion.
In a paper published in PLOS ONE, Joe and Cheyenne Toscano investigated the effects of face masks on speech recognition in multi-talker babble noise. At low noise levels, they found that different types of masks (surgical masks, N95 masks, and homemade cloth masks) generally had similar effects on speech recognition. At high levels of background noise, however, the mask types differed, with surgical masks allowing for the most effective communication.
The WRAP Lab welcomes postdoctoral fellow Dr. McCall Sarrett! McCall joins us from the University of Iowa, where she completed her Ph.D. in Neuroscience. Prior to her graduate work at Iowa, McCall completed her B.A. in Neuroscience & Speech Perception at the University of Tennessee. Her research examines the cognitive and neural processes subserving speech perception, lexical competition, and second language acquisition, and uses machine learning techniques to decode speech information from neurophysiological data. Welcome, McCall!
WRAP Lab members presented their work at Psychonomics and APCAM 2020.
Laura Getz and Joe Toscano published a review paper in WIREs Cognitive Science that discusses recent work demonstrating two key principles in speech perception: (1) gradiency (i.e., listeners are highly sensitive to fine-grained acoustic differences in the speech signal), and (2) interactivity (i.e., higher-level linguistic information feeds back down to influence early perception). The paper describes how recent work investigating the time-course of speech perception has provided evidence for both gradiency and interactivity in spoken language processing.
Anne Marie Crinnion, Beth Malmskog, and Joe Toscano recently published a paper in Psychonomic Bulletin & Review that uses techniques from graph theory to identify acoustic cues for speech sound categorization. This approach allows us to find a balance between models that are too complex (e.g., including all possible cues) and models that are too simple (i.e., not including enough cues to account for differences between talkers). This work is the result of Anne Marie's research in the lab during her undergraduate studies, which was supported by a Herchel Smith Undergraduate Research Fellowship from Harvard University.
Please contact us if you would like to learn more about our research, request a copy of a paper, are interested in joining the lab, or have any other questions. Our email address is firstname.lastname@example.org.
Scheduled to participate in a study? The main lab is located in Tolentine Hall, Room 231. Some of our experiments also take place in other labs in our building. If you're scheduled to participate in an experiment but aren't sure where to go, please come to the main lab and a research assistant will meet you there!