Research

phonetics. phonology. neuroscience.

I am a linguist and neuroscientist studying the organization and representation of human speech sounds in the brain. I am interested in the perception, production, and processing of linguistic prosody (e.g. stress, intonation) and other suprasegmentals such as tone. My research integrates insights from linguistic theory and typology with computational and experimental methods from laboratory phonology, neuroscience, and psychology. As a linguist, I work to model the diversity of phonological patterns found in the languages of the world, and as a neuroscientist, I take an integrative, systems-level approach to understanding the neural mechanisms underpinning spoken language processing.

Recent Projects

Modelling Suprasegmental Phonology
in the Gradient Symbolic Computation framework


This work is part of a larger collaboration with Matt Goldrick, Eric Bakovic, Eric Meinhart, and Adam McCollum to model phonological patterns in Gradient Symbolic Computation (GSC), a framework that combines discrete symbolic structure with continuous, neural-network-style computation.
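To give a flavor of the framework, in GSC a position in a linguistic structure can host a weighted blend of symbols rather than a single categorical choice, and well-formedness is scored by a numeric harmony function over those gradient activations. Here is a minimal toy sketch; the two-symbol inventory, activation levels, and constraint weights are all invented for illustration and are not drawn from the project itself:

```python
import numpy as np

# Toy two-symbol inventory, encoded as one-hot vectors (invented for illustration).
symbols = {"t": np.array([1.0, 0.0]), "d": np.array([0.0, 1.0])}

# A gradient blend: the segment is mostly /t/ with partial /d/ activation,
# rather than a categorical choice between the two.
blend = 0.8 * symbols["t"] + 0.3 * symbols["d"]

# Harmony scores well-formedness: W penalizes co-activating /t/ and /d/,
# b encodes per-symbol biases. (All weights are invented.)
W = np.array([[0.0, -1.0],
              [-1.0, 0.0]])
b = np.array([0.5, 0.2])

def harmony(a: np.ndarray) -> float:
    """Quadratic harmony H(a) = 1/2 a.W.a + b.a; higher = more well-formed."""
    return float(0.5 * a @ W @ a + b @ a)

print(harmony(blend))         # harmony of the gradient blend
print(harmony(symbols["t"]))  # discrete /t/, for comparison
```

The interesting cases for the collaboration are exactly those where a gradient blend scores differently from either discrete alternative.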

Prosodic Organization of Hip-Hop Flow


The goal of this project is to develop a comprehensive framework for the study of prosodic organization in rhythmically complex rap verse. The relationship between linguistic rhythm and musical rhythm in hip-hop varies across tracks and emcees, and this project aims to determine whether those differences can be attributed to differences in how prosodic units (e.g. syllables, stresses) align with points of metric prominence in the musical accompaniment (i.e. 'strong beats,' beats 1 & 3).
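One way to operationalize this alignment question is to measure each stressed syllable's signed microtiming offset to the nearest strong beat. A minimal sketch, with invented onset times and tempo:

```python
import numpy as np

# Invented data: onsets of stressed syllables (seconds) and the track's tempo.
stress_times = np.array([0.02, 0.51, 1.30, 1.95])
bpm = 95.0
beat = 60.0 / bpm                       # one beat in seconds
strong_beats = np.arange(8) * 2 * beat  # beats 1 & 3 of each 4/4 bar

# Signed offset from each stress to its nearest strong beat (microtiming):
# negative = ahead of the beat, positive = behind it ("laid back").
diffs = stress_times[:, None] - strong_beats[None, :]
nearest = diffs[np.arange(len(stress_times)), np.abs(diffs).argmin(axis=1)]
print(nearest.round(3))
```

Aggregating these offsets per emcee or per track then makes stylistic differences in delivery directly comparable.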

To adequately account for microtiming in styles of rapped verse that are more speech-like than song-like in their delivery, the verses analysed in this project are annotated in Praat TextGrids using visual spectrogram cues. The verses under analysis are delivered by emcees JAY-Z, Missy Elliott, Kanye West, and Nicki Minaj.
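A minimal sketch of how such annotations might be read in Python, assuming the third-party textgrid package; the filename and tier names are hypothetical placeholders for whatever layout the annotations actually use:

```python
from textgrid import TextGrid  # third-party: pip install textgrid

# "verse.TextGrid" and the tier names below are hypothetical placeholders.
tg = TextGrid.fromFile("verse.TextGrid")
syllables = tg.getFirst("syllables")  # interval tier of syllable boundaries
stresses = tg.getFirst("stress")      # intervals marked e.g. "1" for stressed

# Onset times (seconds) of non-empty intervals.
syllable_onsets = [iv.minTime for iv in syllables if iv.mark.strip()]
stress_onsets = [iv.minTime for iv in stresses if iv.mark.strip() == "1"]
```

The resulting onset lists can then be compared against the beat grid as in the alignment sketch above.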

Testing the Vocal-Vagal Hypothesis


Vocal cues such as pitch range, pitch variability, speech rate, and speech rhythm can carry information about a speaker's emotional state or pragmatic intent independent of the semantic meaning of the words spoken. Listeners are thought to detect and respond emotionally to these prosodic cues automatically. When the production and reception of these cues are impaired, as in autism, deficits in social engagement result. A neural circuit has recently been identified that could explain how emotional states regulate these aspects of vocal prosody. This vocal-vagal circuit has the connectivity to route emotional information through a brainstem area called the nucleus ambiguus and, by way of the vagus nerve, to the muscles of vocal production. However, a direct correlation between vagal activity and vocal prosody remains to be established, in part because of the lack of robust methods for measuring either one.
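For orientation, one standard time-domain index often used as a coarse proxy for vagal tone is RMSSD over inter-beat intervals. The sketch below shows that baseline measure only; it is not the project's improved algorithm, and the interval values are invented:

```python
import numpy as np

def rmssd(ibi_ms: np.ndarray) -> float:
    """Root mean square of successive differences of inter-beat intervals (ms).

    A standard heart-rate-variability index commonly used as a rough
    proxy for vagal (parasympathetic) activity.
    """
    diffs = np.diff(ibi_ms)
    return float(np.sqrt(np.mean(diffs ** 2)))

# Invented inter-beat intervals (ms) for illustration.
print(rmssd(np.array([812.0, 845.0, 798.0, 830.0, 821.0])))
```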

To approach this problem, we use machine learning to identify acoustic features in vocal recordings that correlate with emotional content according to three different criteria: prosody as scored by linguists, prosody as clinically scored in autism cases, and vagal activity as measured from cardiopulmonary data using a new, improved algorithm. We are further testing whether these acoustic features induce mirroring emotional responses (corresponding changes in vagal activity) in listeners. We predict that singing will increase vagal activity via proprioceptive feedback within this circuit, and one of the project's major goals is to test this prediction. If it is confirmed, we will use singing to increase vagal activity in subjects and test for the predicted increase in prosodic vocal features. Together, these studies will test the proposed role of the vocal-vagal neural pathway in emotional communication, social co-regulation, and emotional self-regulation.
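As a rough illustration of the acoustic side of such a pipeline (not the project's actual feature set), pitch-based prosodic features can be extracted with off-the-shelf tools such as librosa; the filename and F0 bounds below are placeholders:

```python
import librosa
import numpy as np

# "speech.wav" is a placeholder; fmin/fmax roughly bracket speech F0.
y, sr = librosa.load("speech.wav", sr=None)
f0, voiced_flag, _ = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C5"), sr=sr
)
f0 = f0[~np.isnan(f0)]  # keep voiced frames only

features = {
    "pitch_range_hz": float(f0.max() - f0.min()),
    "pitch_variability_hz": float(f0.std()),
    "voicing_fraction": float(np.mean(voiced_flag)),  # crude rhythm proxy
}
print(features)
```

Per-recording features like these could then be correlated with an HRV index such as the RMSSD above, or fed to a standard classifier trained against the linguistic or clinical prosody scores.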

This project is headed by Pamela Reinagel and is undertaken in collaboration with Tim Gentner, Eric Bakovic, and Linda Hill. The description presented here is adapted in part from the project abstract.