Articulatory Phonetics Informed Controllable Expressive Speech Synthesis
Zehua Kcriss Li1, Meiying Melissa Chen1, Yi Zhong1, Pinxin Liu2, Zhiyao Duan1
zehua.li@rochester.edu, meiying.chen@rochester.edu, yi.zhong@rutgers.edu, pliu23@ur.rochester.edu, zhiyao.duan@rochester.edu
Audio Information Research Lab, University of Rochester

TL;DR: We introduce a new framework for expressive speech synthesis inspired by articulatory phonetics. We created a high-quality dataset, GTR-Voice, covering 125 unique GTR combinations, which enables precise control over speech synthesis; we validated this control through automatic classification and listening tests. The dataset and models are open-source.
Accepted to Interspeech 2024
Expressive speech synthesis aims to generate speech that captures a wide range of para-linguistic features, including emotion and articulation, though current research primarily emphasizes emotional aspects over the nuanced articulatory features mastered by professional voice actors. Motivated by this gap, we explore expressive speech synthesis through the lens of articulatory phonetics. Specifically, we define a framework with three dimensions: Glottalization, Tenseness, and Resonance (GTR), to guide synthesis at the voice-production level. With this framework, we record a high-quality speech dataset named GTR-Voice, featuring 20 Chinese sentences articulated by a professional voice actor across 125 distinct GTR combinations. We verify the framework and GTR annotations through automatic classification and listening tests, and demonstrate precise controllability along the GTR dimensions on two fine-tuned expressive TTS models. We open-source the dataset and TTS models.
The GTR-Voice dataset contains 3.6 hours of speech audio, comprising 2,500 clips with an average duration of 6 seconds. All speech was recorded by a professional commercial voice artist, a 30-year-old native speaker of Mandarin Chinese. The scripts are 20 utterances drawn from the Global TIMIT Mandarin Chinese corpus, in which 50 speakers recite 120 sentences selected from the Chinese Gigaword Fifth Edition for broad phonetic coverage. The dataset is organized by 5 Glottalization labels, 5 Tenseness labels, and 7 Resonance labels. Note that the label 0-Voicelessness in Glottalization is interdependent with the label 0-Whisper in Resonance: the two occur only together, so of the 5 × 5 × 7 = 175 nominal combinations, only 4 × 5 × 6 + 1 × 5 × 1 = 125 distinct GTR types are valid. The audio files are monaural WAV files with a 48 kHz sampling rate and 24-bit depth.
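To make the counting concrete, the following minimal Python sketch enumerates the valid GTR combinations under the interdependence constraint described above. The numeric indices are placeholders (only 0-Voicelessness and 0-Whisper are named by the dataset description), and the sketch assumes the constraint is exactly that Voicelessness and Whisper occur only together.

```python
from itertools import product

# Label inventories from the dataset description: 5 Glottalization,
# 5 Tenseness, and 7 Resonance labels. Index 0 in Glottalization is
# Voicelessness and index 0 in Resonance is Whisper; all other index
# assignments here are placeholders, not the dataset's label names.
GLOTTALIZATION = range(5)  # 0 = Voicelessness
TENSENESS = range(5)       # zero-based indexing assumed
RESONANCE = range(7)       # 0 = Whisper

def is_valid(g: int, t: int, r: int) -> bool:
    """Voicelessness (G=0) and Whisper (R=0) occur together or not at all."""
    return (g == 0) == (r == 0)

valid = [gtr for gtr in product(GLOTTALIZATION, TENSENESS, RESONANCE)
         if is_valid(*gtr)]
assert len(valid) == 125  # 4*5*6 voiced + 1*5*1 whispered = 125
```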
Navigate the GTR-Voice dataset using the interactive 3D plot below, which displays all 125 unique GTR combinations. Each marker shape corresponds to a Resonance label, the color indicates the Glottalization level, and the size represents Tenseness. Click a data point to listen to a sample, and hover over it to see its GTR labels. Use the legend on the right to filter by Resonance: click an entry once to hide or show that category, and double-click to isolate it or restore all others.
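For readers who want to reproduce a similar visualization offline, here is a sketch using Plotly Express. It assumes a hypothetical metadata file gtr_voice_metadata.csv with one row per clip and numeric glottalization, tenseness, and resonance columns; the project page's actual plot may be built differently.

```python
import pandas as pd
import plotly.express as px

# Hypothetical per-clip metadata: numeric GTR labels, one row per clip.
df = pd.read_csv("gtr_voice_metadata.csv")  # assumed file and column names

fig = px.scatter_3d(
    df,
    x="glottalization",
    y="tenseness",
    z="resonance",
    symbol="resonance",          # marker shape encodes Resonance
    color="glottalization",      # color encodes Glottalization level
    size=df["tenseness"] + 1,    # size encodes Tenseness (+1 so label 0 still renders)
    hover_data=["glottalization", "tenseness", "resonance"],
)
fig.show()
```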
| GTR Label | Style Reference Audio | FastPitch | StyleTTS |
|---|---|---|---|

(Table body: for each GTR label, a style reference recording and the corresponding samples synthesized by the fine-tuned FastPitch and StyleTTS models, playable on the project page.)