Audio samples from "Learning to speak fluently in a foreign language: Multilingual speech synthesis and cross-language voice cloning"

Paper: arXiv

Authors: Yu Zhang, Ron J. Weiss, Heiga Zen, Yonghui Wu, Zhifeng Chen, RJ Skerry-Ryan, Ye Jia, Andrew Rosenberg, Bhuvana Ramabhadran

Abstract: We present a multispeaker, multilingual text-to-speech (TTS) synthesis model based on Tacotron that is able to produce high quality speech in multiple languages. Moreover, the model is able to transfer voices across languages, i.e. synthesize fluent Spanish speech using an English speaker's voice, without training on any bilingual or parallel examples. Such transfer works across distantly related languages, e.g. English and Mandarin. Critical to achieving this result are: 1. using a phonemic input representation to encourage sharing of model capacity across languages, and 2. incorporating an adversarial loss term to encourage the model to disentangle its representation of speaker identity (which is perfectly correlated with language in the training data) from the speech content. Further scaling up the model by training on multiple speakers of each language, and incorporating an autoencoding input to help stabilize attention during training, results in a model which can be used to consistently synthesize intelligible speech for training speakers in all languages seen during training, and in native or foreign accents.
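The adversarial loss mentioned in point 2 is commonly implemented with a gradient reversal layer: a speaker classifier is trained on the encoder's output, but the gradient flowing back into the encoder is negated, so the encoder learns to remove speaker information from the content representation. The sketch below is a minimal, framework-free illustration of that mechanism, not the paper's actual implementation; the `lam` scaling factor is an assumed hyperparameter.

```python
import numpy as np

class GradientReversal:
    """Illustrative gradient reversal layer.

    Forward pass: identity, so the speaker classifier sees the encoder
    output unchanged. Backward pass: the gradient is negated (and scaled
    by lam), so minimizing the classifier's loss *maximizes* it from the
    encoder's point of view, encouraging speaker-independent features.
    """

    def __init__(self, lam=1.0):
        self.lam = lam  # reversal strength (assumed hyperparameter)

    def forward(self, x):
        # Identity in the forward direction.
        return x

    def backward(self, grad_output):
        # Flip and scale the gradient flowing back into the encoder.
        return -self.lam * grad_output

layer = GradientReversal(lam=0.5)
x = np.array([1.0, -2.0, 3.0])
y = layer.forward(x)                      # identical to x
g = layer.backward(np.ones_like(x))       # [-0.5, -0.5, -0.5]
```

In an autograd framework this would be written as a custom op with the same forward/backward pair, inserted between the text encoder and the speaker classifier.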

Click here for more from the Tacotron team.

Note: To obtain the best quality, we strongly recommend listening to the audio samples with headphones.

Contents

We used a proprietary dataset consisting of speech from 3 different languages: (1) 385 hours of high-quality English speech from 84 professional voice talents with accents from the United States, Great Britain, Australia, and Singapore; (2) 97 hours of Spanish speech from 3 female speakers, covering Castilian Spanish and American Spanish; (3) 68 hours of Mandarin speech from 5 speakers.



All of the phrases below are unseen during training.