Authors: Sungjae Cho, Sejik Park, Tae-Ho Kim, Soo-Young Lee
Abstract:
Most recent emotional speech synthesizers have been trained on large amounts of data.
These systems require a sufficient number of recordings of each emotion for every speaker.
Acquiring emotional speech is more expensive than acquiring neutral speech because expressing
natural emotional utterances in a voice recording environment demands professional acting ability.
Thus, it would be economical and beneficial to transfer learned emotional prosody to a neutral voice.
We demonstrate that our system can learn to produce the emotional speech of multiple speakers from
their emotional recordings, and can transfer emotional prosody to the voice of a speaker who provides
only neutral speech. Our system is a neural network architecture that synthesizes speech directly from
text together with emotion and speaker identifiers. This architecture consists of two main components:
a modified Tacotron 2 and the original WaveGlow.
Tacotron 2 is a recurrent sequence-to-sequence network that maps character embeddings to
mel-spectrograms; WaveGlow is a vocoder that synthesizes time-domain waveforms from those
spectrograms. The modified Tacotron 2 is trained to synthesize speech from text conditioned
on emotion and speaker: emotion and speaker encodings are injected into the decoder of
Tacotron 2. This allows the system to learn to synthesize emotional speech not only for
speakers with emotional recordings but also for a speaker who has none.
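As a rough illustration of this conditioning, the sketch below shows one way emotion and speaker embeddings could be injected into a Tacotron 2 decoder step. This is a minimal PyTorch-style sketch under our own assumptions; all module names and dimensions are ours, not taken from the actual implementation.

```python
import torch
import torch.nn as nn

class ConditionedDecoderStep(nn.Module):
    """Sketch: one Tacotron 2 decoder step whose input is augmented with
    learned speaker and emotion embeddings (all names/sizes are assumptions)."""

    def __init__(self, n_speakers=3, n_emotions=5,
                 spk_dim=16, emo_dim=16, prenet_dim=256,
                 context_dim=512, rnn_dim=1024):
        super().__init__()
        self.speaker_emb = nn.Embedding(n_speakers, spk_dim)
        self.emotion_emb = nn.Embedding(n_emotions, emo_dim)
        # The decoder LSTM sees [prenet output ; attention context ;
        # speaker embedding ; emotion embedding] at every step.
        self.rnn = nn.LSTMCell(prenet_dim + context_dim + spk_dim + emo_dim,
                               rnn_dim)

    def forward(self, prenet_out, attn_context, speaker_id, emotion_id, state):
        cond = torch.cat([self.speaker_emb(speaker_id),
                          self.emotion_emb(emotion_id)], dim=-1)
        rnn_in = torch.cat([prenet_out, attn_context, cond], dim=-1)
        return self.rnn(rnn_in, state)  # new (hidden, cell) state
```

Because the speaker and emotion embeddings are separate and shared across all training utterances, any speaker embedding can in principle be combined with any emotion embedding at inference time.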
In this demo, the audience can interactively enter any sentence into the speech synthesis system.
Additionally, speech synthesis markup language (SSML) has been incorporated to easily control the
prosody of the input text. With the provided SSML, the audience can manipulate emotion and speaker
as well as three basic components for fine-tuning: rate, volume, and pitch. These three components
are controllable at the character level. The position of each character in the mel-spectrogram is
estimated from the attention weights, i.e., from which characters are most strongly attended when
each mel-spectrogram frame is generated.
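The sketch below is our reconstruction of that idea, not the authors' code: each mel frame is assigned to its most-attended input character, and the contiguous frames belonging to one character give the span where character-level rate, volume, and pitch edits would apply.

```python
import numpy as np

def char_spans_from_attention(alignment):
    """alignment: (n_mel_frames, n_chars) attention weights from the decoder.
    Returns {char_index: (first_frame, last_frame)}, assigning each mel frame
    to its most-attended character (a simple heuristic, our assumption)."""
    frame_to_char = alignment.argmax(axis=1)   # dominant character per frame
    spans = {}
    for frame, char in enumerate(frame_to_char):
        char = int(char)
        first, _ = spans.get(char, (frame, frame))
        spans[char] = (first, frame)           # extend the span to this frame
    return spans
```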
In this demo, we are unable to provide the interactive environment that incorporates SSML for
easily controlling the prosody of input text. We apologize that no interactive environment is provided.
All of the phrases below were unseen by our TTS model during training.
Demo 1: Multi-speaker emotional TTS
In the first demo, we demonstrate multi-speaker emotional TTS.
Through our system, you can synthesize five emotional voices (neutral,
amused, angry, disgusted, and sleepy) for two female speakers, B and J.
The training data contained both neutral and emotional speech recordings
of these two speakers; this is the major difference from the second demo.
We play three examples for each emotion-speaker pair.
Pay attention to how the audio differs across emotions and speakers.
For each example, you will first hear the neutral voice, and then the
emotional voice.
Speakers: B and J. For each emotion below, every script is played in four
versions: B-Neutral, B-Emotional, J-Neutral, J-Emotional.

Emotion: Neutral vs. Amused
  Script: I'm amused. I'm really amused. [4 audio samples]
  Script: What is that? [4 audio samples]
  Script: I have first seen this in my life. [4 audio samples]

Emotion: Neutral vs. Angry
  Script: I'm angry. I'm really angry. [4 audio samples]
  Script: It makes me so upset. [4 audio samples]
  Script: What is this stupid thing? [4 audio samples]

Emotion: Neutral vs. Disgusted
  Script: I'm disgusted. I'm really disgusted. [4 audio samples]
  Script: It smells so disgusting. [4 audio samples]
  Script: It feels really strange. [4 audio samples]

Emotion: Neutral vs. Sleepy
  Script: I'm sleepy. I'm really sleepy. [4 audio samples]
  Script: It is too late. It is time to go to bed. [4 audio samples]
  Script: I didn't remember that scene. It was so bored. [4 audio samples]
Demo 2: Emotional TTS spoken by a neutral speaker
In the second demo, we demonstrate emotional TTS spoken by a neutral speaker.
Through our system, you can synthesize the same five emotional voices for a
third female speaker, L.
The training data contained only neutral speech recordings of speaker L.
Our system can nevertheless generate emotional speech for speaker L because
emotional prosody is transferred from the other two speakers by jointly
learning their emotional speech.
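A hypothetical inference call sketches why this transfer is possible under the conditioning scheme described in the abstract; the function and variable names below are ours, not the actual API, and `tacotron2` and `waveglow` stand for the trained models.

```python
SPEAKER_IDS = {"B": 0, "J": 1, "L": 2}   # assumed ID assignment
EMOTION_IDS = {"neutral": 0, "amused": 1, "angry": 2,
               "disgusted": 3, "sleepy": 4}

# Speaker L contributed only neutral training audio, but her speaker
# embedding can be paired with an emotion embedding learned from the
# emotional recordings of speakers B and J (hypothetical API).
mel = tacotron2.infer(text="I'm amused. I'm really amused.",
                      speaker_id=SPEAKER_IDS["L"],
                      emotion_id=EMOTION_IDS["amused"])
audio = waveglow.infer(mel)  # vocode the mel-spectrogram into a waveform
```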
Let’s listen to the emotional voices of speaker L.
Speaker: L. For each emotion below, every script is played in two versions:
L-Neutral and L-Emotional.

Emotion: Neutral vs. Amused
  Script: I'm amused. I'm really amused. [2 audio samples]
  Script: What is that? [2 audio samples]
  Script: I have first seen this in my life. [2 audio samples]

Emotion: Neutral vs. Angry
  Script: I'm angry. I'm really angry. [2 audio samples]
  Script: It makes me so upset. [2 audio samples]
  Script: What is this stupid thing? [2 audio samples]

Emotion: Neutral vs. Disgusted
  Script: I'm disgusted. I'm really disgusted. [2 audio samples]
  Script: It smells so disgusting. [2 audio samples]
  Script: It feels really strange. [2 audio samples]

Emotion: Neutral vs. Sleepy
  Script: I'm sleepy. I'm really sleepy. [2 audio samples]
  Script: It is too late. It is time to go to bed. [2 audio samples]
  Script: I didn't remember that scene. It was so bored. [2 audio samples]
Acknowledgement
This work was supported by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) [2016-0-00562 (R0124-16-0002), Emotional Intelligence Technology to Infer Human Emotion and Carry on Dialogue Accordingly], and by the Ministry of Culture, Sports and Tourism (MCST) and the Korea Creative Content Agency (KOCCA) in the Culture Technology (CT) Research & Development Program 2019.