Paper: arXiv
Authors: Nobuyuki Morioka, Heiga Zen, Nanxin Chen, Yu Zhang, Yifan Ding
Abstract: Adapting a neural text-to-speech (TTS) model to a target speaker typically involves fine-tuning most, if not all, of the parameters of a pretrained multi-speaker backbone model. However, serving hundreds of fine-tuned neural TTS models is expensive, as each requires a significant footprint and separate computational resources (e.g., accelerators, memory). To scale speaker-adapted neural TTS voices to hundreds of speakers while preserving naturalness and speaker similarity, this paper proposes a parameter-efficient few-shot speaker adaptation method in which the backbone model is augmented with trainable lightweight modules called residual adapters. This architecture allows the backbone model to be shared across different target speakers. Experimental results show that the proposed approach achieves competitive naturalness and speaker similarity compared to full fine-tuning, while requiring only ~0.1% of the backbone model parameters per speaker.
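To make the adapter idea concrete, here is a minimal NumPy sketch of a bottleneck residual adapter of the kind the abstract describes: a frozen backbone activation is projected down to a small bottleneck dimension, passed through a non-linearity, projected back up, and added to its input. The tanh non-linearity, the zero-initialised up-projection, and all dimensions are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def residual_adapter(h, w_down, b_down, w_up, b_up):
    """Bottleneck residual adapter applied to a frozen backbone activation h.

    h: (T, d) backbone hidden states; only the adapter weights are trainable.
    """
    z = np.tanh(h @ w_down + b_down)  # down-project: (T, d) -> (T, r)
    return h + z @ w_up + b_up        # up-project and add residual: back to (T, d)

# Illustrative sizes (not from the paper): backbone width d, bottleneck r.
d, r, T = 512, 16, 100
rng = np.random.default_rng(0)
h = rng.standard_normal((T, d))
w_down = 0.01 * rng.standard_normal((d, r))
b_down = np.zeros(r)
w_up = np.zeros((r, d))  # zero init: the adapter starts out as an identity mapping
b_up = np.zeros(d)

out = residual_adapter(h, w_down, b_down, w_up, b_up)
n_params = w_down.size + b_down.size + w_up.size + b_up.size  # 2*d*r + d + r
print(out.shape, n_params)  # (100, 512) 16912
```

Because only these adapter weights are trained for each target speaker while the backbone stays shared and frozen, the per-speaker footprint remains tiny (the paper reports roughly 0.1% of the backbone parameters).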
Random samples from the models listed in Table 1 in the paper.
[Audio samples 1–6 for: Natural Speech | w/o backbone data | w/ backbone data]
Random samples from the models listed in Table 2 in the paper. 'rd' and 'rv' denote the bottleneck dimensions of the residual adapters in the decoder and the variance adapters, respectively (a parameter-count sketch follows the samples).
[Audio samples 1–6 for: Natural Speech | rd=16 | rd=128 | rd=16, rv=8 | rd=128, rv=64]
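As a rough illustration of how the bottleneck dimension trades adapter capacity against per-speaker footprint, the snippet below counts the trainable weights a single bottleneck adapter (down-projection, up-projection, and biases) would add for different values of rd. The backbone hidden width of 512 is an assumed, illustrative value, not a dimension taken from the paper.

```python
# Per-adapter trainable parameters: 2*d*r + d + r (two projections plus biases).
d = 512  # assumed backbone hidden width, for illustration only
for r in (8, 16, 64, 128):
    print(f"r={r}: {2 * d * r + d + r} parameters")
```

Larger rd (and rv) gives the adapter more capacity, but the per-speaker parameter count grows linearly with the bottleneck dimension.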
Random samples from the models listed in Table 3 in the paper.
[Audio samples 1–6 for: Natural Speech | 30 min | 5 min | 1 min]
Random samples from the models listed in Table 4 in the paper.
[Audio samples 1–6 for: Natural Speech | Residual Adapters | Fine-tuning (30 min + backbone data) | Fine-tuning (30 min) | Fine-tuning (5 min) | Fine-tuning (1 min)]
Random samples from the models listed in Table 5 in the paper.
[Audio samples 1–6 for: Natural Speech | Residual Adapters (1 min) | Residual Adapters (1 utterance) | Zero-shot d-vector (1 min) | Zero-shot d-vector (1 utterance) | Fine-tuning speaker embedding only (1 min) | Fine-tuning speaker embedding only (1 utterance)]