Audio samples from "Residual Adapters for Few-Shot Text-to-Speech Speaker Adaptation"

Paper: arXiv

Authors: Nobuyuki Morioka, Heiga Zen, Nanxin Chen, Yu Zhang, Yifan Ding

Abstract: Adapting a neural text-to-speech (TTS) model to a target speaker typically involves fine-tuning most, if not all, of the parameters of a pretrained multi-speaker backbone model. However, serving hundreds of fine-tuned neural TTS models is expensive, as each requires a significant memory footprint and separate computational resources (e.g., accelerators, memory). To scale speaker-adapted neural TTS voices to hundreds of speakers while preserving naturalness and speaker similarity, this paper proposes a parameter-efficient few-shot speaker adaptation approach in which the backbone model is augmented with trainable lightweight modules called residual adapters. This architecture allows the backbone model to be shared across different target speakers. Experimental results show that the proposed approach achieves naturalness and speaker similarity competitive with full fine-tuning while requiring only ~0.1% of the backbone model parameters per speaker.

Click here for more from the Tacotron team.
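
The core idea is to insert a small trainable bottleneck module into a frozen backbone, with a residual connection so the backbone's computation is preserved. Below is a minimal sketch of such a module in JAX. It illustrates the general pattern only, not the paper's exact design: the nonlinearity, initialization scale, and zero-initialized up-projection are common adapter conventions assumed here.

```python
import jax
import jax.numpy as jnp

def init_adapter(key, d_model, r):
    """Per-speaker adapter parameters: a down-projection to bottleneck
    dimension r and an up-projection back to the model width d_model.
    Zero-initializing the up-projection makes the adapter start out as
    an identity mapping (an assumed, common convention)."""
    k_down, _ = jax.random.split(key)
    return {
        "w_down": jax.random.normal(k_down, (d_model, r)) * 0.02,
        "b_down": jnp.zeros((r,)),
        "w_up": jnp.zeros((r, d_model)),
        "b_up": jnp.zeros((d_model,)),
    }

def apply_adapter(params, x):
    """x: activations of shape (..., d_model) from a frozen backbone
    layer. Only the bottleneck branch is speaker-specific; the residual
    path keeps the shared backbone computation intact."""
    h = jnp.tanh(x @ params["w_down"] + params["b_down"])
    return x + h @ params["w_up"] + params["b_up"]
```

Because only these bottleneck weights are trained and stored per speaker, one shared backbone can serve many voices, which is what makes the ~0.1% per-speaker figure in the abstract possible.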

Evaluation 1: Residual adapters trained w/ and w/o backbone training data

Random samples from the models listed in Table 1 in the paper.

Natural Speech | w/o backbone data | w/ backbone data

[Audio samples 1–6 for each condition.]

Evaluation 2: Effect of inserting residual adapters of different sizes into the decoder and the variance adapters

Random samples from the models listed in Table 2 in the paper. 'rd' and 'rv' denote the bottleneck dimensions of the residual adapters in the decoder and the variance adapters, respectively.

Natural Speech | rd=16 | rd=128 | rd=16, rv=8 | rd=128, rv=64

[Audio samples 1–6 for each condition.]
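
Since the adapter is just two projections plus biases, its parameter count is linear in the bottleneck dimension, which is what makes rd and rv the main cost knobs in this evaluation. A quick back-of-the-envelope calculation (the hidden width d_model below is a hypothetical value, not taken from the paper):

```python
def adapter_param_count(d_model, r):
    """Two projection matrices (d_model*r each) plus both bias vectors."""
    return 2 * d_model * r + r + d_model

d_model = 1024  # hypothetical backbone width, for illustration only
for r in (8, 16, 64, 128):
    print(f"r={r:3d}: {adapter_param_count(d_model, r):,} params per adapter")
```

Moving from rd=16 to rd=128 therefore multiplies the per-adapter cost by roughly eight, while remaining tiny relative to a backbone with tens of millions of parameters.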

Evaluation 3: Effect of varying the amount of target speaker data

Random samples from the models listed in Table 3 in the paper.

Natural Speech | 30 min | 5 min | 1 min

[Audio samples 1–6 for each condition.]

Evaluation 4: Comparison against fine-tuning

Random samples from the models listed in Table 4 in the paper.

Natural Speech | Residual Adapters | Fine-tuning (30 min + backbone data) | Fine-tuning (30 min) | Fine-tuning (5 min) | Fine-tuning (1 min)

[Audio samples 1–6 for each condition.]
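
The operative difference between the systems compared here is which parameters receive gradient updates during adaptation. A minimal sketch, assuming parameters are kept in a flat name-to-array dict where per-speaker adapter weights carry "adapter" in their names (a naming convention invented for this example):

```python
def adaptation_step(params, grads, lr=1e-3, adapters_only=True):
    """One SGD update. With adapters_only=True, the shared backbone is
    frozen and only the lightweight adapter weights move; with
    adapters_only=False, every parameter is updated, as in full
    fine-tuning, so each speaker ends up with a private model copy."""
    return {
        name: (p - lr * grads[name])
        if (not adapters_only or "adapter" in name)
        else p
        for name, p in params.items()
    }
```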

Evaluation 5: Comparison against zero-shot d-vector and few-shot fine-tuning baselines

Random samples from the models listed in Table 5 in the paper.

Natural Speech | Residual Adapters (1 min) | Residual Adapters (1 utterance) | Zero-shot d-vector (1 min) | Zero-shot d-vector (1 utterance) | Fine-tuning speaker embedding only (1 min) | Fine-tuning speaker embedding only (1 utterance)

[Audio samples 1–6 for each condition.]