VoiceFilter: Targeted Voice Separation by Speaker-Conditioned Spectrogram Masking

Paper: arXiv

Authors: Quan Wang *, Hannah Muckenhirn *, Kevin Wilson, Prashant Sridhar, Zelin Wu, John Hershey, Rif A. Saurous, Ron J. Weiss, Ye Jia, Ignacio Lopez Moreno. (*: Equal contribution.)

Abstract: In this paper, we present a novel system that separates the voice of a target speaker from multi-speaker signals, by making use of a reference signal from the target speaker. We achieve this by training two separate neural networks: (1) A speaker recognition network that produces speaker-discriminative embeddings; (2) A spectrogram masking network that takes both the noisy spectrogram and the speaker embedding as input, and produces a mask. Our system significantly reduces the speech recognition WER on multi-speaker signals, with minimal WER degradation on single-speaker signals.
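
To make the two-network design concrete, here is a minimal inference-time sketch. This is not the authors' code: the STFT parameters are illustrative, and `speaker_encoder` / `voicefilter_net` are placeholders for the two trained networks described in the abstract.

```python
import numpy as np
import librosa

def separate(noisy_wav, reference_wav, speaker_encoder, voicefilter_net):
    """Sketch of VoiceFilter inference; speaker_encoder and voicefilter_net
    are stand-ins for the two trained networks (assumed callables)."""
    # 1. Speaker recognition network: reference audio -> d-vector embedding.
    d_vector = speaker_encoder(reference_wav)          # e.g. shape (256,)

    # 2. Magnitude spectrogram of the noisy (multi-speaker) input.
    stft = librosa.stft(noisy_wav, n_fft=1024, hop_length=256)
    magnitude, phase = np.abs(stft), np.angle(stft)

    # 3. Masking network: (noisy spectrogram, d-vector) -> soft mask in [0, 1].
    mask = voicefilter_net(magnitude, d_vector)        # same shape as magnitude

    # 4. Apply the mask and reuse the noisy phase to recover a waveform.
    enhanced = mask * magnitude * np.exp(1j * phase)
    return librosa.istft(enhanced, hop_length=256)
```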

System architecture:

Video demo:

Random audio samples from the LibriSpeech test set

VoiceFilter model: CNN + bi-LSTM + fully connected
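
Below is a rough PyTorch skeleton of this mask-prediction network (CNN + bi-LSTM + fully connected). The layer counts, kernel sizes, and dimensions are illustrative only, not the paper's exact configuration; the d-vector is concatenated to every spectrogram frame before the bi-LSTM.

```python
import torch
import torch.nn as nn

class VoiceFilterSketch(nn.Module):
    def __init__(self, num_freq=601, d_vector_dim=256, lstm_dim=400):
        super().__init__()
        # CNN over the (time, frequency) spectrogram.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=(5, 5), padding=(2, 2)), nn.ReLU(),
            nn.Conv2d(64, 8, kernel_size=(5, 5), padding=(2, 2)), nn.ReLU(),
        )
        # bi-LSTM over time, conditioned on the d-vector at every frame.
        self.lstm = nn.LSTM(8 * num_freq + d_vector_dim, lstm_dim,
                            batch_first=True, bidirectional=True)
        # Fully connected layers emit a soft mask per time-frequency bin.
        self.fc = nn.Sequential(
            nn.Linear(2 * lstm_dim, 600), nn.ReLU(),
            nn.Linear(600, num_freq), nn.Sigmoid(),
        )

    def forward(self, spec, d_vector):
        # spec: (batch, time, freq), d_vector: (batch, d_vector_dim)
        x = self.cnn(spec.unsqueeze(1))                       # (B, 8, T, F)
        x = x.permute(0, 2, 1, 3).flatten(2)                  # (B, T, 8*F)
        dvec = d_vector.unsqueeze(1).expand(-1, x.size(1), -1)
        x, _ = self.lstm(torch.cat([x, dvec], dim=2))         # (B, T, 2*lstm_dim)
        return self.fc(x)                                     # mask in [0, 1]
```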

Apply VoiceFilter on noisy audio (2 speakers)

Meaning of the columns in the table below:

  1. The noisy audio input to the VoiceFilter. It is generated by summing the clean audio with an interference audio from another speaker (a minimal mixing sketch appears below the column headers).
  2. The output from the VoiceFilter.
  3. The reference audio from which we extract the d-vector. The d-vector is another input to the VoiceFilter. This audio comes from the same speaker as the clean audio.
  4. The clean audio, which is the ground truth.

Noisy audio input | Enhanced audio output | Reference audio for d-vector | Clean audio (ground truth)
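
A minimal sketch of how such a noisy input can be produced, by summing a clean utterance with an interference utterance from a different speaker. The file names and sample rate are placeholders, not part of the released data.

```python
import librosa
import numpy as np
import soundfile as sf

# Load the clean and interference utterances (paths are hypothetical).
clean, sr = librosa.load("clean_utterance.wav", sr=16000)
interference, _ = librosa.load("interference_utterance.wav", sr=16000)

# Trim to a common length and sum the two waveforms.
length = min(len(clean), len(interference))
noisy = clean[:length] + interference[:length]

# Normalize to avoid clipping and write the mixture.
sf.write("noisy_input.wav", noisy / np.max(np.abs(noisy)), sr)
```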

Apply VoiceFilter on clean audio (single speaker)

Meaning of the columns in the table below:

  1. The clean audio, which we feed as the input to the VoiceFilter.
  2. The output from the VoiceFilter.
  3. The reference audio from which we extract the d-vector. The d-vector is another input to the VoiceFilter. This audio comes from the same speaker as the clean audio.

Clean audio input | Enhanced audio output | Reference audio for d-vector

Dataset information

For training and evaluating our VoiceFilter models, we use the VCTK and LibriSpeech datasets.

Here we provide the training/testing split as CSV files. Each line of a CSV file is a tuple of three utterance IDs:

(clean utterance, utterance for computing d-vector, interference utterance)

  • VCTK training and testing
  • LibriSpeech training and testing
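
For example, a short sketch of reading one of these CSV files, assuming the file name shown here and that each row holds exactly the three utterance IDs in the order stated above:

```python
import csv

# "librispeech_train.csv" is a placeholder name for one of the CSV files above.
with open("librispeech_train.csv", newline="") as f:
    for clean_id, dvector_id, interference_id in csv.reader(f):
        print(clean_id, dvector_id, interference_id)
```
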
Media coverage

  • VentureBeat
  • Tproger
  • 机器之心
  • 新智元