VoiceFilter: Targeted Voice Separation by Speaker-Conditioned Spectrogram Masking

Paper: arXiv

Authors: Quan Wang *, Hannah Muckenhirn *, Kevin Wilson, Prashant Sridhar, Zelin Wu, John Hershey, Rif A. Saurous, Ron J. Weiss, Ye Jia, Ignacio Lopez Moreno. (*: Equal contribution.)

Abstract: In this paper, we present a novel system that separates the voice of a target speaker from multi-speaker signals, by making use of a reference signal from the target speaker. We achieve this by training two separate neural networks: (1) A speaker recognition network that produces speaker-discriminative embeddings; (2) A spectrogram masking network that takes both noisy spectrogram and speaker embedding as input, and produces a mask. Our system significantly reduces the speech recognition WER on multi-speaker signals, with minimal WER degradation on single-speaker signals.

System architecture:

Lectures:



Video demos:



Random audio samples from LibriSpeech testing set

VoiceFilter model: CNN + bi-LSTM + fully connected
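The CNN + bi-LSTM + fully connected structure can be sketched in PyTorch (the framework of the unofficial implementation linked below). This is a minimal illustration only: the layer sizes, kernel shapes, and number of conv layers here are assumptions, not the paper's exact hyperparameters.

```python
import torch
import torch.nn as nn

class VoiceFilterSketch(nn.Module):
    """Minimal sketch of the CNN + bi-LSTM + fully-connected masking network.
    All layer sizes are illustrative assumptions, not the paper's values."""

    def __init__(self, freq_bins=257, dvector_dim=256, lstm_dim=400):
        super().__init__()
        # A single 2-D conv stands in for the paper's CNN stack.
        self.conv = nn.Conv2d(1, 8, kernel_size=(5, 5), padding=2)
        # The d-vector is concatenated to every frame before the bi-LSTM.
        self.lstm = nn.LSTM(8 * freq_bins + dvector_dim, lstm_dim,
                            batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * lstm_dim, freq_bins)

    def forward(self, noisy_mag, dvector):
        # noisy_mag: (batch, frames, freq_bins); dvector: (batch, dvector_dim)
        b, t, f = noisy_mag.shape
        x = self.conv(noisy_mag.unsqueeze(1))        # (b, 8, t, f)
        x = x.permute(0, 2, 1, 3).reshape(b, t, -1)  # (b, t, 8*f)
        d = dvector.unsqueeze(1).expand(-1, t, -1)   # repeat d-vector per frame
        x, _ = self.lstm(torch.cat([x, d], dim=-1))
        mask = torch.sigmoid(self.fc(x))             # soft mask in [0, 1]
        return mask * noisy_mag                      # enhanced magnitude

model = VoiceFilterSketch()
out = model(torch.randn(2, 10, 257).abs(), torch.randn(2, 256))
```

The key design point is visible even in this sketch: the speaker embedding conditions the separation by being tiled across time and concatenated with the spectrogram features, and the network's output is a multiplicative mask rather than the spectrogram itself.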

Apply VoiceFilter on noisy audio (2 speakers)

Meaning of the columns in the table below:

  1. The noisy audio input to the VoiceFilter. It's generated by summing the clean audio with an interference audio from another speaker.
  2. The output from the VoiceFilter.
  3. The reference audio from which we extract the d-vector. The d-vector is another input to the VoiceFilter. This audio comes from the same speaker as the clean audio.
  4. The clean audio, which is the ground truth.

Noisy audio input | Enhanced audio output | Reference audio for d-vector | Clean audio (ground truth)

Apply VoiceFilter on clean audio (single speaker)

Meaning of the columns in the table below:

  1. The clean audio, which we feed as the input to the VoiceFilter.
  2. The output from the VoiceFilter.
  3. The reference audio from which we extract the d-vector. The d-vector is another input to the VoiceFilter. This audio comes from the same speaker as the clean audio.

Clean audio input | Enhanced audio output | Reference audio for d-vector

FAQ

Can you share your code?

Unfortunately, the original VoiceFilter system depends heavily on Google's internal infrastructure and data, and thus cannot be open sourced.

However, thanks to the efforts of Seungwon Park, an unofficial third-party PyTorch implementation is available on GitHub:

https://github.com/mindslab-ai/voicefilter

In Table 4 of your paper, why is the SDR of "No VoiceFilter" so high? Does this mean your problem is very easy?

No. The high SDR is a consequence of how we prepare our training data: we trim the noisy audio to the same length as the clean audio for training.

For example, if the clean audio is 5 seconds long and the interference audio is 1 second long, the resulting noisy audio consists of 1 second of truly noisy audio and 4 seconds of effectively clean audio. Thus a large part of our "noisy" audio is actually clean, and when we compute SDR statistics, we get very large numbers (especially for the mean).

However, since we use the same volume for the clean and interference audios, if we look only at the truly noisy part of the noisy audio, the theoretical SDR should be 0 dB.

So the conclusion is: we are NOT solving an easier problem than other work.
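This arithmetic can be checked numerically. The sketch below uses Gaussian signals and an illustrative sample rate (both assumptions for illustration), matches the clean and interference volumes, and mixes them only over the 1-second overlap as described above:

```python
import numpy as np

rng = np.random.default_rng(0)
sr = 8000  # illustrative sample rate, an assumption for this sketch

def sdr(reference, estimate):
    """Signal-to-distortion ratio in dB: 10*log10(||s||^2 / ||s - s_hat||^2)."""
    noise = estimate - reference
    return 10.0 * np.log10(np.sum(reference ** 2) / np.sum(noise ** 2))

# 5 s of "clean" speech and 1 s of interference at the same RMS volume.
clean = rng.standard_normal(5 * sr)
interference = rng.standard_normal(1 * sr)
interference *= np.sqrt(np.mean(clean ** 2) / np.mean(interference ** 2))

# Trimming means only the first second of the mixture is actually noisy.
noisy = clean.copy()
noisy[: 1 * sr] += interference

# Over the whole utterance, the SDR is about 10*log10(5) ~ 7 dB...
full_sdr = sdr(clean, noisy)
# ...but on the truly noisy 1-second overlap it is about 0 dB.
overlap_sdr = sdr(clean[: 1 * sr], noisy[: 1 * sr])
```

With a longer clean tail relative to the interference, `full_sdr` grows further, which is exactly why the "No VoiceFilter" SDR in Table 4 looks high while the overlapping region remains at roughly 0 dB.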

Dataset information

For training and evaluating our VoiceFilter models, we used the VCTK and LibriSpeech datasets.

Here we provide the training/testing splits as CSV files. Each line of a CSV file is a tuple of three utterance IDs:

(clean utterance, utterance for computing d-vector, interference utterance)
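A minimal sketch of consuming these split files with the standard `csv` module. The utterance IDs below are made-up placeholders, not rows from the released files, and the resolution of IDs to audio paths is only described in comments:

```python
import csv
import io

# Hypothetical stand-in for one of the released split CSV files. Each row is
# assumed to hold three utterance IDs in this order:
# clean utterance, utterance for computing the d-vector, interference utterance.
sample_csv = io.StringIO(
    "19-198-0001,19-227-0011,26-495-0003\n"
    "27-123904-0004,27-124992-0002,32-21631-0005\n"
)

triples = [tuple(row) for row in csv.reader(sample_csv)]
for clean_id, dvector_id, interference_id in triples:
    # In a real pipeline, each ID would be resolved to an audio file; the clean
    # and interference waveforms would be summed to form the noisy input, and
    # the d-vector utterance would be fed to the speaker recognition network.
    pass
```

Note that the clean utterance and the d-vector utterance come from the same speaker but are different recordings, so the d-vector never leaks the target waveform itself.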

  • VCTK training and testing
  • LibriSpeech training and testing

Media coverage

  • VentureBeat
  • Tproger
  • 机器之心
  • 新智元
  • Medium