Fast-VGAN: Lightweight Voice Conversion with Explicit Control of F0 and Duration Parameters

Mathilde Abrassart, Nicolas Obin, Axel Roebel

STMS Lab, IRCAM, CNRS, Sorbonne Université, Paris, France

Precise control over speech characteristics, such as pitch, duration, and speech rate, remains a significant challenge in the field of voice conversion. The ability to manipulate parameters like pitch and syllable rate is an important element of effective identity conversion, but can also be used independently for voice transformation, achieving goals that were historically addressed by vocoder-based methods. In this work, we explore a convolutional neural network-based approach that provides the means to modify fundamental frequency (F0), phoneme sequences, intensity, and speaker identity. Rather than relying on disentanglement techniques, our model is explicitly conditioned on these factors to generate mel spectrograms, which are then converted into waveforms using a universal neural vocoder. Accordingly, during inference, F0 contours, phoneme sequences, and speaker embeddings can be freely adjusted, allowing for intuitively controlled voice transformations. We evaluate our approach on speaker conversion and expressive speech tasks using both perceptual and objective metrics. The results suggest that the proposed method offers substantial flexibility while maintaining high intelligibility and speaker similarity.


Model Introduction

Fast-VGAN [1] employs a series of 2D convolutional layers tailored to process temporally aligned sequences of F0, intensity, and phoneme representations. The architecture is designed to map these high-level temporal features into a target mel-spectrogram. To enable timbre control in a multi-speaker setting, we incorporate an additional conditioning input: a speaker identity embedding. This enables the model to account for and reproduce distinct vocal timbres associated with different speakers in the training corpus.
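As a rough illustration of this design, the sketch below maps frame-aligned F0, intensity, and phoneme sequences, plus a speaker embedding broadcast over time, to a mel spectrogram through a small stack of 2D convolutions. All layer sizes, names, and the class itself are illustrative assumptions; the actual Fast-VGAN configuration is given in the paper [1].

    import torch
    import torch.nn as nn

    class ConvMelGenerator(nn.Module):
        """Minimal Fast-VGAN-style generator sketch; all sizes are
        assumptions, not the published configuration."""

        def __init__(self, n_phonemes=70, phn_dim=64, spk_dim=128, n_mels=80):
            super().__init__()
            self.phn_emb = nn.Embedding(n_phonemes, phn_dim)
            # Per-frame input: phoneme embedding + F0 + intensity + speaker embedding.
            self.proj = nn.Linear(phn_dim + 1 + 1 + spk_dim, n_mels)
            self.net = nn.Sequential(
                nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 1, kernel_size=3, padding=1),
            )

        def forward(self, f0, intensity, phonemes, spk):
            # f0, intensity: (B, T); phonemes: (B, T) int; spk: (B, spk_dim).
            T = f0.shape[1]
            feats = torch.cat(
                [self.phn_emb(phonemes),
                 f0.unsqueeze(-1),
                 intensity.unsqueeze(-1),
                 spk.unsqueeze(1).expand(-1, T, -1)], dim=-1)
            x = self.proj(feats).unsqueeze(1)   # (B, 1, T, n_mels)
            return self.net(x).squeeze(1)       # (B, T, n_mels) mel spectrogram

Because every conditioning stream is an explicit input, any of them can be replaced at inference time, which is what enables the manipulations demonstrated below.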

[Figures: Fast-VGAN architecture during the training phase and the inference phase.]

Voice Conversion

Baselines

The model is compared against two voice conversion methods: ControlVC [2] and HiFi-VC [3]. ControlVC is a zero-shot voice conversion system that provides time-varying control over pitch and speed, while HiFi-VC builds conversion on ASR-derived linguistic features and pitch tracking with a high-fidelity waveform prediction model. Note that all speakers are from the VCTK dataset [4] and were seen during training, and that conversion is performed without any additional fine-tuning.
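Under this setup, conversion amounts to re-synthesizing the source utterance's features with the target speaker's embedding. A minimal sketch using the hypothetical ConvMelGenerator above, with dummy tensors standing in for the real feature analysis:

    import torch

    model = ConvMelGenerator()
    B, T = 1, 400
    f0 = torch.rand(B, T) * 200 + 80          # source F0 contour (Hz)
    intensity = torch.rand(B, T)              # source intensity track
    phonemes = torch.randint(0, 70, (B, T))   # frame-aligned phoneme IDs
    spk_target = torch.randn(B, 128)          # embedding of a seen target speaker

    with torch.no_grad():
        mel = model(f0, intensity, phonemes, spk_target)  # source prosody, target timbre
    # wav = vocoder(mel)  # final waveform via the universal neural vocoder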

Source Speaker   Target Speaker   ControlVC   HiFi-VC   Fast-VGAN
p225-female      p241-male        [audio]     [audio]   [audio]
p227-male        p231-female      [audio]     [audio]   [audio]
p228-female      p229-female      [audio]     [audio]   [audio]
p243-male        p237-male        [audio]     [audio]   [audio]

Fast-VGAN versions

Source Speaker   Target Speaker   Fast-VGAN Amb   Fast-VGAN Dil   Fast-VGAN Amb+Dil
p225-female      p241-male        [audio]         [audio]         [audio]
p227-male        p231-female      [audio]         [audio]         [audio]
p228-female      p229-female      [audio]         [audio]         [audio]
p243-male        p237-male        [audio]         [audio]         [audio]

Pitch Shift and Time Stretching

Ambitus

Source Speaker   1/2 * Ambitus   2/3 * Ambitus   3/2 * Ambitus   2 * Ambitus
p241-male        [audio]         [audio]         [audio]         [audio]
p231-female      [audio]         [audio]         [audio]         [audio]
p229-female      [audio]         [audio]         [audio]         [audio]
p237-male        [audio]         [audio]         [audio]         [audio]

Pitch Shift

Source Speaker   1/2 * F0   2/3 * F0   3/2 * F0   2 * F0
p241-male        [audio]    [audio]    [audio]    [audio]
p231-female      [audio]    [audio]    [audio]    [audio]
p229-female      [audio]    [audio]    [audio]    [audio]
p237-male        [audio]    [audio]    [audio]    [audio]
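The Ambitus rows above rescale the range of the F0 contour around its mean, while the Pitch Shift rows transpose the whole contour. A minimal sketch of both manipulations, assuming (this page does not say) that they are applied to the F0 contour in the log domain before synthesis:

    import numpy as np

    def transform_f0(f0, ambitus_factor=1.0, shift_factor=1.0):
        """Rescale an F0 contour in Hz (0 = unvoiced).

        ambitus_factor compresses/expands the contour around its voiced
        mean (the Ambitus rows); shift_factor transposes the whole
        contour (the Pitch Shift rows, e.g. 2.0 = one octave up)."""
        f0 = np.asarray(f0, dtype=float)
        out = np.zeros_like(f0)
        voiced = f0 > 0
        if voiced.any():
            log_f0 = np.log(f0[voiced])
            mean = log_f0.mean()
            out[voiced] = np.exp(mean + ambitus_factor * (log_f0 - mean)) * shift_factor
        return out

    contour = np.array([0.0, 100.0, 150.0, 220.0, 0.0])
    flatter = transform_f0(contour, ambitus_factor=0.5)  # 1/2 * Ambitus
    octave_up = transform_f0(contour, shift_factor=2.0)  # 2 * F0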

Phoneme Dilation

Source Speaker   1/2 * Speech rate   3/4 * Speech rate   3/2 * Speech rate   2 * Speech rate
p241-male        [audio]             [audio]             [audio]             [audio]
p231-female      [audio]             [audio]             [audio]             [audio]
p229-female      [audio]             [audio]             [audio]             [audio]
p237-male        [audio]             [audio]             [audio]             [audio]
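The dilation procedure itself is not spelled out on this page; a plausible minimal sketch, assuming the speech rate is changed by resampling the frame-aligned conditioning sequences before synthesis:

    import numpy as np

    def dilate_frames(seq, rate_factor):
        """Resample a frame-aligned sequence to change the speech rate.

        rate_factor > 1 speeds speech up (fewer frames), < 1 slows it
        down. Nearest-neighbour indexing keeps phoneme IDs discrete; the
        same index map should be applied to the F0 and intensity tracks
        so that all model inputs stay aligned."""
        seq = np.asarray(seq)
        n_out = max(1, int(round(len(seq) / rate_factor)))
        idx = np.minimum((np.arange(n_out) * rate_factor).astype(int), len(seq) - 1)
        return seq[idx]

    phonemes = np.array([3, 3, 3, 7, 7, 12])
    slow = dilate_frames(phonemes, rate_factor=0.5)  # 1/2 * speech rate, 12 frames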

Expressive Voice Conversion

During training, we extend VCTK with the neutral sentences from the Expresso dataset [5]. At inference, we use expressive sentences from the Expresso speakers to assess whether the model preserves sound quality on expressive styles not seen during training.

                         Speaker 01   Speaker 02   Speaker 03   Speaker 04
Happy (original)         [audio]      [audio]      [audio]      [audio]
Happy (synthesized)      [audio]      [audio]      [audio]      [audio]
Sad (original)           [audio]      [audio]      [audio]      [audio]
Sad (synthesized)        [audio]      [audio]      [audio]      [audio]
Confused (original)      [audio]      [audio]      [audio]      [audio]
Confused (synthesized)   [audio]      [audio]      [audio]      [audio]

References

[1] M. Abrassart, N. Obin and A. Roebel, "Fast-VGAN: Lightweight Voice Conversion with Explicit Control of F0 and Duration Parameters," in Speech Synthesis Workshop (SSW), 2025.

[2] M. Chen and Z. Duan, "ControlVC: Zero-Shot Voice Conversion with Time-Varying Controls on Pitch and Speed," in Interspeech, 2023, pp. 2098-2102.

[3] A. Kashkin, I. Karpukhin and S. Shishkin, "HiFi-VC: High Quality ASR-Based Voice Conversion," in Speech Synthesis Workshop (SSW), 2023, pp. 100-105.

[4] J. Yamagishi, C. Veaux and K. MacDonald, "CSTR VCTK Corpus: English Multi-speaker Corpus for CSTR Voice Cloning Toolkit (version 0.92)," University of Edinburgh, The Centre for Speech Technology Research (CSTR), 2019.

[5] T. A. Nguyen, W.-N. Hsu, A. D'Avirro, B. Shi, I. Gat, M. Fazel-Zarani, T. Remez, J. Copet, G. Synnaeve, M. Hassid, F. Kreuk, Y. Adi and E. Dupoux, "EXPRESSO: A Benchmark and Analysis of Discrete Expressive Speech Resynthesis," in Interspeech, 2023, pp. 4823-4827.