The technique known as "phase vocoding" means breaking a signal apart into components according to the frequencies present in the original signal.
In this realization, which is based on the presentation in F. Richard Moore's book Elements of Computer Music (Prentice-Hall, 1990), the signal is first split into windows of suitable size and then Fourier-transformed into the frequency domain. The synthesis side then reassembles the signal from the frequency-amplitude pairs according to the user's instructions.
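As a rough illustration of the analysis half of this idea (not SPKit code), the sketch below applies a Hann window to a single frame and uses a naive DFT to produce an amplitude/phase pair for each frequency bin. A real phase vocoder would use an FFT over many overlapping frames and track the phase differences between them; all names in the sketch are illustrative.

#include <cmath>
#include <complex>
#include <cstddef>
#include <vector>

struct Bin { double amplitude; double phase; };  // one frequency component

// Analyse a single frame of N samples into N/2+1 frequency bins.
std::vector<Bin> analyseFrame(const std::vector<double>& frame)
{
    const double PI = 3.14159265358979323846;
    const std::size_t N = frame.size();
    std::vector<Bin> bins(N / 2 + 1);
    for (std::size_t k = 0; k < bins.size(); ++k) {
        std::complex<double> sum(0.0, 0.0);
        for (std::size_t n = 0; n < N; ++n) {
            // periodic Hann window applied sample by sample before the transform
            double w = 0.5 - 0.5 * std::cos(2.0 * PI * n / N);
            double angle = -2.0 * PI * k * n / N;
            sum += frame[n] * w * std::complex<double>(std::cos(angle), std::sin(angle));
        }
        bins[k].amplitude = std::abs(sum);   // magnitude of the frequency component
        bins[k].phase     = std::arg(sum);   // phase of the frequency component
    }
    return bins;
}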
One of the basic applications of a phase vocoder is the altering of the time-scale of a file.
A new SPKitTimeStretch object is declared with

    SPKitTimeStretch stretcher;

The connection to the sample-based input object and the setting of the stretch factor are made with

    stretcher.setInputAndStretchFactor(&reader, stretch);
The stretch factor can be any positive value; typical values are 0.25, 0.5, 2.0 and 4.0, for instance. A stretch factor of 2.0 plays the signal back at half speed, making the resulting file twice as long as the original.
Here's an example.
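A minimal sketch of this usage is given below. Only the SPKitTimeStretch declaration and the setInputAndStretchFactor() call come from the text above; the header name, the reader and writer classes, and the final write call are hypothetical placeholders standing in for whatever sample-based input and output objects the rest of the toolkit provides.

#include "SPKitTimeStretch.h"          // assumed header name

int main()
{
    SampleFileReader reader("input.wav");   // hypothetical sample-based input object
    double stretch = 2.0;                   // 2.0 = half speed, output twice as long

    SPKitTimeStretch stretcher;
    stretcher.setInputAndStretchFactor(&reader, stretch);

    SampleFileWriter writer("output.wav");  // hypothetical output object
    writer.writeAll(stretcher);             // hypothetical: pull stretched samples and write them
    return 0;
}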
SPKitTimeStretch consists of the following objects:
The phase vocoder also uses the classes:
Mikko Rinne / University of Helsinki / mikko.rinne@helsinki.fi