SpeechCodebookAnalysis
Hello and welcome to our project! Here's a brief introduction about what you can expect:
This project contains the code related to the analytical section of our research paper, "What do self-supervised speech representations encode? An analysis of languages, varieties, speaking styles and speakers", which has been accepted for Interspeech 2023 in Dublin.
As of now, this project is a placeholder. We're still in the process of polishing the final details. However, we assure you that the complete project will be up and running before the conference commences.
Stay tuned for updates and we appreciate your interest in our work. Please continue exploring this README for more details on the project setup, codebase, and how to navigate through it.
Dependencies
- Python 3.8
- fairseq
- matplotlib
- scikit-learn
- faiss-cpu
Repository structure
The repository includes a main script (`run.sh`), a folder named `local` containing Python scripts (`local/*.py`), and an example data folder (if you want to work with your own data, you need to prepare this folder accordingly).
The example data folder includes example files from the BEA corpus (Hungarian) and the GRASS corpus (Austrian German), which makes it possible to run an experiment from scratch. In general, the speech data should be stored in the folder `DATA`; the example experiment folder `BEAGR` shows how a specific speech data folder should be structured:
- `DATA/BEAGR/data_BEA_CS`
  - Various speaker (spkID1, spkID2, ...) folders
    - Various .wav or .flac files (fs=16kHz)
- `DATA/BEAGR/data_BEA_RS`
  - Various speaker (spkID1, spkID2, ...) folders
    - Various .wav or .flac files (fs=16kHz)
- `DATA/BEAGR/data_GR_CS`
  - Various speaker (spkID1, spkID2, ...) folders
    - Various .wav or .flac files (fs=16kHz)
- `DATA/BEAGR/data_GR_RS`
  - Various speaker (spkID1, spkID2, ...) folders
    - Various .wav or .flac files (fs=16kHz)
The example folder `BEAGR` (which must be placed in `DATA/`) defines one experiment and includes the subfolders `data_BEA_CS` (BEA Spontaneous Speech), `data_BEA_RS` (BEA Read Speech), `data_GR_CS` (GRASS Conversational Speech) and `data_GR_RS` (GRASS Read Speech). Please make sure that these folders are named according to the scheme `data_{corpus}_{speakingstyle}`. The audio files should have a sampling rate of 16 kHz and can be .wav or .flac files. Given this structure, and after installing/preparing all dependencies (see below), you should be able to run the experiment.
To run a specific stage of the script for a specific dataset, pass the directory where all your data is stored (here `DATA/BEAGR`), an experiment name (here `BEAGR`) and a stage number as arguments to the `./run.sh` command. For instance, to run stage 3 for the example dataset `DATA/BEAGR` with the experiment name `BEAGR`, you would use the following command:

```
./run.sh DATA/BEAGR/ BEAGR 3
```

The command automatically generates the experiment folder `exp_BEAGR`. Note that stage 0 deletes this entire experiment folder if it already exists and restarts the entire experiment by running all stages in a row (see the overview of the stages below).
Reproduction
The following steps are necessary to reproduce the experiment. First, create a conda environment and install the necessary packages. Second, clone the fairseq repository and modify the file `path.sh` to export the necessary environment variables.
Conda environment
You need to install the following packages:
```
conda create -n speechcodebookanalysis python=3.8
conda activate speechcodebookanalysis
pip install fairseq
pip install matplotlib
pip install scikit-learn
pip install faiss-cpu
```
Once the environment is created, also generate the file `conda.sh`, which could look like this:

```
source */anaconda3/etc/profile.d/conda.sh
conda activate speechcodebookanalysis
```

The file `conda.sh` is sourced at the beginning of `run.sh`.
Fairseq Repository
You need to clone the fairseq repository to another directory (e.g., `../fairseq`):

```
git clone https://github.com/facebookresearch/fairseq.git
```

Make sure to modify the file `path.sh` to export the necessary environment variables. The file `path.sh` is also sourced at the beginning of `run.sh`.
Stages
Here is a short overview of the stages:
- `stage=0`:
  - deletes the experiment folder if it exists and runs all subsequent stages in a row
- `stage=1`:
  - prepares the data given an experiment folder (e.g., `DATA/BEAGR`)
  - resulting files are stored in `exp_*/data/`
- `stage=2`:
  - counts frequencies of used codebook entries per speaker
  - if `VERBOSE` is true, this stage also generates log files
  - ATTENTION: if you need to extract features with a CPU, set `device = torch.device('cpu')` in the script `local/codebook_freqs.py` (default is `device = torch.device('cuda')`)
  - resulting files are stored in `exp_*/logs/`, `exp_*/numpy/` and `exp_*/txt/`
- `stage=3`:
  - prepares and stores a similarity matrix in the folder `exp_*/numpy/`
- `stage=4`:
  - performs a PCA on the similarity matrix and plots the PCA space
  - resulting `*.png` files are stored in `exp_*/plots/analysis/`
- `stage=5`:
  - performs k-means on the resulting PCA space and assigns classes
  - ATTENTION: the parameter `nclust` in the script `run.sh` specifies the number of allowed clusters, which should be modified depending on the task
  - resulting `*.png` files (confusion matrices) are stored in `exp_*/plots/kmeans/`
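Conceptually, stages 4 and 5 can be sketched with scikit-learn (which the environment installs). This is not the repository's code, only a minimal illustration under assumptions: the similarity matrix is mocked with random data, and `nclust` mirrors the parameter of the same name in `run.sh`.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Mock "similarity matrix": one row per speaker, two loose groups
# (stands in for the stage-3 output in exp_*/numpy/).
group_a = rng.normal(0.0, 0.1, size=(10, 50))
group_b = rng.normal(1.0, 0.1, size=(10, 50))
sim = np.vstack([group_a, group_b])

# Stage 4 (sketch): project the similarity matrix into a 2-D PCA space.
pca = PCA(n_components=2)
points = pca.fit_transform(sim)

# Stage 5 (sketch): k-means on the PCA space assigns a class per speaker.
nclust = 2  # mirrors the nclust parameter in run.sh
labels = KMeans(n_clusters=nclust, n_init=10, random_state=0).fit_predict(points)
print(labels)
```

In the actual pipeline the cluster assignments are then compared against the known corpus/speaking-style labels to produce the confusion-matrix plots.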