SpeechCodebookAnalysis

Hello and welcome to our project! Here's a brief introduction to what you can expect:

This project contains the code related to the analytical section of our research paper, "What do self-supervised speech representations encode? An analysis of languages, varieties, speaking styles and speakers", which has been accepted for Interspeech 2023 in Dublin.

As of now, this project is a placeholder. We're still in the process of polishing the final details. However, we assure you that the complete project will be up and running before the conference commences.

Stay tuned for updates and we appreciate your interest in our work. Please continue exploring this README for more details on the project setup, codebase, and how to navigate through it.

Dependencies

  • Python 3.8
  • fairseq
  • matplotlib
  • scikit-learn
  • faiss-cpu

Repository structure

The repository includes the main script (run.sh), a folder of Python scripts (local/*.py) and an example data folder (DATA/BEAGR/). If you want to work with your own data, you need to prepare a folder that follows the specific structure described below.

The example data folder includes example files from the BEA corpus (Hungarian) and the GRASS corpus (Austrian German) which makes it possible to run an experiment from scratch. In general, the speech data to be analyzed should be stored in the folder DATA/. In case of the example experiment, this folder (DATA/BEAGR/) has the following structure:

  • DATA/BEAGR/data_BEA_CS
    • Various speaker (spkID1, spkID2, ...) folders
      • Various .wav or .flac files (fs=16kHz)
  • DATA/BEAGR/data_BEA_RS
    • Various speaker (spkID1, spkID2, ...) folders
      • Various .wav or .flac files (fs=16kHz)
  • DATA/BEAGR/data_GR_CS
    • Various speaker (spkID1, spkID2, ...) folders
      • Various .wav or .flac files (fs=16kHz)
  • DATA/BEAGR/data_GR_RS
    • Various speaker (spkID1, spkID2, ...) folders
      • Various .wav or .flac files (fs=16kHz)

The example folder BEAGR (which must be placed in DATA/) defines one experiment and includes the subfolders data_BEA_CS (BEA Spontaneous Speech), data_BEA_RS (BEA Read Speech), data_GR_CS (GRASS Conversational Speech) and data_GR_RS (GRASS Read Speech). Please make sure that these folders follow the naming scheme data_{corpus}_{speakingstyle}. The audio files should have a sampling rate of 16kHz and can be .wav or .flac files. Given this structure, and after installing/preparing all dependencies (see below), you should be able to run the experiment.
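Following this naming scheme, a custom experiment folder can be laid out with a few commands. The experiment name MYEXP, the corpus name ABC and the speaker ID below are placeholders for illustration, not names used by the repository:

```shell
# Placeholder names: MYEXP (experiment), ABC (corpus), CS/RS (speaking styles)
for style in CS RS; do
  mkdir -p "DATA/MYEXP/data_ABC_${style}/spkID1"
done
# 16 kHz .wav or .flac files then go into each speaker folder.
```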

To run a specific stage of the script for a specific dataset, provide the directory where all your data is stored (here DATA/BEAGR/), an experiment name (here BEAGR) and an integer as an argument to the ./run.sh command. For instance, to run stage 3 for the example dataset DATA/BEAGR/ with the experiment name BEAGR, you would use the following command:

./run.sh DATA/BEAGR/ BEAGR 3

The command automatically generates the experiment folder exp_BEAGR. Note that stage 0 deletes this entire experiment folder (if it exists) and restarts the entire experiment by running all stages in a row (see below an overview of the stages).

Reproduction

The following steps are necessary to reproduce the experiment. First, create a conda environment and install the necessary packages. Second, clone the fairseq repository and modify the file path.sh to export the necessary environment variables.

Conda environment

First, create your environment:

conda create -n speechcodebookanalysis python=3.8
conda activate speechcodebookanalysis

Then, you need to install the following packages:

pip install fairseq
pip install matplotlib
pip install scikit-learn
pip install faiss-cpu

Once the environment is created, also generate the file conda.sh, which could look like this:

source */anaconda3/etc/profile.d/conda.sh
conda activate speechcodebookanalysis

The file conda.sh is sourced at the beginning of run.sh.

Fairseq Repository

You need to clone the fairseq repository to another directory (e.g., ../fairseq).

git clone https://github.com/facebookresearch/fairseq.git

Make sure to modify the file path.sh in order to export the necessary environment variables. The file path.sh is also sourced at the beginning of run.sh.
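The repository does not prescribe the exact contents of path.sh, but a minimal sketch could look like the following, assuming fairseq was cloned to ../fairseq (the variable name FAIRSEQ_ROOT is an assumption; adapt both to your setup):

```shell
# Minimal sketch of path.sh (adjust the path to your fairseq clone)
export FAIRSEQ_ROOT=../fairseq
export PYTHONPATH=$FAIRSEQ_ROOT:$PYTHONPATH
```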

Model File

You need to download and store a model file; in the main script (run.sh) you can specify the model_path. This study is based on the large pretrained model XLSR-53, which can be downloaded from https://github.com/facebookresearch/fairseq/blob/main/examples/wav2vec.

Unfortunately, loading/initializing the model with fairseq version 0.12.2 leads to errors because of mismatched dictionary keys. We therefore provide a script (local/create_xlsr_new.py) which stores a new version of the model that prevents those errors (see also https://github.com/facebookresearch/fairseq/issues/3741).

Stages

Here is a short overview of the stages:

  • stage=0: deletes experiment folder (if it exists) and runs all subsequent stages in a row
  • stage=1:
    • prepares the data given an experiment folder (e.g., DATA/BEAGR/)
    • resulting files are stored in exp_*/data/
  • stage=2:
    • counts frequencies of used codebook entries per speaker
    • if VERBOSE is true this stage also generates log-files
    • if you need to extract features with a CPU, set device = torch.device('cpu') in the script local/codebook_freqs.py (default is device = torch.device('cuda'))
    • resulting files are stored in exp_*/logs/, exp_*/numpy/ and exp_*/txt/
  • stage=3:
    • prepares and stores a similarity matrix in the folder exp_*/numpy/
  • stage=4:
    • performs a PCA on the similarity matrix and plots the PCA space
    • resulting *.png-files are stored in exp_*/plots/analysis/
  • stage=5:
    • performs k-means on the resulting PCA space and assigns classes
    • the parameter nclust in the script run.sh specifies the number of clusters and should be adjusted depending on the task
    • resulting *.png-files (confusion matrices) are stored in exp_*/plots/kmeans/