## Repository structure
The repository includes data folders which you need to prepare. It also includes example files from the BEA corpus (Hungarian) and the GRASS corpus (Austrian German), making it possible to run an example from scratch. The speech data should be stored in the folder ```BEAGR``` and should look like this:
- BEAGR/data_BEA_CS
- Various speaker (spkID1, spkID2, ...) folders
- Various .wav or .flac files (fs=16kHz)
- BEAGR/data_BEA_RS
- Various speaker (spkID1, spkID2, ...) folders
- Various .wav or .flac files (fs=16kHz)
- BEAGR/data_GR_CS
- Various speaker (spkID1, spkID2, ...) folders
- Various .wav or .flac files (fs=16kHz)
- BEAGR/data_GR_RS
- Various speaker (spkID1, spkID2, ...) folders
- Various .wav or .flac files (fs=16kHz)
As you can see, ```BEAGR``` includes the subfolders ```data_BEA_CS``` (BEA Spontaneous Speech), ```data_BEA_RS``` (BEA Read Speech), ```data_GR_CS``` (GRASS Conversational Speech) and ```data_GR_RS``` (GRASS Read Speech). **Please make sure that the folders follow this naming pattern: ```data_{corpus}_{speakingstyle}```.** Also make sure that all spkIDs are unique identifiers to prevent ambiguities (e.g., the same speaker appearing in both the RS and CS components). The audio files should have a sampling rate of 16 kHz and can be .wav or .flac files. Given this structure, and after installing/preparing all dependencies, you should be able to run the experiment. To run a specific stage of the script on a specific dataset, pass the directory where all your data is stored (here ```BEAGR```) and the stage number as arguments to the `./run.sh` command. For instance, to run stage ```3``` on the example dataset, you would use the following command:
```
./run.sh BEAGR 3
```
The command automatically generates the experiment folder ```exp_BEAGR```.
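If the subfolders do not follow the naming pattern above, the stages may not find your data. The snippet below is an optional sanity check and is not part of the repository; it only assumes the four subfolders and the .wav/.flac files described above, and simply counts speaker folders and audio files so you can spot empty or misnamed directories before starting a run:
```
# Optional sanity check (not part of run.sh): list speaker folders and audio
# files for each of the four expected BEAGR subfolders.
for style in data_BEA_CS data_BEA_RS data_GR_CS data_GR_RS; do
    dir="BEAGR/${style}"
    if [ ! -d "${dir}" ]; then
        echo "Missing folder: ${dir}"
        continue
    fi
    n_spk=$(find "${dir}" -mindepth 1 -maxdepth 1 -type d | wc -l)
    n_audio=$(find "${dir}" -type f \( -name "*.wav" -o -name "*.flac" \) | wc -l)
    echo "${style}: ${n_spk} speaker folders, ${n_audio} audio files"
done
```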
## Reproduction
The following steps are necessary to reproduce the experiment. First, create a conda environment and install the necessary packages. Second, clone the fairseq repository and modify the file ```path.sh``` to export the necessary environment variables.
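A rough sketch of these steps is shown below. The environment name, Python version, requirements file and the ```FAIRSEQ_ROOT``` variable are illustrative placeholders, not names prescribed by the repository; the exact package list and the variables to export are defined by the repository itself (see ```path.sh```):
```
# Sketch only: names and versions below are illustrative placeholders.
conda create -n beagr python=3.9      # "beagr" is an example environment name
conda activate beagr
pip install -r requirements.txt       # assuming a requirements file lists the needed packages

git clone https://github.com/facebookresearch/fairseq.git
# Then edit path.sh so that it exports the required environment variables,
# for example the path to the fairseq checkout (hypothetical variable name):
# export FAIRSEQ_ROOT=$(pwd)/fairseq
```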