Supplementary material for our paper

Self-Attentional Acoustic Models

Matthias Sperber, Jan Niehues, Graham Neubig, Sebastian Stüker, Alex Waibel
Interspeech 2018

Code

We used XNMT as our sequence-to-sequence toolkit. The code for the pure and interleaved models can be found in xnmt/specialized_encoders/self_attentional_am.py. The code specific to the stacked model variant will be made available soon.
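
For orientation: XNMT builds models from YAML configuration files in which a tag such as !ClassName selects the corresponding Python class, so an encoder implemented in self_attentional_am.py would be selected by naming its class in the encoder section of a configuration. The fragment below is only a sketch; the tag !SAAMSeqTransducer and all parameter names are assumptions for illustration and may not match the identifiers in the released code.

  # Hypothetical encoder fragment; the tag and all parameter names are assumed.
  encoder: !SAAMSeqTransducer
    layers: 4             # number of stacked self-attention blocks (assumed)
    hidden_dim: 512       # model dimension (assumed)
    downsample_factor: 2  # temporal downsampling for long acoustic inputs (assumed)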

Configuration Files

The configuration files for the pure and interleaved models can be downloaded here. The remaining configuration files will be made available soon.
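
Until the full set is available, the skeleton below indicates the overall shape of an XNMT experiment configuration: a top-level !Experiment composing a model, its input readers and encoder, and a training regimen. All tags, names, and values here are illustrative assumptions, not the settings used in the paper; consult the downloadable configuration files for the actual setup.

  # Illustrative skeleton only; all tags, names, and values are assumed.
  saam-pure: !Experiment
    model: !DefaultTranslator
      src_reader: !H5Reader          # acoustic feature reader (name assumed)
        transpose: True              # assumed option
      trg_reader: !PlainTextReader   # transcript target reader (assumed)
        vocab: !Vocab
          vocab_file: vocab.txt      # placeholder path
      encoder: !SAAMSeqTransducer    # hypothetical tag for the self-attentional encoder
        layers: 4                    # assumed
    train: !SimpleTrainingRegimen
      run_for_epochs: 20             # assumed
      trainer: !AdamTrainer
        alpha: 0.0003                # assumed learning rate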