Vygon R., Mikhaylovskiy N. (2021) Learning Efficient Representations for Keyword Spotting with Triplet Loss. In: Karpov A., Potapova R. (eds) Speech and Computer. SPECOM 2021. Lecture Notes in Computer Science, vol 12997. Springer, Cham. https://doi.org/10.1007/978-3-030-87802-3_69

In the past few years, triplet loss-based metric embeddings have become a de facto standard for several important computer vision problems, most notably person re-identification. In speech recognition, on the other hand, metric embeddings generated with the triplet loss are rarely used, even for classification problems. We fill this gap by showing that combining a triplet loss-based embedding with a variant of kNN classification, used in place of a cross-entropy loss, significantly improves (by 26% to 38%) the classification accuracy of convolutional networks on the LibriSpeech-derived LibriWords datasets. To this end, we propose a novel triplet mining approach based on phonetic similarity. We also improve on the current best published SOTA (for small-footprint models) for 10+2-class classification on the Google Speech Commands dataset V2 by about 16%, achieving 98.37% accuracy, and on the current best published SOTA for 35-class classification on the same dataset by 47%, achieving 97.0% accuracy.
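To make the described pipeline concrete, the sketch below illustrates the general idea of training a small convolutional encoder with a triplet loss and then classifying keywords with kNN on the learned embeddings, instead of a cross-entropy softmax head. This is a minimal illustration, not the authors' implementation: the encoder architecture, embedding size, margin, spectrogram shapes, and the random placeholder data are all assumptions, and the paper's phonetic similarity-based triplet mining is not shown.

```python
# Illustrative sketch (not the authors' code): triplet-loss embedding + kNN classification.
import torch
import torch.nn as nn
from sklearn.neighbors import KNeighborsClassifier

class KeywordEncoder(nn.Module):
    """Toy convolutional encoder mapping a log-mel spectrogram to an L2-normalized embedding."""
    def __init__(self, emb_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(32, emb_dim)

    def forward(self, x):
        z = self.conv(x).flatten(1)
        return nn.functional.normalize(self.fc(z), dim=1)

encoder = KeywordEncoder()
triplet_loss = nn.TripletMarginLoss(margin=0.5)  # margin is an assumed value
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)

# One training step on a batch of (anchor, positive, negative) spectrograms.
# How triplets are mined (e.g. by phonetic similarity) is the paper's contribution
# and is not reproduced here; random tensors stand in for real utterances.
anchor = torch.randn(8, 1, 40, 100)
positive = torch.randn(8, 1, 40, 100)
negative = torch.randn(8, 1, 40, 100)
loss = triplet_loss(encoder(anchor), encoder(positive), encoder(negative))
optimizer.zero_grad()
loss.backward()
optimizer.step()

# After training: embed labelled keywords and classify new clips with kNN
# on the embedding space instead of a cross-entropy classifier.
with torch.no_grad():
    train_emb = encoder(torch.randn(100, 1, 40, 100)).numpy()
    test_emb = encoder(torch.randn(10, 1, 40, 100)).numpy()
train_labels = torch.randint(0, 12, (100,)).numpy()  # e.g. 10 keywords + silence + unknown

knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(train_emb, train_labels)
predictions = knn.predict(test_emb)
```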