ISCA Archive Interspeech 2021

A Meta-Learning Approach for User-Defined Spoken Term Classification with Varying Classes and Examples

Yangbin Chen, Tom Ko, Jianping Wang

Recently we formulated a user-defined spoken term classification task as a few-shot learning task and tackled it using the Model-Agnostic Meta-Learning (MAML) algorithm. Our results show that the meta-learning approach performs much better than conventional supervised learning and transfer learning on this task, especially with limited training data. In this paper, we extend our work by addressing a more practical problem in the user-defined scenario, where users can define any number of spoken terms and provide any number of enrollment audio examples for each spoken term. From the perspective of few-shot learning, this is an N-way, K-shot problem with varying N and K. In our work, we relax the values of N and K of each meta-task during training instead of assigning fixed values to them, which differs from what most meta-learning algorithms do. We adopt a metric-based meta-learning algorithm named Prototypical Networks (ProtoNet), as it avoids exhaustive fine-tuning when N varies. Furthermore, we use the Max-Mahalanobis Center (MMC) loss as an effective regularizer to address the degradation of ProtoNet when K varies. Experiments on the Google Speech Commands dataset demonstrate that our proposed method outperforms the conventional fixed N-way, K-shot setting in most testing tasks.
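For illustration only, the following is a minimal PyTorch sketch (not the authors' code) of the two ingredients the abstract describes: meta-training episodes sampled with randomly drawn N and K, and the ProtoNet episode loss, whose prototype computation works unchanged for any episode size. All function names, argument names, and sampling ranges here are our own assumptions.

```python
import random

import torch
import torch.nn.functional as F


def sample_episode(data_by_class, n_range=(2, 10), k_range=(1, 5), n_query=5):
    """Sample one training episode with N and K drawn at random,
    relaxing the usual fixed N-way, K-shot episode construction."""
    n = random.randint(*n_range)          # number of classes in this episode
    k = random.randint(*k_range)          # support examples per class
    classes = random.sample(sorted(data_by_class), n)
    support, query = [], []
    for label, c in enumerate(classes):   # relabel episode classes 0..N-1
        examples = random.sample(data_by_class[c], k + n_query)
        support += [(x, label) for x in examples[:k]]
        query += [(x, label) for x in examples[k:]]
    return support, query


def protonet_episode_loss(embed, support_x, support_y, query_x, query_y):
    """Prototypical Networks loss for a single episode. Nothing here
    depends on a fixed N or K, so episodes of different sizes can be mixed."""
    z_support = embed(support_x)                       # (N*K, D) embeddings
    z_query = embed(query_x)                           # (Q, D) embeddings
    n_classes = int(support_y.max()) + 1
    # Prototype of each class = mean of its support embeddings.
    prototypes = torch.stack(
        [z_support[support_y == c].mean(dim=0) for c in range(n_classes)]
    )                                                  # (N, D)
    # Classify queries by negative squared distance to each prototype.
    logits = -torch.cdist(z_query, prototypes) ** 2    # (Q, N)
    return F.cross_entropy(logits, query_y)
```

The MMC regularizer described in the abstract is not shown; it would additionally pull each embedding toward a fixed, maximally separated class center rather than toward data-dependent prototypes alone.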


doi: 10.21437/Interspeech.2021-147

Cite as: Chen, Y., Ko, T., Wang, J. (2021) A Meta-Learning Approach for User-Defined Spoken Term Classification with Varying Classes and Examples. Proc. Interspeech 2021, 4224-4228, doi: 10.21437/Interspeech.2021-147

@inproceedings{chen21u_interspeech,
  author={Yangbin Chen and Tom Ko and Jianping Wang},
  title={{A Meta-Learning Approach for User-Defined Spoken Term Classification with Varying Classes and Examples}},
  year={2021},
  booktitle={Proc. Interspeech 2021},
  pages={4224--4228},
  doi={10.21437/Interspeech.2021-147}
}