ISCA Archive Interspeech 2016

Minimization of Regression and Ranking Losses with Shallow Neural Networks on Automatic Sincerity Evaluation

Hung-Shin Lee, Yu Tsao, Chi-Chun Lee, Hsin-Min Wang, Wei-Cheng Lin, Wei-Chen Chen, Shan-Wen Hsiao, Shyh-Kang Jeng

To estimate the degree of sincerity conveyed by a speech utterance and received by listeners, we propose an instance-based learning framework with shallow neural networks. The framework acts not only as a regressor that fits the predicted value to the actual value, but also as a ranker that preserves the relative target magnitude between each pair of utterances, with the aim of achieving a higher Spearman’s rank correlation coefficient. In addition to describing how to simultaneously minimize the regression and ranking losses, we address how utterance pairs are formed in the training and evaluation phases through two realizations. The intuitive one relies on random sampling, while the other seeks representative utterances, named anchors, to form non-stochastic pairs. Our system outperforms the baseline by more than 25% relative improvement on the development set.
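
The abstract describes a shallow network trained to minimize a regression loss and a pairwise ranking loss at the same time. The sketch below (Python/PyTorch) illustrates one plausible way to combine the two objectives on utterance pairs; the layer sizes, activation, margin, and mixing weight alpha are illustrative assumptions, not values or architecture details taken from the paper.

import torch
import torch.nn as nn

class ShallowRegressorRanker(nn.Module):
    # A small two-layer network producing one sincerity score per utterance.
    # Hidden size and activation are assumptions for illustration only.
    def __init__(self, input_dim, hidden_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(input_dim, hidden_dim),
            nn.Tanh(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)

def combined_loss(model, x_a, y_a, x_b, y_b, alpha=0.5, margin=0.1):
    # Regression term: fit each predicted score to its target value.
    pred_a, pred_b = model(x_a), model(x_b)
    mse = nn.functional.mse_loss(pred_a, y_a) + nn.functional.mse_loss(pred_b, y_b)
    # Ranking term: preserve which utterance in the pair has the larger target.
    target = torch.sign(y_a - y_b)
    rank = nn.functional.margin_ranking_loss(pred_a, pred_b, target, margin=margin)
    return mse + alpha * rank

At training time, pairs (x_a, x_b) could be drawn either by random sampling or against fixed anchor utterances, corresponding to the two pair-construction realizations mentioned in the abstract; how those anchors are chosen is not specified here and would follow the paper.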


doi: 10.21437/Interspeech.2016-756

Cite as: Lee, H.-S., Tsao, Y., Lee, C.-C., Wang, H.-M., Lin, W.-C., Chen, W.-C., Hsiao, S.-W., Jeng, S.-K. (2016) Minimization of Regression and Ranking Losses with Shallow Neural Networks on Automatic Sincerity Evaluation. Proc. Interspeech 2016, 2031-2035, doi: 10.21437/Interspeech.2016-756

@inproceedings{lee16c_interspeech,
  author={Hung-Shin Lee and Yu Tsao and Chi-Chun Lee and Hsin-Min Wang and Wei-Cheng Lin and Wei-Chen Chen and Shan-Wen Hsiao and Shyh-Kang Jeng},
  title={{Minimization of Regression and Ranking Losses with Shallow Neural Networks on Automatic Sincerity Evaluation}},
  year=2016,
  booktitle={Proc. Interspeech 2016},
  pages={2031--2035},
  doi={10.21437/Interspeech.2016-756}
}