Abstract:
The use of electroencephalography (EEG) in the domain of brain-computer interfaces is now commonplace. EEG for imagined speech reproduction and for observing brain responses to audio stimuli are active areas of research. In this paper, we consider the case of imagined and mouthed non-audible speech recorded with EEG electrodes. We analyze different feature extraction techniques, such as Mel-frequency cepstral coefficients (MFCCs) and log-variance autoregressive (AR) coefficients. Based on these extracted features, we perform pairwise classification of vowels using three different classification models: a Support Vector Machine (SVM), Hidden Markov Models (HMMs), and a k-nearest-neighbor (k-NN) classifier. The proposed methodology is applied to four different data sets, together with preprocessing techniques such as Common Spatial Pattern (CSP) filtering. The data sets principally comprise either mouthing or solely imagining five vowel sounds without speaking or making any muscle movement. The goal of this study is an intercomparison of the different classification models and associated features for pairwise vowel imagery. The proposed approach is validated on the different data sets and offers reasonable accuracies for pairwise classification.
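The pipeline the abstract describes (per-channel feature extraction followed by pairwise classification) can be illustrated with a minimal, self-contained sketch. This is not the paper's implementation: the AR order, the synthetic AR(2) "vowel" classes, and the use of 1-NN in place of the full SVM/HMM/k-NN comparison are all assumptions made purely for illustration.

```python
import math
import random

def ar2_features(x):
    """Fit an AR(2) model by least squares and return the feature
    vector [a1, a2, log residual variance] (log-variance AR features)."""
    r11 = r12 = r22 = b1 = b2 = 0.0
    for t in range(2, len(x)):
        r11 += x[t - 1] * x[t - 1]
        r12 += x[t - 1] * x[t - 2]
        r22 += x[t - 2] * x[t - 2]
        b1 += x[t] * x[t - 1]
        b2 += x[t] * x[t - 2]
    det = r11 * r22 - r12 * r12
    a1 = (b1 * r22 - b2 * r12) / det
    a2 = (r11 * b2 - r12 * b1) / det
    resid = [x[t] - a1 * x[t - 1] - a2 * x[t - 2] for t in range(2, len(x))]
    var = sum(e * e for e in resid) / len(resid)
    return [a1, a2, math.log(var)]

def simulate_ar2(a1, a2, n, rng):
    """Stand-in for an EEG channel: an AR(2) process driven by
    unit-variance Gaussian noise (hypothetical, not real EEG data)."""
    x = [rng.gauss(0, 1), rng.gauss(0, 1)]
    for _ in range(n - 2):
        x.append(a1 * x[-1] + a2 * x[-2] + rng.gauss(0, 1))
    return x

def knn_predict(train, query, k=1):
    """k-NN with Euclidean distance; train is a list of (features, label)."""
    dists = sorted((math.dist(f, query), lab) for f, lab in train)
    top = [lab for _, lab in dists[:k]]
    return max(set(top), key=top.count)

rng = random.Random(0)
# Two hypothetical "vowel" classes with different AR dynamics.
classes = {"a": (1.0, -0.5), "u": (0.2, 0.3)}
train, test = [], []
for label, (a1, a2) in classes.items():
    for i in range(20):
        feats = ar2_features(simulate_ar2(a1, a2, 512, rng))
        (train if i < 15 else test).append((feats, label))

correct = sum(knn_predict(train, f) == lab for f, lab in test)
print(f"pairwise accuracy: {correct}/{len(test)}")
```

In the paper's setting, the raw signals would instead be CSP-filtered multichannel EEG segments, and the same feature vectors would feed the SVM and HMM classifiers for comparison.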
Date of Conference: 15-17 November 2014
Date Added to IEEE Xplore: 15 January 2015