Speaker Accent Recognition Using MFCC Feature Extraction and Machine Learning Algorithms


Ayrancı A. A., Atay S., Yıldırım T.

International Journal of Advances in Engineering and Pure Sciences (Online), vol. 33, no. 0, pp. 17-27, 2021 (Peer-Reviewed Journal)

Abstract

Speech and speaker recognition systems aim to analyze the parametric information contained in the human voice and to recognize it at the highest possible rate. One of the most important features in the audio signal for a speaker to be recognized successfully by such a system is the speaker's accent. Speaker accent recognition systems are based on the analysis of patterns such as the way the speaker talks and the word choices made while speaking. In this study, data obtained with the MFCC feature extraction technique from the voice signals of 367 speakers with 7 different accents were used. The data of 330 speakers in the data set were taken from the "Speaker Accent Recognition" data set in the UC Irvine Machine Learning (ML) open data repository. The data of the remaining 37 speakers were obtained by converting the voice recordings in the "Speech Accent Archive" data set created by George Mason University into numerical data using the MFCC feature extraction technique. Nine ML classification algorithms were used in the designed speaker accent recognition system. In addition, the k-fold cross-validation technique was used to test the data set independently; in this way, the performance of the ML algorithms is shown when the data set is divided into k parts. Information about the classification algorithms used in the designed system and the hyperparameter optimizations applied to these algorithms is also given. The success of the classification algorithms is reported with performance metrics.
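
The pipeline summarized above (MFCC feature extraction followed by k-fold cross-validated classification) can be sketched as follows. This is a minimal illustrative example, not the authors' implementation: the use of librosa and scikit-learn, the file names, the number of MFCC coefficients, the 10-fold split, and the three classifiers shown here are assumptions made for demonstration only.

    import numpy as np
    import librosa
    from sklearn.model_selection import StratifiedKFold, cross_val_score
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.svm import SVC
    from sklearn.ensemble import RandomForestClassifier

    def extract_mfcc(path, n_mfcc=12):
        # Load one voice recording and return its mean MFCC vector.
        # n_mfcc=12 is an assumed coefficient count, not taken from the paper.
        signal, sr = librosa.load(path, sr=None)
        mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)
        return mfcc.mean(axis=1)  # average over frames: one vector per speaker

    # X: speakers-by-coefficients feature matrix, y: accent labels.
    # The file names below are hypothetical placeholders for the combined data set.
    X = np.load("mfcc_features.npy")
    y = np.load("accent_labels.npy")

    # k-fold cross-validation: the data set is split into k parts, and each part
    # is used once as an independent test set while the rest is used for training.
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)

    classifiers = {
        "kNN": KNeighborsClassifier(n_neighbors=5),
        "SVM": SVC(kernel="rbf", C=1.0),
        "Random Forest": RandomForestClassifier(n_estimators=100, random_state=42),
    }

    for name, clf in classifiers.items():
        scores = cross_val_score(clf, X, y, cv=cv)
        print(f"{name}: mean accuracy {scores.mean():.3f} +/- {scores.std():.3f}")

Hyperparameters such as the number of neighbours, the SVM kernel, or the number of trees would then be tuned per algorithm, for example with a grid search over the same cross-validation folds, which corresponds to the hyperparameter optimization step mentioned in the abstract.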