Automatic Speaker Recognition Using Mel-Frequency Cepstral Coefficients Through Machine Learning

Authors: Uğur Ayvaz, Hüseyin Gürüler, Faheem Khan, Naveed Ahmed, Taegkeun Whangbo, Abdusalomov Akmalbek Bobomirzaevich

Author Affiliations: Department of Computer Engineering, Istanbul Technical University, Istanbul 34485, Turkey; Department of Information Systems Engineering, Mugla Sitki Kocman University, Mugla 48000, Turkey; Artificial Intelligence Lab, Department of Computer Engineering, Gachon University, Seongnam 13557, Korea; Department of Computer Science, College of Computing and Informatics, University of Sharjah, Sharjah 27272, UAE

Publication: Computers, Materials & Continua

Year/Volume/Issue: 2022, Vol. 71, No. 6

Pages: 5511-5521


Subject Classification: 07 [Science]; 0804 [Engineering - Instrument Science and Technology]; 0701 [Science - Mathematics]; 0812 [Engineering - Computer Science and Technology (degrees may be awarded in Engineering or Science)]; 070101 [Science - Fundamental Mathematics]

Funding: This work was supported by the GRRC program of Gyeonggi province [GRRC-Gachon2020(B04), Development of AI-based Healthcare Devices].

Keywords: Automatic speaker recognition; human voice recognition; spatial pattern recognition; MFCCs; spectrogram; machine learning; artificial intelligence

Abstract: Automatic speaker recognition (ASR) systems are a field of human-machine interaction, and scientists have been using feature extraction and feature matching methods to analyze and synthesize these signals. One of the most commonly used methods for feature extraction is Mel-Frequency Cepstral Coefficients (MFCCs). Recent research shows that MFCCs are successful in processing the voice signal with high accuracy. MFCCs represent a sequence of voice signal-specific features. An experimental analysis is proposed to distinguish Turkish speakers by extracting the MFCCs from their speech signals. Since the human perception of sound is not linear, after the filterbank step in the MFCC method, we converted the obtained log filterbanks into decibel (dB) feature-based spectrograms without applying the Discrete Cosine Transform (DCT). A new dataset was created with the converted spectrograms in a 2-D format. Machine learning algorithms were implemented with a 10-fold cross-validation method to detect the speakers. The highest accuracy of 90.2% was achieved using a Multi-layer Perceptron (MLP) with the tanh activation function. The most important output of this study is the inclusion of the human voice as a new feature set.
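
The abstract describes a concrete pipeline: mel filterbank energies expressed in decibels (the log filterbanks of standard MFCC extraction, with the final DCT omitted), arranged as 2-D spectrogram features and classified under 10-fold cross-validation, with a Multi-layer Perceptron using the tanh activation performing best. The following is a minimal Python sketch of that kind of pipeline, assuming the librosa and scikit-learn libraries; the data/<speaker>/<utterance>.wav layout, number of mel bands, and time-pooling step are illustrative placeholders, not the authors' exact settings.

    import glob
    import os

    import numpy as np
    import librosa
    from sklearn.model_selection import cross_val_score
    from sklearn.neural_network import MLPClassifier

    def db_spectrogram_features(wav_path, n_mels=40):
        """Mel filterbank energies converted to decibels (DCT step omitted),
        pooled over time into a fixed-length feature vector."""
        signal, sr = librosa.load(wav_path, sr=None)
        mel = librosa.feature.melspectrogram(y=signal, sr=sr, n_mels=n_mels)
        mel_db = librosa.power_to_db(mel)  # log filterbanks expressed in dB
        # Mean and standard deviation over time keep the vector length constant.
        return np.concatenate([mel_db.mean(axis=1), mel_db.std(axis=1)])

    # Hypothetical corpus layout: data/<speaker_id>/<utterance>.wav
    paths = sorted(glob.glob("data/*/*.wav"))
    X = np.array([db_spectrogram_features(p) for p in paths])
    y = np.array([os.path.basename(os.path.dirname(p)) for p in paths])

    # MLP with tanh activation, evaluated with 10-fold cross-validation,
    # mirroring the experimental setup reported in the abstract.
    clf = MLPClassifier(activation="tanh", max_iter=1000, random_state=0)
    scores = cross_val_score(clf, X, y, cv=10)
    print("Mean 10-fold accuracy:", scores.mean())

Pooling the dB spectrogram over time is one simple way to obtain equal-length inputs for a classical classifier; the paper's own 2-D representation may differ.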
