An Innovative Approach Utilizing Binary-View Transformer for Speech Recognition Task
Author affiliations: COMSATS University Islamabad, Islamabad Campus, 45550, Pakistan; Suranaree University of Technology, Nakhon Ratchasima, 30000, Thailand; COMSATS University Islamabad, Lahore Campus, 54000, Pakistan; COMSATS University Islamabad, Vehari Campus, 61100, Pakistan; National University of Sciences & Technology, Islamabad, 45550, Pakistan; Virtual University of Pakistan, Islamabad Campus, 45550, Pakistan
Publication: Computers, Materials & Continua
Year/Volume/Issue: 2022, Vol. 72, No. 9
Pages: 5547-5562
Subject classification: 0831 [Engineering - Biomedical Engineering (degrees awardable in engineering, science, or medicine)]; 0808 [Engineering - Electrical Engineering]; 0809 [Engineering - Electronic Science and Technology (degrees awardable in engineering or science)]; 08 [Engineering]; 0805 [Engineering - Materials Science and Engineering (degrees awardable in engineering or science)]; 0701 [Science - Mathematics]; 0812 [Engineering - Computer Science and Technology (degrees awardable in engineering or science)]; 0801 [Engineering - Mechanics (degrees awardable in engineering or science)]
Funding: This research was supported by Suranaree University of Technology, Thailand, Grant Number BRO7-709-62-12-03.
Keywords: Convolution neural network; multi-head attention; multi-view; RNN; self-attention; speech recognition; transformer
Abstract: Deep learning advancements have greatly improved the performance of speech recognition systems, and most recent systems are based on the Recurrent Neural Network (RNN). Overall, the RNN works fine with small sequence data, but suffers from the gradient vanishing problem in the case of large sequences. The transformer networks have neutralized this issue and have shown state-of-the-art results on sequential or speech-related tasks. Typically, in speech recognition, the input audio is converted into an image using the Mel-spectrogram to illustrate its frequencies over time. The image is classified by a machine learning mechanism to generate a classification result. However, the audio frequency in the image has low resolution, causing inaccurate classification. This paper presents a novel end-to-end binary-view transformer-based architecture for speech recognition to cope with the frequency resolution problem. First, the input audio signal is transformed into a 2D image using the Mel-spectrogram. Then, the modified universal transformers utilize multi-head attention to derive contextual information and different speech-related features. Moreover, a feedforward neural network is also deployed for classification. The proposed system has generated robust results on Google's speech command dataset with an accuracy of 95.16% and with minimal loss. The binary-view transformer eradicates the eventuality of the over-fitting problem by deploying a multi-view mechanism to diversify the input data, and multi-head attention captures multiple contexts from the data's feature map.
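The pipeline the abstract describes (a Mel-spectrogram "image" attended over by multi-head self-attention, followed by a feedforward classifier) can be sketched as below. This is a minimal illustrative sketch, not the paper's implementation: the shapes (98 time frames, 64 Mel bins, 35 command classes), random weights, and function names are all assumptions chosen to mirror typical Google Speech Commands setups.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(x, w_q, w_k, w_v, w_o, num_heads):
    """Scaled dot-product self-attention over a (seq_len, d_model) feature map."""
    seq_len, d_model = x.shape
    d_head = d_model // num_heads
    # Project the input into queries, keys, and values, then split into heads.
    q = (x @ w_q).reshape(seq_len, num_heads, d_head).transpose(1, 0, 2)
    k = (x @ w_k).reshape(seq_len, num_heads, d_head).transpose(1, 0, 2)
    v = (x @ w_v).reshape(seq_len, num_heads, d_head).transpose(1, 0, 2)
    # Each head attends over the whole sequence: softmax(Q K^T / sqrt(d)) V,
    # capturing a different context from the feature map, as in the abstract.
    scores = softmax(q @ k.transpose(0, 2, 1) / np.sqrt(d_head), axis=-1)
    context = (scores @ v).transpose(1, 0, 2).reshape(seq_len, d_model)
    return context @ w_o

rng = np.random.default_rng(0)
d_model, num_heads, num_classes, frames = 64, 4, 35, 98
# Stand-in for a Mel-spectrogram image: 98 time frames x 64 Mel bins
# (hypothetical shapes; the real front end computes these from 1 s of audio).
spec = rng.standard_normal((frames, d_model))
w = [rng.standard_normal((d_model, d_model)) * 0.1 for _ in range(4)]
attended = multi_head_self_attention(spec, *w, num_heads)
# Feedforward classification head: mean-pool over time, then a linear layer.
w_cls = rng.standard_normal((d_model, num_classes)) * 0.1
logits = attended.mean(axis=0) @ w_cls
pred = int(np.argmax(logits))
```

In the paper's binary-view setup, two such views of the input would be processed and combined to diversify the data; the sketch shows only a single view to keep the attention mechanics visible.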