
Rotation,Translation and Scale Invariant Sign Word Recognition Using Deep Learning

Authors: Abu Saleh Musa Miah, Jungpil Shin, Md. Al Mehedi Hasan, Md Abdur Rahim, Yuichi Okuyama

Affiliations: School of Computer Science and Engineering, The University of Aizu, Aizuwakamatsu, Fukushima 965-8580, Japan; Department of Computer Science and Engineering, Pabna University of Science and Technology, Pabna, Bangladesh

Published in: Computer Systems Science & Engineering

Year/Volume/Issue: 2023, Vol. 44, No. 3

Pages: 2521-2536

Subject classification: 08 [Engineering], 0812 [Engineering - Computer Science and Technology (degrees conferrable in Engineering or Science)]

Funding: This work was supported by the Competitive Research Fund of The University of Aizu, Japan.

Keywords: sign word recognition; convolutional neural network (CNN); rotation, translation and scaling (RTS); Otsu segmentation

Abstract: Communication between people with disabilities and people who do not understand sign language is a growing social need and can be a tedious task. One of the main functions of sign language is to communicate with each other through hand gestures. Recognition of hand gestures has become an important challenge for the recognition of sign language. Many existing models can produce good accuracy, but when tested with rotated or translated images they may struggle to perform well. To resolve these challenges of hand gesture recognition, we propose a Rotation, Translation and Scale-invariant sign word recognition system using a convolutional neural network (CNN). Our work follows three steps: rotated, translated and scaled (RTS) version dataset generation, gesture segmentation, and sign word classification. First, we enlarge a benchmark dataset of 20 sign words by applying different amounts of rotation, translation and scaling to the original images to create the RTS version dataset. Then we apply the gesture segmentation technique, which consists of three levels: i) Otsu thresholding with YCbCr, ii) morphological analysis (dilation through opening morphology), and iii) watershed segmentation. Finally, our designed CNN model is trained to classify the hand gesture as well as the sign word. The model has been evaluated using the twenty sign word dataset, the five sign word dataset, and the RTS versions of these datasets. We achieved 99.30% accuracy on the twenty sign word dataset, 99.10% on the RTS version of the twenty sign word dataset, 100% on the five sign word dataset, and 98.00% on the RTS version of the five sign word dataset. Thus, our model achieves competitive results against state-of-the-art methods in sign word recognition.
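The first level of the segmentation pipeline described in the abstract is Otsu thresholding. As a minimal sketch of that step only, the pure-NumPy implementation below computes Otsu's global threshold by maximizing the between-class variance over all candidate gray levels; the toy "hand on background" image is a hypothetical example, not the paper's dataset, and the paper additionally works in the YCbCr color space and follows thresholding with morphological opening/dilation and watershed segmentation, which are not shown here.

```python
import numpy as np

def otsu_threshold(gray):
    """Return Otsu's threshold for an 8-bit grayscale image by
    maximizing the between-class variance over all gray levels."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    prob = hist / hist.sum()
    omega = np.cumsum(prob)                  # class-0 probability up to level t
    mu = np.cumsum(prob * np.arange(256))    # cumulative mean up to level t
    mu_t = mu[-1]                            # global mean
    denom = omega * (1.0 - omega)
    denom[denom == 0] = np.inf               # guard empty classes
    sigma_b = (mu_t * omega - mu) ** 2 / denom
    return int(np.argmax(sigma_b))

# Toy bimodal image: gray background (50), bright "hand" region (200).
img = np.full((64, 64), 50, dtype=np.uint8)
img[16:48, 16:48] = 200
t = otsu_threshold(img)      # falls between the two modes
mask = img > t               # foreground (hand) mask
```

Because the toy image is perfectly bimodal, the threshold lands at the first level that separates the two modes, and the resulting mask recovers the 32x32 foreground square exactly; on real hand images the same procedure picks the level that best separates skin from background intensities.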
