
Implementation of Rapid Code Transformation Process Using Deep Learning Approaches

Authors: Bao Rong Chang, Hsiu-Fen Tsai, Han-Lin Chou

Affiliations: Department of Computer Science and Information Engineering, National University of Kaohsiung, Kaohsiung 811, Taiwan; Department of Fragrance and Cosmetic Science, Kaohsiung Medical University, Kaohsiung 811, Taiwan

Published in: Computer Modeling in Engineering & Sciences

Year/Volume/Issue: 2023, Vol. 136, No. 7

Pages: 107-134

Subject classification: 12 [Management]; 1201 [Management - Management Science and Engineering (degrees in Management or Engineering)]; 081104 [Engineering - Pattern Recognition and Intelligent Systems]; 08 [Engineering]; 0835 [Engineering - Software Engineering]; 0811 [Engineering - Control Science and Engineering]; 0812 [Engineering - Computer Science and Technology (degrees in Engineering or Science)]

Funding: Supported by the Ministry of Science and Technology, Taiwan, under Grant Nos. MOST 111-2221-E-390-012 and MOST 111-2622-E-390-001

Keywords: code transformation model; variational simhash; piecewise longest common subsequence; explainable AI; LIME

Abstract: Our previous work introduced the newly generated program using the code transformation model GPT-2, verifying the generated programming codes through simhash (SH) and longest common subsequence (LCS). However, the entire code transformation process encountered a time-consuming problem. Therefore, the objective of this study is to speed up the code transformation process significantly. This paper proposes deep learning approaches for modifying SH using a variational simhash (VSH) algorithm and replacing LCS with a piecewise longest common subsequence (PLCS) algorithm to accelerate the verification process in the test phase. Besides the code transformation model GPT-2, this study also introduces Microsoft MASS and Facebook BART for a comparative analysis of their performance. Moreover, the explainable AI technique using local interpretable model-agnostic explanations (LIME) can also interpret the decision-making of the models. The experimental results show that VSH can reduce the number of qualified programs by 22.11%, and PLCS can reduce the execution time of selected pocket programs by 32.39%. As a result, the proposed approaches can significantly speed up the entire code transformation process by 1.38 times on average compared with our previous work.
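The abstract refers to simhash (SH) and longest common subsequence (LCS) as the checks used to verify generated code in the authors' earlier pipeline. As a rough illustration only, the following Python sketch shows those two standard baseline algorithms; the token-level granularity, 64-bit fingerprint width, and MD5 hashing are assumptions made for the example, and this is not an implementation of the paper's VSH or PLCS variants.

import hashlib

def simhash(tokens, bits=64):
    # Classic simhash: hash each token, accumulate signed bit votes,
    # then keep a 1 wherever the vote is positive.
    votes = [0] * bits
    for tok in tokens:
        h = int(hashlib.md5(tok.encode("utf-8")).hexdigest(), 16)
        for i in range(bits):
            votes[i] += 1 if (h >> i) & 1 else -1
    fingerprint = 0
    for i in range(bits):
        if votes[i] > 0:
            fingerprint |= 1 << i
    return fingerprint

def hamming_distance(a, b):
    # Number of differing bits between two fingerprints.
    return bin(a ^ b).count("1")

def lcs_length(a, b):
    # Standard dynamic-programming longest common subsequence length.
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

# Example: compare a generated snippet against a reference snippet (whitespace tokens).
reference = "for i in range(10): print(i)".split()
generated = "for j in range(10): print(j)".split()
print(hamming_distance(simhash(reference), simhash(generated)))  # small distance => similar
print(lcs_length(reference, generated))  # length of the shared token subsequence

In this baseline scheme, a small Hamming distance between fingerprints flags the generated code as a near-duplicate candidate, and the LCS length then quantifies how much of the reference sequence the candidate preserves; the paper's contribution is to replace these with VSH and PLCS to reduce verification time.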
