Improved YOLOv8n Model for Detecting Helmets and License Plates on Electric Bicycles
Affiliation: The College of Computer Science and Engineering, North Minzu University, Yinchuan 750021, China; The Key Laboratory of Images and Graphics Intelligent Processing of State Ethnic Affairs Commission, North Minzu University, Yinchuan 750021, China
Publication: Computers, Materials & Continua
Year/Volume/Issue: 2024, Vol. 80, No. 7
Pages: 449-466
Subject classification: 08 [Engineering]; 0802 [Engineering - Mechanical Engineering]; 080203 [Engineering - Mechanical Design and Theory]
Funding: Supported by the Ningxia Key Research and Development Program (Talent Introduction Special Project) (2022YCZX0013); the North Minzu University 2022 School-Level Scientific Research Platform "Digital Agriculture Enabling Ningxia Rural Revitalization Innovation Team" (2022PT_S10); the Yinchuan City University-Enterprise Joint Innovation Project (2022XQZD009); and the Ningxia Key Research and Development Program (Key Project) (2023BDE02001)
Keywords: YOLOv8; object detection; electric bicycle helmet detection; electric bicycle license plate detection
Abstract: Wearing helmets while riding electric bicycles can significantly reduce head injuries resulting from traffic accidents. To effectively monitor compliance, the utilization of target detection algorithms through traffic cameras plays a vital role in identifying helmet usage by electric bicycle riders and recognizing license plates on electric bicycles. However, manual enforcement by traffic police is time-consuming and labor-intensive. Traditional methods face challenges in accurately identifying small targets such as helmets and license plates using deep learning techniques. This paper proposes an enhanced model for detecting helmets and license plates on electric bicycles, addressing these challenges. The proposed model improves upon YOLOv8n by deepening the network structure, incorporating weighted connections, and introducing lightweight convolutional modules. These modifications aim to enhance the precision of small target recognition while reducing the model's parameters, making it suitable for deployment on low-performance devices in real traffic scenarios. Experimental results demonstrate that the model achieves an mAP@0.5 of 91.8%, showing an 11.5% improvement over the baseline model, with a 16.2% reduction in parameters. Additionally, the model achieves a frames per second (FPS) rate of 58, meeting the accuracy and speed requirements for detection in actual traffic scenarios.
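The abstract reports a throughput of 58 FPS as evidence of real-time suitability. A minimal, hypothetical sketch of how such a figure can be measured is shown below; the `measure_fps` helper and the stand-in detector are illustrative assumptions, not part of the paper, and a real benchmark would call the trained YOLOv8n model's inference routine on camera frames instead.

```python
import time

def measure_fps(detect, frames, warmup=5):
    """Estimate frames-per-second throughput of a detector callable.

    A few warm-up calls are run first (and excluded from timing) so that
    one-time costs such as model loading or kernel compilation do not
    distort the measurement.
    """
    for frame in frames[:warmup]:
        detect(frame)
    start = time.perf_counter()
    for frame in frames:
        detect(frame)
    elapsed = time.perf_counter() - start
    return len(frames) / elapsed

# Stand-in detector: returns an empty list of boxes for every frame.
# In practice this would be the improved YOLOv8n model's predict call.
dummy_detect = lambda frame: []

fps = measure_fps(dummy_detect, frames=[None] * 100)
```

Averaging over many frames (rather than timing a single inference) gives a more stable estimate, which matters when comparing against a real-time threshold such as the camera's frame rate.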