Collision Observation-Based Optimization of Low-Power and Lossy IoT Network Using Reinforcement Learning

Authors: Arslan Musaddiq, Rashid Ali, Jin-Ghoo Choi, Byung-Seo Kim, Sung-Won Kim

Affiliations: Department of Information and Communication Engineering, Yeungnam University, Gyeongsan-si 8541, South Korea; School of Intelligent Mechatronics Engineering, Sejong University, Seoul 05006, South Korea; Department of Software and Communications Engineering, Hongik University, Seoul 30016, South Korea

Published in: Computers, Materials & Continua

Year/Volume/Issue: 2021, Vol. 67, No. 4

Pages: 799-814

Subject classification: 0809 [Engineering - Electronic Science and Technology (Engineering or Science degrees may be conferred)]; 08 [Engineering]

Funding: Supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (No. 2018R1A2B6002399)

Keywords: Internet of Things; RPL; MAC protocols; reinforcement learning; Q-learning

Abstract: The Internet of Things (IoT) has numerous applications in every domain, e.g., smart cities providing intelligent services to sustainable communities. The next generation of IoT networks is expected to be densely deployed in resource-constrained and lossy environments. Densely deployed nodes producing radically heterogeneous traffic patterns cause congestion and collisions in the network. At the medium access control (MAC) layer, mitigating channel collisions remains one of the main challenges of future IoT networks. Moreover, the standardized network layer uses a ranking mechanism based on hop counts and expected transmission counts (ETX), which often does not adapt to the dynamic and lossy environment and impacts performance. The ranking mechanism also requires large control overheads to update rank information. Resource-constrained IoT devices operating in a low-power and lossy network (LLN) environment need an efficient solution to handle these issues. Reinforcement learning (RL) algorithms such as Q-learning have recently been utilized to solve learning problems in LLN devices. Therefore, in this paper, an RL-based optimization of dense LLN IoT devices with heavy heterogeneous traffic is proposed. The proposed protocol learns collision information from the MAC layer and makes intelligent decisions at the network layer; it also enhances the operation of the trickle timer algorithm. A Q-learning model is employed to adaptively learn the channel collision probability and network-layer ranking states with accumulated reward values. Based on simulations using Contiki 3.0 Cooja, the proposed intelligent scheme achieves a lower packet loss ratio, higher throughput, lower control overheads, and less energy consumption than other state-of-the-art mechanisms.
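Since this record contains only the abstract, the following is a minimal, hypothetical sketch of the general idea it describes: a Q-learning agent scoring candidate RPL parents from observed MAC-layer collision probability and ETX. It is not the authors' implementation; the reward shape, parameter values, and node names are illustrative assumptions.

```python
import random

# Hypothetical sketch (not the paper's protocol): each candidate parent is
# treated as an action; the reward penalizes observed collisions and high ETX.
ALPHA = 0.1    # learning rate
GAMMA = 0.9    # discount factor
EPSILON = 0.1  # exploration rate

def reward(collision_prob, etx):
    # Illustrative reward: fewer collisions and lower ETX score higher.
    return 1.0 - collision_prob - 0.1 * etx

def choose_parent(q_table, parents):
    # Epsilon-greedy selection over candidate parents.
    if random.random() < EPSILON:
        return random.choice(parents)
    return max(parents, key=lambda p: q_table.get(p, 0.0))

def update_q(q_table, parent, collision_prob, etx, parents):
    # Standard Q-learning update toward reward plus discounted best value.
    best_next = max(q_table.get(p, 0.0) for p in parents)
    old = q_table.get(parent, 0.0)
    q_table[parent] = old + ALPHA * (reward(collision_prob, etx) + GAMMA * best_next - old)

# Toy episode: parent "A" is simulated to see far more collisions than "B",
# so its accumulated Q-value should end up lower.
random.seed(42)
q = {}
parents = ["A", "B"]
for _ in range(1000):
    p = choose_parent(q, parents)
    cp, etx = (0.6, 3.0) if p == "A" else (0.1, 1.5)
    update_q(q, p, cp, etx, parents)
```

After enough iterations the agent's preference shifts to the less collision-prone parent, which mirrors the paper's stated goal of feeding MAC-layer collision observations into network-layer decisions.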
