Deep reinforcement learning for home energy management system control

Authors: Paulo Lissa, Conor Deane, Michael Schukat, Federico Seri, Marcus Keane, Enda Barrett

Affiliations: College of Science and Engineering, National University of Ireland Galway, Ireland; Informatics Research Unit for Sustainable Engineering (IRUSE), Galway, Ireland; Ryan Institute, National University of Ireland Galway, Ireland

Published in: Energy and AI

Year/Volume/Issue: 2021, Volume 3, Issue 1

Pages: 64-72

Subject classification: 0202 [Economics - Applied Economics]; 02 [Economics]; 020205 [Economics - Industrial Economics]

Funding: This research work was funded by the European Union under the RESPOND project, Grant agreement No. 768619

Keywords: Deep reinforcement learning; Residential home energy management; Demand response; Autonomous control

Abstract: The use of machine learning techniques has been proven to be a viable solution for smart home energy management. These techniques autonomously control heating and domestic hot water systems, which are the most relevant loads in a dwelling, helping consumers to reduce energy consumption and also improving their comfort. Moreover, the number of houses equipped with renewable energy resources is increasing, and this is a key element for energy usage optimization, where coordinating loads and production can bring additional savings and reduce peak demand. In this regard, we propose the development of a deep reinforcement learning (DRL) algorithm for indoor and domestic hot water temperature control, aiming to reduce energy consumption by optimizing the usage of PV energy production. Moreover, a methodology for a new dynamic indoor temperature setpoint definition is presented, thus allowing greater flexibility and savings. The results show that the proposed DRL algorithm combined with the dynamic setpoint achieved on average 8% of energy savings compared to a rule-based algorithm, reaching up to 16% of savings over the summer period. Moreover, the users' comfort has not been compromised, as the algorithm is calibrated to not exceed more than 1% of the time out of the specified temperature range. A sensitivity analysis shows that further savings could be achieved if the time out of comfort is increased, which could be agreed according to users' preferences. Regarding demand side management, the DRL control shows efficiency by anticipating and delaying actions for a PV self-consumption optimization, performing over 10% of load shifting. Finally, the renewable energy consumption is 9.5% higher for the DRL-based model compared to the rule-based one, which means less energy is consumed from the grid.
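
To make the control loop summarized in the abstract more concrete, the sketch below shows a minimal reinforcement-learning controller for a single heating load that is rewarded for staying inside a comfort band while preferring hours with PV production. This is an illustrative sketch only: it uses plain tabular Q-learning rather than the deep RL agent and dynamic setpoint methodology described in the paper, and the comfort band, thermal coefficients, heater rating, PV profile, and reward weights are all assumed values chosen for brevity, not taken from the authors' work.

# Illustrative sketch: tabular Q-learning for a toy single-zone heating problem.
# All model parameters below are hypothetical assumptions, not the paper's setup.
import random

ACTIONS = [0.0, 1.0]          # heater off / on (fraction of rated power)
COMFORT = (19.0, 21.0)        # assumed indoor comfort band, degrees C
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1

def discretize(temp_in, pv_kw):
    """Map indoor temperature and PV output to a coarse discrete state."""
    t_bin = min(max(int(temp_in - 15), 0), 10)   # 15..25 C in 1-degree bins
    pv_bin = 1 if pv_kw > 0.5 else 0             # PV available or not
    return (t_bin, pv_bin)

def step(temp_in, action, temp_out, pv_kw):
    """Toy first-order thermal model and reward (assumed for illustration)."""
    heat_gain = 2.0 * action                      # degrees C gained per step when heating
    loss = 0.1 * (temp_in - temp_out)             # losses to outdoors
    next_temp = temp_in + heat_gain - loss
    grid_import = max(action * 2.0 - pv_kw, 0.0)  # 2 kW heater minus PV covers the rest
    comfort_pen = 0.0 if COMFORT[0] <= next_temp <= COMFORT[1] else 5.0
    reward = -grid_import - comfort_pen           # favour PV self-consumption and comfort
    return next_temp, reward

Q = {}  # (state, action index) -> estimated value

def act(state):
    """Epsilon-greedy action selection over the Q table."""
    if random.random() < EPS:
        return random.randrange(len(ACTIONS))
    return max(range(len(ACTIONS)), key=lambda a: Q.get((state, a), 0.0))

for episode in range(500):
    temp_in = 18.0
    for hour in range(24):
        temp_out = 8.0
        pv_kw = 1.5 if 9 <= hour <= 16 else 0.0   # crude daytime PV availability profile
        s = discretize(temp_in, pv_kw)
        a = act(s)
        temp_in, r = step(temp_in, ACTIONS[a], temp_out, pv_kw)
        s2 = discretize(temp_in, pv_kw)
        best_next = max(Q.get((s2, a2), 0.0) for a2 in range(len(ACTIONS)))
        q_sa = Q.get((s, a), 0.0)
        Q[(s, a)] = q_sa + ALPHA * (r + GAMMA * best_next - q_sa)

The reward here trades grid import against comfort-band violations, which mirrors the abstract's stated goal of shifting heating toward PV production hours while keeping the time spent outside the comfort range small; the paper's actual contribution replaces the table with a deep network and a dynamic setpoint definition.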
