MCRNet: Underwater image enhancement using multi-color space residual network

Authors: Qin, Ningwei; Wu, Junjun; Liu, Xilin; Lin, Zeqin; Wang, Zhifeng

Affiliation: School of Mechatronic Engineering and Automation, Foshan University, Foshan 528225, China

Journal: Biomimetic Intelligence and Robotics

Year/Volume/Issue: 2024, Vol. 4, No. 3

Pages: 23-33

Subject Classification: 0710 [Science - Biology]; 070207 [Science - Optics]; 08 [Engineering]; 0836 [Engineering - Bioengineering]; 0803 [Engineering - Optical Engineering]; 0702 [Science - Physics]; 0812 [Engineering - Computer Science and Technology (degrees conferrable in Engineering or Science)]

Funding: This work was supported in part by the National Key R&D Program of China (2022YFB4702300); in part by the National Natural Science Foundation of China (62273097); in part by the Guangdong Basic and Applied Basic Research Foundation (2022A1515140044, 2019A1515110304, 2020A1515110255, and 2021B1515120017); in part by the Research Foundation of Universities of Guangdong Province (2019KZDZX1026, 2020KCXTD015, and 2021KCXTD083); in part by the Foshan Key Area Technology Research Foundation (2120001011009); and in part by the Guangdong Philosophy and Social Science Program (GD23XTS03).

Keywords: Image enhancement

Abstract: The selective attenuation and scattering of light in underwater environments cause color distortion and contrast reduction in underwater images, which hampers underwater robot operations even as demand for them grows. To address these issues, we propose a Multi-Color space Residual Network (MCRNet) for underwater image enhancement. Our method exploits the complementary ways in which the RGB, HSV, and Lab color spaces represent color; by drawing on these distinct feature representations, it highlights and fuses the most informative features from the three color spaces. The multi-color space feature fusion module employs a self-attention mechanism. Extensive experiments demonstrate that our method achieves satisfactory color correction and contrast improvement for underwater images, particularly in severely degraded scenes. Consequently, our method outperforms state-of-the-art methods in both subjective visual comparison and objective evaluation metrics. © 2024 The Author(s)
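
For illustration, below is a minimal, hypothetical PyTorch sketch of the multi-color space idea described in the abstract: the same RGB input is converted to HSV and Lab, each branch is encoded separately, and the three sets of features are fused with self-attention before a residual correction is applied. The module name, layer sizes, and the use of kornia for color conversion are assumptions for this sketch, not the authors' MCRNet implementation.

    import torch
    import torch.nn as nn
    import kornia.color as kc  # differentiable color-space conversions (assumed choice)

    class MultiColorFusion(nn.Module):
        """Sketch: encode RGB/HSV/Lab branches and fuse them with self-attention."""
        def __init__(self, channels: int = 32):
            super().__init__()
            # one lightweight encoder per color space (illustrative, not MCRNet's blocks)
            self.enc_rgb = nn.Conv2d(3, channels, 3, padding=1)
            self.enc_hsv = nn.Conv2d(3, channels, 3, padding=1)
            self.enc_lab = nn.Conv2d(3, channels, 3, padding=1)
            self.attn = nn.MultiheadAttention(embed_dim=channels, num_heads=4, batch_first=True)
            self.decode = nn.Conv2d(channels, 3, 3, padding=1)

        def forward(self, rgb: torch.Tensor) -> torch.Tensor:  # rgb: (B, 3, H, W) in [0, 1]
            hsv = kc.rgb_to_hsv(rgb)
            lab = kc.rgb_to_lab(rgb)
            feats = [self.enc_rgb(rgb), self.enc_hsv(hsv), self.enc_lab(lab)]
            b, c, h, w = feats[0].shape
            # treat the three color-space features at each pixel as a 3-token sequence
            tokens = torch.stack([f.flatten(2).transpose(1, 2) for f in feats], dim=2)  # (B, HW, 3, C)
            tokens = tokens.reshape(b * h * w, 3, c)
            fused, _ = self.attn(tokens, tokens, tokens)  # self-attention across color spaces
            fused = fused.mean(dim=1).reshape(b, h, w, c).permute(0, 3, 1, 2)
            # residual correction of the input image, as suggested by the "residual network" naming
            return torch.clamp(rgb + self.decode(fused), 0.0, 1.0)

    # Usage example:
    # model = MultiColorFusion()
    # enhanced = model(torch.rand(1, 3, 64, 64))

Note that per-pixel attention over only three tokens keeps the sequence length constant, so the attention cost scales linearly with image size; this is one plausible way to fuse color-space features, chosen here purely for clarity.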
