Enhanced 3D Point Cloud Reconstruction for Light Field Microscopy Using U-Net-Based Convolutional Neural Networks
Author Affiliations: Department of Information and Communication Engineering, Chungbuk National University, Cheongju-si, Chungcheongbuk-do 28644, Korea; Department of Electronic Engineering, University of Suwon, Hwaseong-si, Gyeonggi-do 18323, Korea
Publication: Computer Systems Science & Engineering
Year/Volume/Issue: 2023, Vol. 47, Issue 12
Pages: 2921-2937
Subject Classification: 12 [Management]; 1201 [Management - Management Science and Engineering (degrees awardable in Management or Engineering)]; 081104 [Engineering - Pattern Recognition and Intelligent Systems]; 08 [Engineering]; 0835 [Engineering - Software Engineering]; 0811 [Engineering - Control Science and Engineering]; 0812 [Engineering - Computer Science and Technology (degrees awardable in Engineering or Science)]
Funding: Supported by the National Research Foundation of Korea (NRF) (NRF-2018R1D1A3B07044041 & NRF-2020R1A2C1101258); supported by the MSIT (Ministry of Science and ICT), Korea, under the ITRC (Information Technology Research Center) Support Program (IITP-2023-2020-0-01846); this work was conducted during the research year of Chungbuk National University in 2023.
Keywords: 3D reconstruction; 3D modeling; point cloud; depth estimation; integral imaging; light field microscopy; 3D-CNN; U-Net; deep learning; machine intelligence
Abstract: This article describes a novel approach for enhancing three-dimensional (3D) point cloud reconstruction for light field microscopy (LFM) using a U-Net architecture-based fully convolutional neural network (CNN). Since the directional view of the LFM is limited, noise and artifacts make it difficult to reconstruct the exact shape of 3D point clouds. The existing methods suffer from these problems due to the self-occlusion of the model. This manuscript proposes a deep fusion learning (DL) method that combines a 3D CNN with a U-Net-based model as a feature extractor. The sub-aperture images obtained from the light field microscope are aligned to form a light field data cube for preprocessing. Multi-stream 3D CNNs and a U-Net architecture are applied to obtain the depth features from the directional sub-aperture LF data cube. For the enhancement of the depth map, dual iteration-based weighted median filtering (WMF) is used to reduce surface noise and improve the accuracy of the depth map. Generating a 3D point cloud involves combining two key elements: the enhanced depth map and the central view of the light field image. The proposed method is validated using synthesized Heidelberg Collaboratory for Image Processing (HCI) and real-world LFM datasets. The results are compared with different state-of-the-art methods. The structural similarity index (SSIM) gains for boxes, cotton, pillow, and pens are 0.9760, 0.9806, 0.9940, and 0.9907, respectively. Moreover, the discrete entropy (DE) values for the LFM depth maps exhibit better performance than other existing methods.
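The abstract outlines a pipeline of three reusable steps: stacking the directional sub-aperture views into a light field data cube, refining the CNN depth estimate with dual-iteration filtering, and combining the enhanced depth map with the central view to form a colored point cloud. The Python sketch below is a minimal illustration of that data flow only, not the authors' implementation: the 3D CNN/U-Net stage is replaced by a placeholder, a plain median filter stands in for the weighted median filtering (WMF), and names such as `focal_length` and the pinhole back-projection model are assumptions not taken from the paper.

```python
# Illustrative sketch only: the paper's exact network, filter weights, and
# camera model are not given in the abstract; values below are placeholders.
import numpy as np
from scipy.ndimage import median_filter  # stand-in for weighted median filtering

def build_lf_data_cube(sub_aperture_images):
    """Align directional sub-aperture views into a light field data cube.

    sub_aperture_images: list of HxW arrays from the LFM.
    Returns an array of shape (num_views, H, W).
    """
    return np.stack(sub_aperture_images, axis=0)

def refine_depth_map(depth_map, iterations=2, window=5):
    """Dual-iteration filtering of the CNN depth estimate.

    A plain median filter is used here as a simple stand-in for the
    dual iteration-based WMF described in the abstract.
    """
    refined = depth_map
    for _ in range(iterations):
        refined = median_filter(refined, size=window)
    return refined

def depth_to_point_cloud(depth_map, central_view, focal_length=1.0):
    """Combine the enhanced depth map with the central LF view into a
    colored 3D point cloud (one point per pixel, pinhole back-projection)."""
    h, w = depth_map.shape
    cx, cy = w / 2.0, h / 2.0
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_map
    x = (u - cx) * z / focal_length
    y = (v - cy) * z / focal_length
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    colors = central_view.reshape(h * w, -1)
    return points, colors

# Example with random stand-in data: a 9x9 angular grid of 64x64 views.
views = [np.random.rand(64, 64) for _ in range(81)]
cube = build_lf_data_cube(views)          # input to the 3D CNN / U-Net stage
coarse_depth = cube.mean(axis=0)          # placeholder for the network's depth output
depth = refine_depth_map(coarse_depth)
pts, cols = depth_to_point_cloud(depth, central_view=views[40])
print(cube.shape, pts.shape, cols.shape)
```

In the paper's pipeline, `coarse_depth` would instead come from the multi-stream 3D CNN and U-Net feature extractor applied to the data cube; the remaining steps show only how such a depth map and the central view could be fused into a point cloud.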