PuzzleNet: Boundary-Aware Feature Matching for Non-Overlapping 3D Point Clouds Assembly
Author Affiliations: State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
Publication: Journal of Computer Science & Technology (计算机科学技术学报(英文版))
Year/Volume/Issue: 2023, Vol. 38, No. 3
Pages: 492-509
Subject Classification: 08 [Engineering]; 080203 [Engineering - Mechanical Design and Theory]; 0802 [Engineering - Mechanical Engineering]
Funding: Supported by the National Natural Science Foundation of China under Grant Nos. U22B2034, 62172416, U21A20515, 62172415, and 62271467, and the Youth Innovation Promotion Association of the Chinese Academy of Sciences under Grant No. 2022131.
Keywords: shape assembly; 3D registration; geometric learning; boundary feature; point cloud
Abstract: We address the 3D shape assembly of multiple geometric pieces without overlaps, a scenario often encountered in 3D shape design, field archeology, and robotics. Existing methods rely on strong assumptions about the number of shape pieces and the coherence of their geometry or semantics. Although 3D registration with complex or low-overlap patterns has drawn increasing attention, few methods consider shape assembly with little or no overlap. To address this problem, we present a novel framework inspired by solving puzzles, named PuzzleNet, which conducts multi-task learning by leveraging both 3D alignment and boundary information. Specifically, we design an end-to-end neural network based on a point cloud transformer with two-way branches for estimating rigid transformations and predicting boundaries simultaneously. The framework is then naturally extended to reassemble multiple pieces into a full shape by using an iterative greedy approach based on the distance between each pair of candidate-matched pieces. To train and evaluate PuzzleNet, we construct two datasets, named ModelPuzzle and DublinPuzzle, based on a synthetic CAD dataset (ModelNet40) and a real-world urban scan dataset (DublinCity), respectively. Experiments demonstrate the effectiveness of our method in solving 3D shape assembly for multiple pieces with arbitrary geometry and inconsistent semantics. Our method surpasses state-of-the-art algorithms by more than 10 times on rotation metrics and 4 times on translation metrics.
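To illustrate the iterative greedy multi-piece assembly described in the abstract, the following Python sketch (not taken from the paper) repeatedly merges the closest pair of pieces until a single shape remains. The helpers `piece_distance` and `estimate_pairwise_transform` are hypothetical stand-ins: in PuzzleNet the pairwise alignment would come from the two-branch point cloud transformer, whereas here a Chamfer-style distance and a centroid shift are used only to keep the sketch runnable.

```python
import numpy as np

def piece_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric Chamfer-style distance between two (N, 3) point clouds.
    Stand-in for the matching score used to rank candidate piece pairs."""
    d_ab = np.min(np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1), axis=1).mean()
    d_ba = np.min(np.linalg.norm(b[:, None, :] - a[None, :, :], axis=-1), axis=1).mean()
    return float(d_ab + d_ba)

def estimate_pairwise_transform(src: np.ndarray, tgt: np.ndarray):
    """Placeholder for the learned pairwise alignment (in PuzzleNet, the
    transformer branch that regresses a rigid transformation). Here it
    simply returns an identity rotation and a centroid shift."""
    R = np.eye(3)
    t = tgt.mean(axis=0) - src.mean(axis=0)
    return R, t

def greedy_assemble(pieces: list) -> np.ndarray:
    """Iteratively merge the closest pair of pieces until one shape remains."""
    pieces = [np.asarray(p, dtype=float) for p in pieces]
    while len(pieces) > 1:
        # Pick the pair of candidate-matched pieces with the smallest distance.
        i, j = min(((i, j) for i in range(len(pieces)) for j in range(i + 1, len(pieces))),
                   key=lambda ij: piece_distance(pieces[ij[0]], pieces[ij[1]]))
        R, t = estimate_pairwise_transform(pieces[j], pieces[i])
        aligned = pieces[j] @ R.T + t  # apply the estimated rigid transform
        merged = np.concatenate([pieces[i], aligned], axis=0)
        # Replace the two pieces with their merged result and continue greedily.
        pieces = [p for k, p in enumerate(pieces) if k not in (i, j)] + [merged]
    return pieces[0]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    parts = [rng.normal(size=(64, 3)) + k for k in range(4)]  # four toy pieces
    full = greedy_assemble(parts)
    print(full.shape)  # (256, 3)
```

The sketch only captures the outer greedy loop; the paper's contribution lies in the pairwise estimator and the boundary-prediction branch that this placeholder omits.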