Distill, Diffuse, and Semanticize (DDS): Annotation-Free 3D Scene Understanding Based on Multi-Granularity Distillation and Graph-Diffusion-Based Segmentation
arXiv:2605.08293v1 Announce Type: new Abstract: 3D semantic scene understanding has broad applications in digital twins, autonomous driving, smart agriculture, and embodied perception. However, dense point-wise annotation of point clouds is extremely expensive, making fully supervised 3D semantic learning difficult to scale. Recent annotation-free methods can discover semantic regions without manual 3D labels, but they often suffer from weak object-level consistency, inefficient global grouping, and category-agnostic segmented regions. We propose an annotation-free 3D scene semantic understanding method based on multi-granularity distillation and graph-diffusion-based segmentation. The proposed method first leverages structured visual knowledge guidance and superpoint graph diffusion to perform efficient global semantic propagation, alleviating the problem of inconsistent region-level semantics. It then conducts semantic inference through segmentation-cluster association, assigning interpretable category names to segmented 3D regions and improving the overall effectiveness of annotation-free 3D semantic understanding. Extensive experiments on real-world datasets demonstrate the effectiveness of the proposed framework. Compared with strong existing annotation-free baselines, our method improves oAcc, mAcc, and mIoU by up to 5.9%, 8.1%, and 2.4%, respectively. These results highlight the promise of the proposed framework for scalable annotation-free 3D scene understanding, especially in real-world scenarios requiring both object segmentation and semantic recognition.
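The abstract does not include implementation details, but the two core steps it names, semantic propagation by diffusion over a superpoint graph and assigning category names to segmented regions via similarity to text embeddings, can be sketched generically. The snippet below is an illustrative approximation, not the authors' DDS implementation: the update rule (a label-propagation-style diffusion on a symmetrically normalized adjacency), the `alpha`/`iters` parameters, and the function names are all assumptions.

```python
import numpy as np

def diffuse_superpoint_features(feats, edges, weights, alpha=0.85, iters=20):
    """Propagate per-superpoint semantic features over a superpoint graph.

    Illustrative sketch (not the paper's exact algorithm):
    feats   : (N, D) initial features, e.g. distilled from 2D visual models
    edges   : list of (i, j) undirected edges between superpoints
    weights : edge affinities, e.g. geometric/feature similarity
    alpha   : weight of the diffused signal vs. the initial features
    """
    N, _ = feats.shape
    W = np.zeros((N, N))
    for (i, j), w in zip(edges, weights):
        W[i, j] = W[j, i] = w
    # Symmetrically normalized adjacency: S = D^{-1/2} W D^{-1/2}
    d = W.sum(axis=1)
    d[d == 0] = 1.0  # avoid division by zero for isolated superpoints
    d_inv_sqrt = 1.0 / np.sqrt(d)
    S = (W * d_inv_sqrt[:, None]) * d_inv_sqrt[None, :]
    # Iterative diffusion: smooth features globally while anchoring to the input
    X = feats.copy()
    for _ in range(iters):
        X = alpha * (S @ X) + (1 - alpha) * feats
    return X

def name_regions(region_feats, text_embs, names):
    """Assign each region the category whose text embedding is most
    cosine-similar to the region's (diffused) feature (hypothetical helper)."""
    a = region_feats / np.linalg.norm(region_feats, axis=1, keepdims=True)
    b = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    return [names[k] for k in (a @ b.T).argmax(axis=1)]
```

On a toy graph, superpoints connected by strong edges end up with consistent features after diffusion, which is the intuition behind the paper's claim of improved region-level semantic consistency; the naming step then makes each segmented region category-aware rather than category-agnostic.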
