Open-vocabulary 3D instance segmentation is a cutting-edge task because it segments 3D instances without relying on predefined categories. However, progress in 3D lags behind its 2D counterpart due to limited annotated 3D data. To address this, recent works first generate 2D open-vocabulary masks with 2D models and then merge them into 3D instances using metrics computed between two neighboring frames. In contrast to these local metrics, we propose a novel metric, the view consensus rate, to better exploit multi-view observations. The key insight is that two 2D masks should be deemed part of the same 3D instance if a significant number of 2D masks from other views contain both of them. Using this metric as the edge weight, we construct a global mask graph in which each mask is a node. By iteratively clustering masks with high view consensus, we obtain a set of clusters, each representing a distinct 3D instance. Notably, our method is training-free. Extensive experiments on publicly available datasets, including ScanNet++, ScanNet200, and Matterport3D, demonstrate that our method achieves state-of-the-art performance in open-vocabulary 3D instance segmentation.
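To make the idea concrete, below is a minimal, hypothetical Python sketch of view consensus and graph clustering. It assumes each 2D mask has already been associated with the set of 3D point indices it back-projects to; one mask "contains" another when it covers most of that mask's points, and masks with high pairwise consensus are merged into clusters. The thresholds, the single greedy merging pass (the paper describes iterative clustering), and all names here are illustrative assumptions, not the authors' implementation.

```python
from itertools import combinations

CONTAIN_THRESH = 0.8    # assumed: fraction of a mask's points another mask must cover
CONSENSUS_THRESH = 0.5  # assumed: minimum view-consensus rate to connect two masks


def contains(container_pts, member_pts):
    """A mask 'contains' another if it covers most of that mask's 3D points."""
    if not member_pts:
        return False
    return len(container_pts & member_pts) / len(member_pts) >= CONTAIN_THRESH


def view_consensus_rate(i, j, masks):
    """Among masks from other views that see mask i or mask j, the fraction
    that sees both (a simplified stand-in for the paper's metric)."""
    pts_i, pts_j = masks[i]["points"], masks[j]["points"]
    support = observed = 0
    for m in masks:
        if m["frame"] in (masks[i]["frame"], masks[j]["frame"]):
            continue  # only count observations from other views
        sees_i = contains(m["points"], pts_i)
        sees_j = contains(m["points"], pts_j)
        observed += sees_i or sees_j
        support += sees_i and sees_j
    return support / observed if observed else 0.0


def cluster_masks(masks):
    """Greedily union masks whose pairwise view consensus is high; each
    resulting cluster of 3D points stands for one instance."""
    parent = list(range(len(masks)))

    def find(x):  # union-find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for i, j in combinations(range(len(masks)), 2):
        if view_consensus_rate(i, j, masks) >= CONSENSUS_THRESH:
            parent[find(i)] = find(j)

    clusters = {}
    for i in range(len(masks)):
        clusters.setdefault(find(i), set()).update(masks[i]["points"])
    return list(clusters.values())


# Toy check: masks 0 and 1 are two partial views of one object; mask 2 in a
# third view covers both, so their view-consensus rate is 1.0 and they merge.
masks = [
    {"frame": 0, "points": {0, 1, 2, 3}},
    {"frame": 1, "points": {1, 2, 3, 4}},
    {"frame": 2, "points": {0, 1, 2, 3, 4}},
]
print(view_consensus_rate(0, 1, masks))  # -> 1.0
print(len(cluster_masks(masks)))         # -> 2 clusters in this tiny example
```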
We integrate our algorithm with Gaussian splatting and test it on a real-world medicine shelf. Using the Gaussian centers as the reconstructed point cloud, we establish pixel-to-point correspondences from the rendering weights and successfully segment all the medicine bottles and boxes. This underscores the value of our work for offline semantic mapping in AR/VR and robotics.
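As a companion sketch, the snippet below shows one way such a correspondence could be realized, assuming the rasterizer can export, per pixel, the indices and alpha-blending weights of its top-K contributing Gaussians. The array names, the top-K export, and the weight threshold are assumptions for illustration, not a real rasterizer API or the authors' code. The resulting per-mask sets of Gaussian indices can then play the role of the per-mask 3D point sets used in the clustering sketch above.

```python
import numpy as np

WEIGHT_THRESH = 0.3  # assumed: minimum blending weight to trust a correspondence


def mask_to_gaussians(mask, contrib_idx, contrib_weight):
    """Map a 2D instance mask to the set of Gaussian centers it covers.

    mask:           (H, W) boolean instance mask from the 2D segmenter
    contrib_idx:    (H, W, K) indices of the top-K Gaussians blended per pixel
    contrib_weight: (H, W, K) their rendering (alpha-compositing) weights
    """
    ys, xs = np.nonzero(mask)
    idx = contrib_idx[ys, xs]      # (N, K) Gaussians under the masked pixels
    w = contrib_weight[ys, xs]     # (N, K) their blending weights
    # For each masked pixel, keep its dominant Gaussian if the weight is high enough.
    rows = np.arange(len(ys))
    best = np.argmax(w, axis=1)
    keep = w[rows, best] >= WEIGHT_THRESH
    return set(idx[rows[keep], best[keep]].tolist())
```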
OVIR-3D: Open-Vocabulary 3D Instance Retrieval Without Training on 3D Data
Open3DIS: Open-Vocabulary 3D Instance Segmentation with 2D Mask Guidance
SAI3D: Segment Any Instance in 3D with Open Vocabularies
SAM-guided Graph Cut for 3D Instance Segmentation
If you have any questions, please feel free to contact Mi Yan at dorisyan@pku.edu.cn, and He Wang at hewang@pku.edu.cn.