D3RoMa: Disparity Diffusion-based Depth Sensing for Material-Agnostic Robotic Manipulation

Songlin Wei1,4       Haoran Geng2,3       Congyue Deng1,4       Jiayi Chen1,4,*       Wenbo Cui5,6       Chengyang Zhao1,4       Xiaomeng Fang6       Leonidas Guibas3       He Wang1,4,6
1Peking University       2University of California, Berkeley       3Stanford University       4Galbot       5University of Chinese Academy of Sciences       6Beijing Academy of Artificial Intelligence
8th Conference on Robot Learning (CoRL 2024), Munich, Germany.

Depth Estimation for Transparent and Reflective Objects

In these demos, we compare the depth predicted by D3RoMa with the raw sensor depth captured by RealSense D415/D435 cameras.

Abstract

Depth sensing is an important problem for 3D vision-based robotics. Yet, a real-world active stereo or ToF depth camera often produces noisy and incomplete depth which bottlenecks robot performance. In this work, we propose D3RoMa, a learning-based depth estimation framework on stereo image pairs that predicts clean and accurate depth in diverse indoor scenes, even in the most challenging scenarios with translucent or specular surfaces where classical depth sensing completely fails. Key to our method is that we unify depth estimation and restoration into an image-to-image translation problem by predicting the disparity map with a denoising diffusion probabilistic model. At inference time, we further incorporate a left-right consistency constraint as classifier guidance to the diffusion process. Our framework combines recently advanced learning-based approaches and geometric constraints from traditional stereo vision. For model training, we create a large scene-level synthetic dataset with diverse transparent and specular objects to complement existing tabletop datasets. The trained model can be directly applied to real-world in-the-wild scenes and achieves state-of-the-art performance on multiple public depth estimation benchmarks. Further experiments in real environments show that accurate depth prediction significantly improves robotic manipulation in various scenarios.

Supplementary Video

Method Overview


Disparity diffusion with stereo-geometry guidance. Our disparity diffusion-based depth sensing framework takes the raw disparity map $\tilde{D}$ and the left-right stereo image pair $I_l, I_r$ as input. Using the geometry prior from stereo matching between $I_l$ and $I_r$ as guidance for the reverse sampling, our diffusion model gradually performs the denoising process conditioned on $\tilde{D}$ to predict the restored disparity map $x_0$.
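To make the guidance step concrete, here is a minimal PyTorch-style sketch of one guided reverse-sampling step. The names (model, warp_by_disparity, the schedule tensors alphas_cumprod/betas, guidance_scale) and the channel-concatenation conditioning are illustrative assumptions, not the released D3RoMa API; the stereo-geometry guidance is realized here as the gradient of a left-right photometric loss.

    # Hypothetical sketch: DDPM reverse step on a disparity map, conditioned on
    # the raw disparity and nudged by a left-right consistency gradient.
    import torch
    import torch.nn.functional as F

    def warp_by_disparity(img_r, disp):
        """Warp the right image to the left view using a horizontal disparity map.
        img_r: (B, C, H, W) right image; disp: (B, 1, H, W) disparity in pixels."""
        B, _, H, W = img_r.shape
        xs = torch.linspace(-1, 1, W, device=img_r.device).view(1, 1, W).expand(B, H, W)
        ys = torch.linspace(-1, 1, H, device=img_r.device).view(1, H, 1).expand(B, H, W)
        # Left pixel x corresponds to right pixel x - d: shift the (normalized)
        # sampling coordinates left by the disparity.
        grid = torch.stack((xs - 2.0 * disp.squeeze(1) / (W - 1), ys), dim=-1)
        return F.grid_sample(img_r, grid, align_corners=True)

    @torch.no_grad()
    def guided_reverse_step(model, x_t, t, disp_raw, img_l, img_r,
                            alphas_cumprod, betas, guidance_scale=1.0):
        """One guided DDPM reverse step on the disparity sample x_t."""
        a_bar = alphas_cumprod[t]
        # Condition on the raw disparity by channel concatenation (an assumption).
        eps = model(torch.cat([x_t, disp_raw], dim=1), t)
        # Estimate x_0 from the predicted noise.
        x0_hat = (x_t - (1 - a_bar).sqrt() * eps) / a_bar.sqrt()

        # Classifier-style guidance: gradient of the stereo photometric loss
        # with respect to the current disparity estimate.
        with torch.enable_grad():
            d = x0_hat.detach().requires_grad_(True)
            loss = F.l1_loss(warp_by_disparity(img_r, d), img_l)
            grad = torch.autograd.grad(loss, d)[0]

        # Standard DDPM posterior mean, shifted against the consistency gradient.
        mean = (x_t - betas[t] / (1 - a_bar).sqrt() * eps) / (1 - betas[t]).sqrt()
        mean = mean - guidance_scale * grad
        noise = torch.randn_like(x_t) if t > 0 else torch.zeros_like(x_t)
        return mean + betas[t].sqrt() * noise

Iterating this step from pure noise down to t = 0 yields the restored disparity; the guidance scale trades off fidelity to the diffusion prior against the stereo-matching constraint.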

In the Wild Depth Predictions


Generalizability of D3RoMa in the real world. Our method robustly predicts depth for transparent (bottles) and specular (basin and cups) objects in tabletop environments and beyond. For each of the six frames, captured with RealSense D415 and D435 cameras, we show the RGB image, the pseudo-colorized raw disparity map, our prediction, and the resulting point cloud. *For the D435 camera, RGB and depth images are not aligned, for better visualization.

Generalization comparisons with state-of-the-art monocular depth estimation methods.


Generalization comparisons with state-of-the-art monocular depth estimation methods. All results except ours are taken from the respective official web demos. Different methods use different color maps. We found that most monocular methods produce lower-quality depth, even before considering absolute scale.

BibTeX

@inproceedings{wei2024droma,
  title={D3RoMa: Disparity Diffusion-based Depth Sensing for Material-Agnostic Robotic Manipulation},
  author={Songlin Wei and Haoran Geng and Jiayi Chen and Congyue Deng and Wenbo Cui and Chengyang Zhao and Xiaomeng Fang and Leonidas Guibas and He Wang},
  booktitle={8th Annual Conference on Robot Learning},
  year={2024},
  url={https://openreview.net/forum?id=7E3JAys1xO}
}