PartManip: Learning Cross-Category Generalizable Part Manipulation Policy from Point Cloud Observations

CVPR 2023

Haoran Geng1,2*    Ziming Li1,2*     Yiran Geng1,2    Jiayi Chen1,3    Hao Dong1,2     He Wang1,2†   

1CFCS, Peking University    2School of EECS, Peking University    3Beijing Academy of Artificial Intelligence   

* equal contributions   † corresponding author  


We introduce PartManip, a large-scale cross-category part manipulation benchmark with diverse objects, realistic settings, and rich annotations. We propose a generalizable vision-based policy learning strategy that boosts part-based object manipulation performance by a large margin; the learned policy generalizes to unseen object categories and to novel objects in the real world.


Learning a generalizable object manipulation policy is vital for an embodied agent to work in complex real-world scenes. Parts, as the shared components in different object categories, have the potential to increase the generalization ability of the manipulation policy and achieve cross-category object manipulation. In this work, we build the first large-scale, part-based cross-category object manipulation benchmark, PartManip, which is composed of 11 object categories, 494 objects, and 1432 tasks in 6 task classes. Compared to previous work, our benchmark is also more diverse and realistic, i.e., having more objects and using sparse-view point clouds as input without oracle information like part segmentation. To tackle the difficulties of vision-based policy learning, we first train a state-based expert with our proposed part-based canonicalization and part-aware rewards, and then distill the knowledge to a vision-based student. We also find an expressive backbone is essential to overcome the large diversity of different objects. For cross-category generalization, we introduce domain adversarial learning for domain-invariant feature extraction. Extensive experiments in simulation show that our learned policy can outperform other methods by a large margin, especially on unseen object categories. We also demonstrate that our method can successfully manipulate novel objects in the real world.
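The part-based canonicalization mentioned above can be illustrated with a small sketch (a hypothetical minimal example in numpy; the function and variable names are ours, not the paper's code): observed points are re-expressed in the coordinate frame of the target part, given the part pose (R, t) in the world frame, so the part's global pose is factored out of the expert's observation.

```python
import numpy as np

def canonicalize(points: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Map world-frame points (N, 3) into the part frame: p' = R^T (p - t)."""
    # Row-wise (p - t) @ R is equivalent to applying R.T to each point.
    return (points - t) @ R

# Example: a part rotated 90 degrees about z and shifted along x.
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([1.0, 0.0, 0.0])

pts = np.array([[1.0, 1.0, 0.0]])   # one world-frame point
print(canonicalize(pts, R, t))      # the same point, expressed in the part frame
```

The same transform applied to the robot state and goal keeps all expert inputs in a shared, part-centric frame across objects.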


Full pipeline


An Overview of Our Policy Learning Pipeline. We first train a state-based expert with part-based canonicalization and part-aware rewards, then distill its knowledge to a vision-based student that takes sparse-view point clouds as input. For cross-category generalization, we introduce domain adversarial learning to extract domain-invariant part features, which significantly improves generalization to unseen object categories.
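The core of domain adversarial learning is a gradient reversal step: a domain classifier is trained to tell domains apart, while the feature extractor receives the negated gradient and is thereby pushed toward domain-invariant features. Below is a toy numpy sketch of this idea (an illustration under our own assumptions, not the paper's implementation; the data, architecture, and names are invented for clarity):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two toy "domains" (e.g., seen vs. unseen object categories), 2-D inputs.
X = np.vstack([rng.normal(+1.0, 1.0, size=(64, 2)),
               rng.normal(-1.0, 1.0, size=(64, 2))])
d = np.concatenate([np.zeros(64), np.ones(64)])  # domain labels

W_f = 0.1 * rng.normal(size=(2, 2))  # linear feature extractor
w_d = 0.1 * rng.normal(size=2)       # logistic domain classifier
lr, lam = 0.1, 1.0                   # lam scales the reversed gradient

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(200):
    F = X @ W_f                       # extracted features
    p = sigmoid(F @ w_d)              # predicted P(domain = 1)
    g = (p - d) / len(d)              # d(BCE loss)/d(logit)
    grad_wd = F.T @ g                 # classifier: ordinary descent
    grad_F = np.outer(g, w_d)         # gradient w.r.t. features ...
    grad_Wf = X.T @ (-lam * grad_F)   # ... negated before the extractor
    w_d -= lr * grad_wd
    W_f -= lr * grad_Wf               # i.e., ascent on the domain loss

# The extractor is rewarded for fooling the classifier, so domain
# accuracy should drift toward chance rather than toward 100%.
acc = float(np.mean((sigmoid(X @ W_f @ w_d) > 0.5) == d))
print(f"domain classification accuracy: {acc:.2f}")
```

In practice the same sign flip is implemented as a gradient reversal layer inserted between the point-cloud backbone and the domain classifier, with `lam` annealed over training.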


If you have any questions, please feel free to contact us:

  • Haoran Geng
  • He Wang