1 Galbot
2 Peking University
3 The University of Hong Kong
4 Beijing Academy of Artificial Intelligence
† corresponding author
Embodied foundation models are gaining increasing attention for their zero-shot generalization, scalability, and adaptability to new tasks through few-shot post-training. However, existing models rely heavily on real-world data, which is costly and labor-intensive to collect. Synthetic data offers a cost-effective alternative, yet its potential remains largely underexplored. To bridge this gap, we explore the feasibility of training Vision-Language-Action (VLA) models entirely on large-scale synthetic action data. We curate SynGrasp-1B, a billion-frame robotic grasping dataset generated in simulation with photorealistic rendering and extensive domain randomization. Building on this, we present GraspVLA, a VLA model pretrained on large-scale synthetic action data as a foundation model for grasping tasks. GraspVLA integrates autoregressive perception tasks and flow-matching-based action generation into a unified Chain-of-Thought process, enabling joint training on synthetic action data and Internet semantics data. This design helps mitigate sim-to-real gaps and facilitates the transfer of learned actions to a broader range of Internet-covered objects, achieving open-vocabulary generalization in grasping. Extensive evaluations across real-world and simulation benchmarks demonstrate GraspVLA's strong zero-shot generalizability and few-shot adaptability to specific human preferences. We will release the SynGrasp-1B dataset and pre-trained weights to benefit the community.
Pre-trained on our billion-scale dataset, GraspVLA demonstrates strong zero-shot generalization across six aspects: distractors, spatial poses, categories, lighting, backgrounds, and closed-loop actions.
Cluttered scenes with 30+ distractors.
Dynamic distractors.
Various lighting conditions. In the second video, even when the object is moved to a new location under dark lighting conditions, GraspVLA can still track and grasp it.
Balls at different heights.
Eggs with different planar poses.
Table with different textures.
Wall with changing colors.
Co-trained with grounding tasks on Internet data, GraspVLA generalizes to novel categories without any action labels.
GraspVLA can automatically make closed-loop adjustments in response to disturbances until the task is completed.
In industrial scenarios, the pre-trained GraspVLA can grasp any part, but it struggles to identify parts referred to by specialized names.
A small amount of data with only bounding-box annotations helps GraspVLA master all the rare parts.
In a retail scenario, with a few trajectories on one kind of bottle, GraspVLA learns to sequentially pick up bottles from a densely packed environment. This behavior also transfers to bottles unseen during post-training.
GraspVLA learns to grasp mugs with a specific grasping-pose preference from a few demonstrations, and generalizes to new mugs.
We introduce SynGrasp-1B, a billion-frame grasping dataset generated in simulation, featuring photorealistic rendering and extensive domain randomization over initial robot poses, object poses, backgrounds, lighting, and materials.
Data generation pipeline: We select over 10,000 object meshes from 240 categories in Objaverse and randomly place them on the table (left). We then use BoDex to generate stable grasps for each object and use CuRobo to plan grasping trajectories (middle). Finally, we apply domain randomization to materials, lighting, camera views, and backgrounds to simulate and render the trajectories (right).
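To make the pipeline concrete, below is a minimal Python sketch of one generation episode. The stage callables (`sample_scene`, `synthesize_grasps`, `plan_trajectory`, `render`) are hypothetical placeholders standing in for the scene sampler, BoDex grasp synthesis, CuRobo motion planning, and the renderer; they are not the released tooling.

```python
import random

def generate_episode(object_pool, sample_scene, synthesize_grasps,
                     plan_trajectory, render, rng=None):
    """One SynGrasp-1B-style episode: scene -> grasp -> trajectory -> rendering.

    The four stage callables are hypothetical stand-ins for the scene sampler,
    BoDex grasp synthesis, CuRobo motion planning, and the renderer.
    """
    rng = rng or random.Random()

    # 1) Scene layout: sample object meshes and scatter them on the tabletop.
    meshes = rng.sample(object_pool, k=rng.randint(1, 8))
    scene = sample_scene(meshes)

    # 2) Grasp synthesis: stable grasp candidates for a chosen target object.
    target = rng.choice(scene["objects"])
    grasp = synthesize_grasps(target)[0]

    # 3) Motion planning: a collision-free trajectory reaching the grasp.
    trajectory = plan_trajectory(scene, grasp)

    # 4) Domain-randomized rendering: vary material, lighting, camera, background.
    frames = render(scene, trajectory,
                    randomize=("material", "lighting", "camera", "background"))

    return {
        "frames": frames,                  # rendered RGB observations
        "actions": trajectory["actions"],  # per-step robot actions
        "target_bbox": target["bbox_2d"],  # 2D box label for the grounding task
        "grasp_pose": grasp["pose"],       # grasp pose label
    }
```

Repeating this loop over the 10,000+ Objaverse meshes with fresh randomization each time is what scales the dataset to a billion frames.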
GraspVLA consists of an autoregressive vision-language backbone and a flow-matching-based action expert. It exploits the synergy between Internet grounding data and synthetic action data through a Progressive Action Generation mechanism: the model first predicts 2D bounding boxes of the target object for both synthetic and web data, and additionally generates the grasp pose and chunked actions for synthetic data.
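A minimal PyTorch-style sketch of how such a progressive objective could be wired up is given below. The `vlm_backbone` and `action_expert` interfaces and the batch fields are hypothetical, and the flow-matching term follows the standard linear-interpolation formulation; this is a sketch of the idea, not the released implementation.

```python
import torch
import torch.nn.functional as F

def training_step(batch, vlm_backbone, action_expert):
    """One co-training step over mixed web grounding data and synthetic action data.

    `vlm_backbone` and `action_expert` are hypothetical modules; each batch is
    assumed to come from a single data source (web grounding or SynGrasp-1B).
    """
    # Stage 1 (all data): autoregressively predict the target object's 2D
    # bounding box as tokens, conditioned on images and the instruction.
    hidden, bbox_logits = vlm_backbone(batch["images"], batch["instruction"])
    loss = F.cross_entropy(bbox_logits.flatten(0, 1), batch["bbox_tokens"].flatten())

    if batch["has_action_labels"]:  # synthetic SynGrasp-1B data only
        # Stage 2: continue the autoregressive chain with grasp-pose tokens.
        grasp_logits = vlm_backbone.decode_grasp(hidden)
        loss = loss + F.cross_entropy(grasp_logits.flatten(0, 1),
                                      batch["grasp_tokens"].flatten())

        # Stage 3: the flow-matching action expert learns the velocity field
        # that carries Gaussian noise to the ground-truth action chunk.
        a1 = batch["action_chunk"]                            # (B, T, action_dim)
        a0 = torch.randn_like(a1)                             # noise endpoint
        t = torch.rand(a1.shape[0], 1, 1, device=a1.device)   # flow time in [0, 1]
        a_t = (1 - t) * a0 + t * a1                           # linear interpolation path
        v_pred = action_expert(a_t, t, hidden)                # predicted velocity
        loss = loss + ((v_pred - (a1 - a0)) ** 2).mean()      # flow-matching regression

    return loss
```

At inference time, the same ordering would let the backbone emit the bounding box and grasp pose as intermediate reasoning before the action expert integrates its learned velocity field into a chunk of actions for closed-loop execution.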
If you have any questions, please feel free to contact Shengliang Deng at sldeng@cs.hku.hk and Mi Yan at dorisyan@pku.edu.cn.