Multifingered Grasping Based on Multimodal Reinforcement Learning

¹University of Hamburg, ²Tsinghua University

We use reinforcement learning to train a dexterous grasping policy in simulation.

Abstract

In this work, we tackle the challenging problem of grasping novel objects with a high-DoF anthropomorphic hand-arm system. Combining fingertip tactile sensing, joint torques, and proprioception, a multimodal agent is trained in simulation to learn the finger motions and to determine when to lift an object. Binary contact information and level-based joint torques simplify transferring the learned model to the real robot. To reduce the exploration space, we first generate postural synergies by collecting a dataset covering various grasp types and applying principal component analysis. Curriculum learning is further applied to adjust and randomize the initial object pose based on the training performance. Simulation and real-robot experiments with dedicated initial grasping poses show that our method outperforms two baseline models in grasp success rate for both seen and unseen objects. This learning approach further serves as a foundation for complex in-hand manipulation based on the multi-sensory system.
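The abstract compresses three concrete techniques: low-dimensional postural synergies obtained with PCA, discretized multimodal observations for sim-to-real transfer, and a success-driven curriculum over the initial object pose. The Python sketch below illustrates each idea under stated assumptions; all array shapes, file names, thresholds, and helper names are illustrative, not the paper's exact implementation.

# Postural synergies via PCA: assuming a recorded dataset of hand joint
# configurations, the policy acts in a few-dimensional synergy space that
# is mapped back to full joint angles.
import numpy as np
from sklearn.decomposition import PCA

grasp_postures = np.load("grasp_dataset.npy")   # hypothetical (N, 20) joint-angle dataset

pca = PCA(n_components=3)                       # keep a few leading synergies
pca.fit(grasp_postures)

def synergies_to_joints(activations):
    # Map a (3,) synergy activation vector back to (20,) joint angles.
    return pca.mean_ + activations @ pca.components_

# Discretized observations: binarizing fingertip contacts and bucketing
# joint torques into levels removes the precise force magnitudes that
# differ between simulation and the real robot.
def discretize_observation(contact_forces, joint_torques, torque_levels):
    contacts = (contact_forces > 0.0).astype(np.float32)       # binary contact flags
    torque_bins = np.digitize(joint_torques, torque_levels)    # level-based torques
    return np.concatenate([contacts, torque_bins.astype(np.float32)])

# Pose curriculum: widen the randomization range of the initial object pose
# once the recent grasp success rate is high enough.
class PoseCurriculum:
    def __init__(self, max_offset=0.05, step=0.01, threshold=0.8):
        self.half_width = 0.0           # current pose-perturbation half-width (m)
        self.max_offset = max_offset
        self.step = step
        self.threshold = threshold

    def update(self, recent_success_rate):
        if recent_success_rate > self.threshold:
            self.half_width = min(self.half_width + self.step, self.max_offset)

    def sample_offset(self):
        # Random planar offset added to the object's initial position.
        return np.random.uniform(-self.half_width, self.half_width, size=2)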

Video

BibTeX

@article{liang2022,
  author  = {Liang, Hongzhuo and Cong, Lin and Hendrich, Norman and Li, Shuang and Sun, Fuchun and Zhang, Jianwei},
  journal = {IEEE Robotics and Automation Letters (RA-L)},
  title   = {Multifingered Grasping Based on Multimodal Reinforcement Learning},
  year    = {2022},
  volume  = {7},
  number  = {2},
  pages   = {1174--1181},
  doi     = {10.1109/LRA.2021.3138545}
}