SAPG: Split and Aggregate Policy Gradients

Jayesh Singla *                     Ananye Agarwal *                     Deepak Pathak
Carnegie Mellon University
ICML 2024

Abstract

Despite extreme sample inefficiency, on-policy reinforcement learning, aka policy gradients, has become a fundamental tool in decision-making problems. With the recent advances in GPU-driven simulation, the ability to collect large amounts of data for RL training has scaled exponentially. However, we show that current RL methods, e.g. PPO, fail to fully exploit the benefits of parallelized environments beyond a certain point, and their performance saturates. To address this, we propose a new on-policy RL algorithm that can effectively leverage large-scale environments by splitting them into chunks and fusing them back together via importance sampling. Our algorithm, termed SAPG, shows significantly higher performance across a variety of challenging environments where vanilla PPO and other strong baselines fail to achieve high performance. Our code will be open-sourced upon acceptance.
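The abstract describes the core mechanism at a high level: the parallel environments are split into chunks, and data from the different chunks is aggregated into a single policy update via importance sampling. The snippet below is a minimal sketch of what such an aggregation could look like, assuming a PPO-style clipped surrogate in which each chunk's data is re-weighted against the behavior policy that collected it. The chunk layout, the policy.log_prob interface, and all names here are illustrative assumptions, not the authors' implementation.

import torch

def clipped_surrogate(new_logp, behavior_logp, advantages, ratio_clip=0.2):
    # PPO-style clipped surrogate loss; the ratio doubles as an importance
    # weight when behavior_logp comes from a different chunk's policy.
    ratio = torch.exp(new_logp - behavior_logp)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1 - ratio_clip, 1 + ratio_clip) * advantages
    return -torch.min(unclipped, clipped).mean()

def aggregate_chunk_losses(policy, chunks, ratio_clip=0.2):
    # `chunks` is a list of dicts with keys: obs, actions, advantages, behavior_logp.
    # Each chunk collected its data under its own behavior policy; its contribution
    # to the update is importance-weighted through the ratio above.
    total = 0.0
    for chunk in chunks:
        new_logp = policy.log_prob(chunk["obs"], chunk["actions"])
        total = total + clipped_surrogate(
            new_logp, chunk["behavior_logp"], chunk["advantages"], ratio_clip
        )
    return total / len(chunks)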



Performance on Complex Robotic Environments

Our method outperforms existing state-of-the-art methods on a variety of challenging robotic tasks in simulation. We observe faster policy improvement as well as higher asymptotic performance across five different hard environments.


Qualitative performance

Qualitatively, we observe that the policy learned by our method exhibits a high success rate in challenging environments.

Allegro Kuka Regrasping

Allegro Kuka Reorientation

Measuring diversity

We also quantitatively analyze the diversity of states visited by our method during training, using two different methods. The first uses PCA and plots reconstruction error against the number of PCA components used. The second trains a small MLP to reconstruct inputs and evaluates its training error on batches of states collected by different methods.

PCA based analysis
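A hedged sketch of the PCA-based measure described above, assuming visited states are stacked into an (N, D) array: fit PCA with an increasing number of components and record the reconstruction error at each size. A more diverse set of states needs more components to reach a given error, so a slower-decaying curve indicates higher diversity. The sklearn-based pipeline and variable names are illustrative, not the exact evaluation code.

import numpy as np
from sklearn.decomposition import PCA

def pca_reconstruction_curve(states, max_components=50):
    # states: (N, D) array of visited states.
    # Returns the mean squared reconstruction error for k = 1..max_components.
    errors = []
    for k in range(1, max_components + 1):
        pca = PCA(n_components=k).fit(states)
        recon = pca.inverse_transform(pca.transform(states))
        errors.append(np.mean((states - recon) ** 2))
    return np.array(errors)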


MLP reconstruction based analysis
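A hedged sketch of the MLP-reconstruction measure, assuming a small network trained to reproduce its input: batches of states drawn from a more diverse distribution are harder for a small network to fit, so a higher residual training error indicates higher diversity. The architecture and hyperparameters below are illustrative assumptions, not the exact evaluation code.

import torch
import torch.nn as nn

def mlp_reconstruction_error(states, hidden=64, steps=1000, lr=1e-3):
    # states: (N, D) float tensor of visited states.
    # Trains a small MLP to reconstruct its input and returns the final
    # mean squared training error on this batch.
    dim = states.shape[1]
    net = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim))
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((net(states) - states) ** 2).mean()
        loss.backward()
        opt.step()
    return loss.item()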


BibTex

@inproceedings{sapg2024,
  title     = {SAPG: Split and Aggregate Policy Gradients},
  author    = {Singla, Jayesh and Agarwal, Ananye and Pathak, Deepak},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning (ICML 2024)},
  year      = {2024},
  series    = {Proceedings of Machine Learning Research},
  address   = {Vienna, Austria},
  month     = {July},
  publisher = {PMLR},
}