Despite extreme sample inefficiency, on-policy reinforcement learning, a.k.a. policy gradients, has become a fundamental tool in decision-making problems. With recent advances in GPU-driven simulation, the ability to collect large amounts of data for RL training has scaled exponentially. However, we show that current RL methods, e.g., PPO, fail to reap the benefits of parallelized environments beyond a certain point, and their performance saturates. To address this, we propose a new on-policy RL algorithm that can effectively leverage large-scale environments by splitting them into chunks and fusing them back together via importance sampling. Our algorithm, termed SAPG, shows significantly higher performance across a variety of challenging environments where vanilla PPO and other strong baselines fail to achieve high performance.
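To make the "split and aggregate" idea concrete, below is a minimal sketch of a PPO-style update in which one chunk of parallel environments provides standard on-policy data, while the remaining chunks are folded into the same update through importance sampling. The policy class, loss details, and chunking scheme here are illustrative assumptions, not the paper's actual implementation.

# Hedged sketch of a split-and-aggregate PPO update.
# All names, shapes, and the exact objective are illustrative assumptions.
import torch
import torch.nn as nn

class GaussianPolicy(nn.Module):
    def __init__(self, obs_dim, act_dim):
        super().__init__()
        self.mu = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(),
                                nn.Linear(64, act_dim))
        self.log_std = nn.Parameter(torch.zeros(act_dim))

    def log_prob(self, obs, act):
        dist = torch.distributions.Normal(self.mu(obs), self.log_std.exp())
        return dist.log_prob(act).sum(-1)

def clipped_surrogate(ratio, adv, clip=0.2):
    # Standard PPO clipped surrogate (to be maximized).
    return torch.min(ratio * adv, ratio.clamp(1 - clip, 1 + clip) * adv).mean()

def split_and_aggregate_loss(policy, on_chunk, off_chunks, clip=0.2):
    # on_chunk: (obs, act, adv, logp) collected by the current policy,
    # i.e. the usual on-policy PPO data from one chunk of environments.
    # off_chunks: list of (obs, act, adv, logp_behavior) tuples collected by
    # the remaining chunks under their own behavior policies; their
    # contribution is re-weighted via importance sampling.
    obs, act, adv, logp_old = on_chunk
    ratio = (policy.log_prob(obs, act) - logp_old).exp()
    loss = -clipped_surrogate(ratio, adv, clip)
    for obs, act, adv, logp_behavior in off_chunks:
        # Importance ratio w.r.t. the behavior policy that generated the chunk.
        ratio = (policy.log_prob(obs, act) - logp_behavior).exp()
        loss = loss - clipped_surrogate(ratio, adv, clip)
    return loss / (1 + len(off_chunks))

In practice, the parallel environments would be partitioned once (for example, a few chunks of a thousand environments each), with the extra chunks periodically re-synchronized to the main policy; those details are omitted in this sketch.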
A fairly complex task involving a 7-DoF arm and a 16-DoF hand. It is challenging because the target pose may require picking up the block in a particular way. The difficulty is further increased by the fact that the task must be completed consecutively, which requires in-hand adjustment of the grip.
PPO struggles to reach the target within a tolerable margin and also keeps dropping the block being manipulated. SAPG, on the other hand, is able to perform the task maneuvers consecutively and effectively.
This task is even more complex due to the addition of another 23-DoF arm-and-hand setup. The difficulty is increased further because some target poses may require transferring the block between the two arms.
Again, PPO has trouble reaching the exact orientation and also keeps dropping the block being manipulated. SAPG is able to perform the task maneuvers consecutively and effectively, robustly transferring the block between the arms when needed.
This is a simpler version of the Two Arms Reorientation task, where we need to reach a target position instead of a full pose. Here too, PPO performs much worse than SAPG.
PCA-based analysis
MLP reconstruction-based analysis
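The two headings above refer to analyses whose figures are not reproduced here, so the exact procedure is not spelled out in this text. Purely as an illustration, the sketch below shows one common way such analyses are done, under the assumption that they compare the diversity of states visited by different methods: counting how many principal components are needed to explain most of the variance of the visited states, and measuring how well a small bottlenecked MLP can reconstruct them. All function names, data shapes, and metrics are hypothetical.

# Illustrative sketch only; assumes `states` is an (N, d) array of visited states.
import numpy as np
import torch
import torch.nn as nn
from sklearn.decomposition import PCA

def pca_coverage(states, var_threshold=0.95):
    # Number of principal components needed to explain `var_threshold`
    # of the variance; more components is read as broader state coverage.
    pca = PCA().fit(states)
    cum_var = np.cumsum(pca.explained_variance_ratio_)
    return int(np.searchsorted(cum_var, var_threshold) + 1)

def mlp_reconstruction_error(states, hidden=32, epochs=200, seed=0):
    # Train a small MLP autoencoder with a narrow bottleneck; the mean
    # reconstruction error is used as a rough proxy for state diversity.
    torch.manual_seed(seed)
    x = torch.as_tensor(states, dtype=torch.float32)
    dim = x.shape[1]
    model = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(),
                          nn.Linear(hidden, 8), nn.ReLU(),
                          nn.Linear(8, hidden), nn.ReLU(),
                          nn.Linear(hidden, dim))
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(epochs):
        opt.zero_grad()
        loss = ((model(x) - x) ** 2).mean()
        loss.backward()
        opt.step()
    return float(loss.detach())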
@inproceedings{sapg2024,
  title     = {SAPG: Split and Aggregate Policy Gradients},
  author    = {Singla, Jayesh and Agarwal, Ananye and Pathak, Deepak},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning (ICML 2024)},
  series    = {Proceedings of Machine Learning Research},
  publisher = {PMLR},
  address   = {Vienna, Austria},
  month     = {July},
  year      = {2024},
}