Option Encoder: A Framework for Discovering a Policy Basis in Reinforcement Learning

Published in "ECML PKDD 2020. Lecture Notes in Computer Science, vol 12458"
Arjun Manoharan, Rahul Ramesh, B Ravindran

Option discovery and skill acquisition frameworks are integral to the functioning of a hierarchically organized reinforcement learning agent. However, such techniques often yield a large number of options or skills, which can be represented more succinctly by filtering out redundant information. Such a reduction can decrease the required computation while also improving performance on a target task. To compress an array of option policies, we attempt to find a policy basis that accurately captures the set of all options. In this work, we propose the Option Encoder, an auto-encoder-based framework with intelligently constrained weights, which helps discover a collection of basis policies. The policy basis can be used as a proxy for the original set of skills in a suitable hierarchically organized framework. We demonstrate the efficacy of our method on a collection of grid-worlds by evaluating the obtained policy basis on downstream tasks, and present qualitative results on the DeepMind Lab task.
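To make the core idea concrete, the sketch below is a minimal, simplified illustration (not the paper's exact architecture or constraints): given N tabular option policies, it learns K basis policies and per-option mixture weights, constrained to remain valid distributions, so that each original option is approximately a convex combination of the basis policies. The function name `fit_policy_basis`, the tabular (state, action) representation, and the use of softmax constraints with a mean-squared reconstruction loss are assumptions made for illustration.

```python
# A minimal sketch (assumptions noted above, not the paper's exact Option Encoder):
# compress N tabular option policies into K basis policies by learning a mixture
# matrix W (N x K, rows on the simplex) and basis policies B (K x S x A, valid
# per-state action distributions) that reconstruct the originals.
import torch

def fit_policy_basis(option_policies, k, steps=2000, lr=1e-2):
    """option_policies: tensor of shape (N, S, A) holding per-state action probabilities."""
    n, s, a = option_policies.shape
    w_logits = torch.randn(n, k, requires_grad=True)     # mixture weights (pre-softmax)
    b_logits = torch.randn(k, s, a, requires_grad=True)  # basis policies (pre-softmax)
    opt = torch.optim.Adam([w_logits, b_logits], lr=lr)
    for _ in range(steps):
        w = torch.softmax(w_logits, dim=1)                 # each option is a convex mix of bases
        basis = torch.softmax(b_logits, dim=2)             # each basis row is an action distribution
        recon = torch.einsum('nk,ksa->nsa', w, basis)      # reconstructed option policies
        loss = torch.mean((recon - option_policies) ** 2)  # reconstruction error
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.softmax(b_logits.detach(), dim=2), torch.softmax(w_logits.detach(), dim=1)

# Example: 8 random option policies over 25 states and 4 actions, compressed to 3 bases.
if __name__ == "__main__":
    policies = torch.softmax(torch.randn(8, 25, 4), dim=2)
    basis, mix = fit_policy_basis(policies, k=3)
    print(basis.shape, mix.shape)  # (3, 25, 4) and (8, 3)
```

The returned basis policies could then stand in for the original option set in a downstream hierarchical controller; the paper's framework additionally ties and constrains the encoder/decoder weights of an auto-encoder rather than fitting a plain mixture model as above.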