Laplacian using Abstract State Transition Graphs: A Framework for Skill Acquisition
- Matheus R. F. Mendonça,
- Artur Ziviani,
- André da Motta Salles Barreto
2019 Brazilian Conference on Intelligent Systems | Published by IEEE
Automatic definition of macro-actions for Reinforcement Learning (RL) is a way of breaking a large problem into smaller sub-problems. Macro-actions are known to boost the agent’s learning process, leading to better performance. One recent approach, called the Laplacian Framework, uses the Proto-Value Functions (PVFs) of the State Transition Graph (STG) associated with an RL problem in order to create options. For large problems, however, the STG is unavailable. In this context, we propose an improvement upon the Laplacian Framework for large problems, called Laplacian using Abstract State Transition Graphs (LAST-G), which uses an Abstract State Transition Graph (ASTG), a reduced version of the original STG. This approach allows the creation of intra-option policies for the discovered options by using the ASTG as a model of the environment. Our experimental results show that the proposed framework is capable of: (i) effectively creating purposeful options; and (ii) successfully executing the identified options.
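The core operation behind the Laplacian Framework is the extraction of PVFs, i.e., the smoothest eigenvectors of the graph Laplacian of the STG (or, in LAST-G, of the reduced ASTG). The sketch below is a minimal Python illustration of that computation, assuming the graph is given as a symmetric adjacency matrix; all names and the toy graph are illustrative, not from the paper.

```python
import numpy as np

def proto_value_functions(adjacency: np.ndarray, k: int) -> np.ndarray:
    """Return the k smoothest PVFs: eigenvectors of the normalized graph
    Laplacian L = I - D^{-1/2} A D^{-1/2}, ordered by ascending eigenvalue."""
    degrees = adjacency.sum(axis=1)
    # Guard against isolated nodes (degree zero) before taking 1/sqrt.
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(degrees, 1e-12)))
    laplacian = np.eye(adjacency.shape[0]) - d_inv_sqrt @ adjacency @ d_inv_sqrt
    eigenvalues, eigenvectors = np.linalg.eigh(laplacian)  # ascending order
    return eigenvectors[:, :k]

# Toy 4-state chain graph: states 0-1-2-3 connected in a line.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

pvfs = proto_value_functions(A, k=2)
print(pvfs)  # each column is one PVF over the 4 states
```

In the Laplacian Framework these eigenvectors are the signal from which options are derived; per the abstract, LAST-G's contribution is to perform this analysis on the smaller ASTG, which also serves as an environment model for building the intra-option policies.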