Imagination Augmented Agents for Deep Reinforcement Learning

Introduction

Model-free reinforcement learning, where raw observations are mapped directly to values or actions, has been successfully applied to a wide range of domains. However, this approach usually requires large amounts of training data, and the resulting policies do not readily generalize to novel tasks in the same environment, since they lack the behavioral flexibility constitutive of general intelligence. Model-based RL aims to address these shortcomings by endowing agents with a model of the world, synthesized from past experience. Yet in complex domains for which a simulator is not available to the agent, model-based agents that rely on standard planning methods usually suffer from model errors introduced by function approximation. These errors compound during planning, causing over-optimism and poor agent performance.

The paper seeks to address this shortcoming by proposing Imagination-Augmented Agents (I2A), which use approximate environment models by "learning to interpret" their imperfect predictions. Without making any assumptions about the structure of the environment model or its possible imperfections, the approach learns, end to end, to extract useful knowledge from model simulations. This allows the agent to benefit from model-based imagination without the pitfalls of conventional model-based planning.
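To make the idea concrete, below is a minimal, illustrative PyTorch sketch of this scheme: imagined rollouts from a learned (and possibly imperfect) environment model are summarized by a learned rollout encoder and combined with a model-free path to produce the policy and value. All module names, layer sizes, and the random rollout policy are assumptions made here for clarity, not the authors' implementation.

```python
# Illustrative sketch only: a learned environment model, a rollout encoder
# that "learns to interpret" imagined trajectories, and a model-free path
# whose features are aggregated with the imagination codes.
import torch
import torch.nn as nn


class EnvModel(nn.Module):
    """Learned transition model: predicts the next observation and reward."""
    def __init__(self, obs_dim, n_actions, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + n_actions, hidden), nn.ReLU(),
            nn.Linear(hidden, obs_dim + 1),  # next observation + scalar reward
        )
        self.n_actions = n_actions

    def forward(self, obs, action):
        a = nn.functional.one_hot(action, self.n_actions).float()
        out = self.net(torch.cat([obs, a], dim=-1))
        return out[..., :-1], out[..., -1]  # next_obs, reward


class I2A(nn.Module):
    def __init__(self, obs_dim, n_actions, rollout_len=3, hidden=128):
        super().__init__()
        self.env_model = EnvModel(obs_dim, n_actions, hidden)
        self.rollout_len = rollout_len
        self.n_actions = n_actions
        # Rollout encoder: summarizes an imagined trajectory into a code.
        self.encoder = nn.LSTM(obs_dim + 1, hidden, batch_first=True)
        self.model_free = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        # One imagined rollout per action; concatenate with model-free features.
        self.policy = nn.Linear(hidden * (n_actions + 1), n_actions)
        self.value = nn.Linear(hidden * (n_actions + 1), 1)

    def imagine(self, obs, first_action):
        """Roll the environment model forward, acting randomly after the first
        step (a stand-in for a learned rollout policy)."""
        frames, o, a = [], obs, first_action
        for _ in range(self.rollout_len):
            o, r = self.env_model(o, a)
            frames.append(torch.cat([o, r.unsqueeze(-1)], dim=-1))
            a = torch.randint(0, self.n_actions, a.shape, device=obs.device)
        traj = torch.stack(frames, dim=1)   # (batch, rollout_len, obs_dim + 1)
        _, (h, _) = self.encoder(traj)
        return h[-1]                        # (batch, hidden)

    def forward(self, obs):
        codes = [self.imagine(obs, torch.full((obs.shape[0],), a,
                                              dtype=torch.long,
                                              device=obs.device))
                 for a in range(self.n_actions)]
        feats = torch.cat(codes + [self.model_free(obs)], dim=-1)
        return self.policy(feats), self.value(feats)


if __name__ == "__main__":
    agent = I2A(obs_dim=8, n_actions=4)
    logits, value = agent(torch.randn(2, 8))
    print(logits.shape, value.shape)  # torch.Size([2, 4]) torch.Size([2, 1])
```

The key design point this sketch tries to convey is that the environment model's predictions are never used for explicit planning; they are simply additional learned features, so the agent can learn to downweight or reinterpret them when the model is inaccurate.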

The I2A (Imagination-Augmented Agents) Architecture

(Not Finished)

Experiments

(Not Finished)

Conclusions

(Not Finished)

Comments

(Not Finished)

References

(Not Finished)