We work on hierarchical reinforcement learning (HRL) methods to create autonomous agents that can efficiently solve complex tasks.

This work was funded in part by DARPA FA8750-18C-0137 and ONR N00014-20-1-2249.

Bioinspired discovery of hierarchical subtasks

Value maps for two learned sub-policies. The object-based option is learned from features specialized for dynamic objects; it appears to drive the blue agent to block the enemy agent (E) from reaching the blue flag (blue star). The spatial-based option is learned from spatial features and is independent of the enemy.

Inspired by recent discoveries in neuroscience, this work explores the use of feature specialization and clustering to decompose environments into a useful task subspace for HRL. See our paper at the ALA Workshop at AAMAS 2022, which presents our approach, Specialized Neurons and Clustering (SNAC), for more details.
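As a rough illustration of the clustering step only, the sketch below groups specialized state features into clusters that could serve as candidate subtask regions, one option per cluster. This is a generic k-means sketch under our own assumptions (the function names, data, and the choice of k-means are all hypothetical), not the clustering procedure described in the SNAC paper:

```python
import numpy as np

def cluster_features(features, k, iters=50, seed=0):
    """Generic k-means over state features.

    Hypothetical sketch: each resulting cluster is a candidate
    subtask region that an HRL option could be trained to reach.
    Not the actual SNAC procedure (see the paper for that).
    """
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), k, replace=False)]
    for _ in range(iters):
        # Assign each feature vector to its nearest center.
        dists = np.linalg.norm(features[:, None] - centers[None], axis=-1)
        labels = dists.argmin(axis=1)
        # Recompute centers; keep the old center if a cluster empties.
        centers = np.array([
            features[labels == c].mean(axis=0) if (labels == c).any() else centers[c]
            for c in range(k)
        ])
    return centers, labels

# Hypothetical usage: 2-D features for states drawn around three
# distinct regions of a toy environment, clustered into 3 subtasks.
rng = np.random.default_rng(1)
states = np.vstack([
    rng.normal(loc, 0.1, size=(20, 2))
    for loc in ([0.0, 0.0], [3.0, 0.0], [0.0, 3.0])
])
centers, labels = cluster_features(states, k=3)
```

In an HRL pipeline, each cluster label would index a sub-policy, and the high-level policy would choose among them.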

Adaptation in HRL

This work explores the benefits of hierarchical architectures in RL for adapting to new environment dynamics. The video shows our approach rapidly adapting to new enemies in a game of capture-the-flag. See our ICRA 2022 paper for more details.