We work on multi-agent reinforcement learning (MARL) methods to create teams of autonomous agents that can work together to solve challenging tasks, such as search-and-rescue, traffic control, autonomous driving, and warehouse robotics.

Communication-aware learning with mission constraints

This work develops theory and algorithms that enable autonomous agents to coordinate in a way that satisfies mission requirements under limited communication capabilities.

Improving decentralized training with successor features

This work leverages successor features (SFs) to disentangle an individual agent's impact on the global value function from that of all other agents, thus enabling better training of decentralized agents. The video shows an example of gameplay from the StarCraft Multi-Agent Challenge (SMAC) in which the red agents learned to surround the blue agents and win the game, despite a numbers disadvantage (27 vs. 30 agents). For details, see our AAMAS paper presenting the approach, DISentangling Successor features for Coordination (DISSC).
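For background, the approach builds on the standard successor-feature identity from the SF literature (shown here in its generic single-policy form, not necessarily the exact formulation used in the paper): if the reward decomposes linearly as $r(s,a) = \phi(s,a)^\top w$ for features $\phi$ and weights $w$, then the action-value function factors through the successor features $\psi$:

\[
\psi^\pi(s,a) = \mathbb{E}_\pi\!\left[\sum_{t=0}^{\infty} \gamma^t \phi(s_t, a_t) \,\middle|\, s_0 = s,\ a_0 = a\right],
\qquad
Q^\pi(s,a) = \psi^\pi(s,a)^\top w.
\]

Because the value is linear in $\psi$, separating out the components of the successor features attributable to a single agent gives a handle on that agent's contribution to the global value, which is the disentangling that DISSC performs.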