Our project, “Learning from Existing Systems for Emergent Coordination in Future Air Mobility,” has been selected for funding by NASA’s Transformational Tools and Technologies (TTT) program!

Advanced Air Mobility (AAM) encompasses a wide range of use cases, such as urban air mobility (UAM) and firefighting, in which vehicles, operators, and services coordinate with each other to satisfy mission objectives. Distributed autonomy, in which autonomous vehicles rely on vehicle-to-vehicle communications to coordinate, is a key enabler for this vision, as the desired scale of operations will surpass the capabilities of current operational architectures, which require centralized coordination and significant human oversight. This project aims to create methods for dynamic coordination of collective autonomous air mobility systems (CAMS) by drawing insights from existing collective autonomy systems (eCAS) in other domains involving mobility and human-autonomy interactions. Our key idea is a hybrid approach that leverages: (1) multi-agent inverse reinforcement learning (MA-IRL) to extract general mechanisms for emergent behavior from eCAS, and (2) multi-agent reinforcement learning (MARL) to incorporate those mechanisms into algorithms for dynamic coordination in CAMS.
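The two-stage idea (first infer what drives behavior in an existing system, then train a policy against what was inferred) can be sketched on a toy problem. The snippet below is only an illustration, not the project's method: it recovers a crude reward from expert demonstrations via state-visitation frequencies (a heuristic stand-in for MA-IRL), then trains a tabular Q-learner on that inferred reward (a single-agent stand-in for MARL). All states, demonstrations, and hyperparameters are invented for the example.

```python
import random

# Toy 1-D chain MDP: states 0..4, actions -1 (left) / +1 (right).
N_STATES = 5
ACTIONS = [-1, +1]

def step(s, a):
    """Move along the chain, clamped to the endpoints."""
    return min(max(s + a, 0), N_STATES - 1)

# Stage 1 (stand-in for MA-IRL): infer a reward from demonstrations.
# The expert walks to state 4 and stays there; we use demo state-visitation
# frequencies as a crude inferred reward (a heuristic, not a full IRL method).
demos = [[0, 1, 2, 3, 4, 4, 4, 4] for _ in range(10)]
visits = [0.0] * N_STATES
for traj in demos:
    for s in traj:
        visits[s] += 1
inferred_reward = [v / sum(visits) for v in visits]

# Stage 2 (stand-in for MARL): tabular Q-learning on the inferred reward.
random.seed(0)
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, eps = 0.5, 0.9, 0.2
for _ in range(2000):
    s = random.randrange(N_STATES)
    for _ in range(20):
        # Epsilon-greedy action selection over the two actions.
        if random.random() < eps:
            ai = random.randrange(2)
        else:
            ai = max((0, 1), key=lambda i: Q[s][i])
        s2 = step(s, ACTIONS[ai])
        Q[s][ai] += alpha * (inferred_reward[s2] + gamma * max(Q[s2]) - Q[s][ai])
        s = s2

# Greedy action per state; 1 means "move right" toward the demonstrated goal.
policy = [max((0, 1), key=lambda i: Q[s][i]) for s in range(N_STATES)]
print(policy)
```

The learned greedy policy moves right from every non-goal state, matching the demonstrated behavior even though the true reward was never given, only inferred.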
This project is a collaboration with Dr. Max Li from the University of Michigan.