We develop methods for coordinating many agents in collective autonomous air mobility systems (CAAMS), such as urban air mobility and drone-based firefighting. We specifically aim to encourage emergent cooperation in CAAMS, accounting for agent heterogeneity (e.g., individual objectives and capabilities) and the overall integrated system (e.g., vehicles and their interactions with operators and services).

This work is funded in part by NASA 80NSSC23M0221 and ONR N00014-20-1-2249.

Predicting social cost of self-interested agents

High-level view of our proposed approach for understanding the inefficiency of Markov game agents. Our key idea is to estimate the inefficiency of self-interested agents with respect to a social objective using a state-dependent formulation of price of anarchy.

This work explores ways to evaluate and predict the social cost induced by agents acting according to their own selfish objectives. Our approach leverages value functions from policies optimized for the social objective, modeled as a multi-agent Markov decision process (MMDP), and from policies optimized for individual agent objectives, modeled as a Markov game (MG), to predict a social cost metric for any given state. See our 2025 AIAA SciTech Forum paper for more details.
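As a rough illustration of the state-dependent price-of-anarchy idea, the sketch below compares a value function for self-interested (Markov game) agents against one for a socially optimal (MMDP) policy at a single state. The function name, the dictionary-based value functions, and the exact ratio form are illustrative assumptions, not the paper's implementation; value functions here are cost-style (lower is better).

```python
# Hedged sketch: a state-dependent price-of-anarchy estimate.
# V_mg[s]   : expected social cost from state s under self-interested (MG) policies
# V_mmdp[s] : optimal expected social cost from state s under a social (MMDP) policy
# All names and the tabular representation are illustrative assumptions.

def state_price_of_anarchy(V_mg: dict, V_mmdp: dict, state) -> float:
    """Ratio of self-interested social cost to optimal social cost at `state`.

    A value of 1.0 means self-interested behavior is socially optimal here;
    larger values indicate inefficiency at this particular state.
    """
    return V_mg[state] / V_mmdp[state]

# Toy two-state example:
V_mg = {"s0": 12.0, "s1": 4.0}
V_mmdp = {"s0": 10.0, "s1": 4.0}
print(state_price_of_anarchy(V_mg, V_mmdp, "s0"))  # 1.2: ~20% inefficiency at s0
print(state_price_of_anarchy(V_mg, V_mmdp, "s1"))  # 1.0: no inefficiency at s1
```

Making the metric a function of state, rather than a single worst-case ratio over the whole game, is what lets it flag where in the state space selfish behavior becomes costly.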


A survey of applications of inverse reinforcement learning in aviation

Our work was motivated by the finding that the use of IRL in aviation has not followed trends seen in other domains, such as autonomous vehicles. This figure shows the number of IRL papers published across engineering and specifically in aerospace engineering. Data were retrieved from Dimensions, a database of research articles indexed via Crossref, PubMed, PubMed Central, arXiv.org, and more than 160 publishers directly. Papers counted here include those containing the key phrase “inverse reinforcement learning.” Aerospace engineering papers were filtered using Dimensions’ research area subcategory “Aerospace Engineering.”

This work surveys current applications of inverse reinforcement learning (IRL) in aviation. We also identify potential challenges of using IRL for aviation, which may explain its current limited use within the field, and identify potential future applications of IRL for aviation. See our 2025 AIAA SciTech Forum paper for more details.
