Multi-Domain Systems: A Deeper Dive

Multi-Domain Systems

Our MDS team is developing new theories, algorithms, and models to support multi-domain operations. Our technology addresses a wide range of applications, from logistics, tactics, and strategy to operations across the ground, air, maritime, space, and cyber domains. We are building the foundation for innovative research in the next generation of autonomous control theory and complex adaptive systems.

Fully Autonomous Swarms

Our autonomy group develops distributed Command and Control (C2) systems for fully autonomous swarms. We leverage Reinforcement Learning, Transfer Learning, Neuro-Symbolic Reasoning, Emergent Behavior with Localized Sensing, and Biologically-based Models to develop swarm tactics. Our tools automatically compose best-in-class tactics for complex scenarios.

Projects include:

  • ASC-SIM: Autonomous System Control via Social Insect Models
  • GAIN: Generalizing Agent Intelligence for Command and Control
  • ULTRA: Urban Logistics Testing and Reasoning Application
  • MFA: Mission Focused Autonomy
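The idea of emergent swarm behavior from localized sensing can be sketched with a minimal, boids-style model: each agent senses only neighbors within a fixed radius and applies simple cohesion and separation rules, from which group-level structure emerges. The scenario, radius, and gain values below are illustrative assumptions, not parameters from our projects.

```python
import math

def step(positions, radius=5.0, cohesion=0.05, separation=0.15):
    """Advance each agent one step using only locally sensed neighbors."""
    new_positions = []
    for i, (xi, yi) in enumerate(positions):
        neighbors = [(xj, yj) for j, (xj, yj) in enumerate(positions)
                     if j != i and math.hypot(xj - xi, yj - yi) < radius]
        dx = dy = 0.0
        if neighbors:
            # Cohesion: steer toward the centroid of sensed neighbors.
            cx = sum(x for x, _ in neighbors) / len(neighbors)
            cy = sum(y for _, y in neighbors) / len(neighbors)
            dx += cohesion * (cx - xi)
            dy += cohesion * (cy - yi)
            # Separation: steer away from neighbors that are too close.
            for xj, yj in neighbors:
                d = math.hypot(xj - xi, yj - yi)
                if 0 < d < radius / 3:
                    dx += separation * (xi - xj) / d
                    dy += separation * (yi - yj) / d
        new_positions.append((xi + dx, yi + dy))
    return new_positions

# Four agents on a square contract toward a stable, evenly spaced cluster.
swarm = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0), (4.0, 4.0)]
for _ in range(50):
    swarm = step(swarm)
```

No agent has global knowledge of the swarm; the clustering arises purely from the two local rules, which is the sense in which the tactic is "emergent."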

Simulation to Deployment Loop

We develop operationally relevant, robust simulation environments and scenarios to: (1) support the development and training of software agents; (2) test and evaluate emerging AI, ML, and other technologies; and (3) provide early verification and validation (V&V) to accelerate operational deployment. For example, we combine the industry-standard Robot Operating System (ROS) with cutting-edge reinforcement learning techniques to create flexible, customizable, future-proof AI training environments capable of producing AI agents ready for transfer into real-world systems. We also use a variety of online gaming environments, coupled with reinforcement learning, to develop optimal tactical plans for multi-domain environments; these environments allow our bots to play against humans and against other bots.

Projects include:

  • The CURE: The Counter-UAS Reinforcement learning Environment
  • OpTL: Operational Transfer Learning
  • MUDCRANE: Multi-domain C2 RL Training Environment
  • ARLES: Adaptive Reinforcement Learning Environment for Simulation
  • MARSHAL: Mission Asset Recommendation System with Historical Ability Learning
  • CASES: Cyber APT Scenarios for Enterprise-level Systems
  • SMAC: Scenarios for Mission Assurance in the Clouds
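Training environments of this kind typically expose a gym-style reset/step interface that a learning agent interacts with. The sketch below shows that interface on a toy one-dimensional pursuit task; the environment, its names, and the random stand-in policy are illustrative assumptions, not APIs from the projects above.

```python
import random

class PursuitEnv:
    """Toy 1-D pursuit task: the agent moves left/right to reach a target."""

    def __init__(self, size=10):
        self.size = size
        self.reset()

    def reset(self):
        self.agent, self.target = 0, self.size - 1
        return self._obs()

    def _obs(self):
        return (self.agent, self.target)

    def step(self, action):
        # action: 0 = move left, 1 = move right
        move = 1 if action == 1 else -1
        self.agent = max(0, min(self.size - 1, self.agent + move))
        done = self.agent == self.target
        reward = 1.0 if done else -0.01   # small step cost rewards speed
        return self._obs(), reward, done

# Roll out one episode with a random policy (a stand-in for a trained agent).
env = PursuitEnv()
obs, total, done = env.reset(), 0.0, False
for _ in range(10_000):                   # cap in case the random policy stalls
    action = random.choice([0, 1])
    obs, reward, done = env.step(action)
    total += reward
    if done:
        break
```

The same interface works whether the backend is this toy loop, a ROS-based simulator, or a gaming environment, which is what makes the simulation-to-deployment handoff possible: the agent code does not change, only the environment behind reset/step does.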

Imposing Complexity and Deception

We develop graph-based mathematical models that capture military decision making. We measure the complexity of the graph model for different scenarios and options, and map those measures to decision-space complexity. We also explore the vulnerabilities and defensive options of networks and of the AI/ML algorithms operating on those networks.

Projects include:

  • LINKS: Lattice, Network, and Semantic Analysis for COA Complexity Imposition
  • C2AIE: Command and Control for Artificial Intelligence Effects
  • GRND: Graph Node Role Dynamics
  • OpTC: Operationally Transparent Cyber
  • PRIMO: Policy and Resiliency Impact on Mission Outcome
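One simple way to see how a graph model can quantify decision-space complexity: represent courses of action as a directed acyclic graph whose edges are available options, and count the distinct decision paths from start to objective. The graph and the path-count metric below are illustrative assumptions for the sketch, not the group's actual model.

```python
def count_paths(graph, start, goal, _memo=None):
    """Count distinct decision paths from start to goal in a DAG."""
    if _memo is None:
        _memo = {}
    if start == goal:
        return 1
    if start not in _memo:
        _memo[start] = sum(count_paths(graph, nxt, goal, _memo)
                           for nxt in graph.get(start, []))
    return _memo[start]

# Hypothetical decision graph: each edge is an option open to the commander.
decisions = {
    "start":   ["recon", "advance"],
    "recon":   ["advance", "hold"],
    "advance": ["objective"],
    "hold":    ["objective"],
}
complexity = count_paths(decisions, "start", "objective")  # 3 distinct paths
```

Under this metric, adding options (edges) multiplies the number of paths an adversary must reason over, which is the sense in which complexity can be deliberately imposed on an opponent's decision space.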