Autonomy: When Trust Becomes the Real Weapon 
Part 2. ACE’s Demonstration of Human–AI Teaming, and the Challenges Ahead
January 2026 Print Edition


Dan “Animal” Javorsek
CTO of EpiSci (Applied Intuition)

Earlier this year, Dan “Animal” Javorsek, PhD, joined Applied Intuition through its acquisition of EpiSci, where he serves as CTO. He previously led DARPA’s Air Combat Evolution (ACE) program, which put human-level combat autonomy to the test. His goal was never simply to deploy AI into combat, but to prove trust at a level where humans and machines can operate together, narrowing the gap between aviation and automotive autonomy. Citing the example of the Automatic Ground Collision Avoidance System (AGCAS), which saved a pilot’s life during a training dogfight, he noted: “The technology existed as early as the 1980s, but it was deployed 30 years later because pilots did not trust it.” In other words, the evolution of trust is harder than the evolution of technology. Javorsek’s ACE story intersects with the biggest challenge in autonomous driving today: a lack of trust. The technology is already on the road, but society does not fully trust it yet. And that is precisely the mindset with which Applied Intuition approaches defense autonomy.

By Sang Min Han _ han@autoelectronics.co.kr

Defense Autonomy: When SDV Enters the Battlefield (Part 1)





The DARPA Air Combat Evolution (ACE) program, launched in 2019, aimed to achieve trustworthy, scalable, human-level autonomy in air combat. During the “International Seminar on the Advancement of Korean-Style Manned-Unmanned Teaming (MUM-T) Based on Reliable AI” held at the ROK Air Force Hotel on October 28, Javorsek emphasized that the best way to understand autonomy is not through technical data, but through stories that build empathy — and then presented real footage: an F-16 instructor pilot in a live dogfight training scenario.






 
Trust Does Not Come From Code

“The F-16 is a single-seat aircraft. If a pilot blacks out or loses control, the jet becomes uncontrollable instantly, leading to fatal consequences.”
In this case, the pilot attempted to withstand 7.8G but lost consciousness. The aircraft exceeded Mach 1.2 and was diving at a 50-degree angle toward the ground — until AGCAS intervened autonomously, recovering the aircraft and saving the pilot. Although this system could have been implemented in the mid-1980s, it wasn’t deployed until 2014, almost 30 years later.
“This wasn’t a technological limitation — it was a trust problem. Pilots simply didn’t trust the system.”
This became the starting point for the ACE program. When discussing “combat autonomy,” Javorsek argued, the core issue is not technology, but trust. For pilots and systems to operate together, humans must trust the technology.
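To make the trust problem concrete, the logic at the heart of such a system is small. Below is a minimal sketch, with invented numbers and none of the terrain modeling the real AGCAS relies on, of the last-moment check that lets the system stay out of the pilot’s way until recovery is truly necessary:

```python
import math

# Illustrative sketch only: the real AGCAS uses terrain databases and
# aircraft-specific recovery models. All values here are invented.

def time_to_impact(altitude_m: float, speed_ms: float, dive_angle_deg: float) -> float:
    """Seconds until ground impact, assuming a straight-line dive over flat terrain."""
    descent_rate = speed_ms * math.sin(math.radians(dive_angle_deg))
    return altitude_m / descent_rate if descent_rate > 0 else math.inf

def should_auto_recover(altitude_m: float, speed_ms: float,
                        dive_angle_deg: float, recovery_time_s: float = 5.0) -> bool:
    """Command an automatic fly-up only at the last possible moment,
    so the system never wrests control from a pilot who is still flying."""
    return time_to_impact(altitude_m, speed_ms, dive_angle_deg) <= recovery_time_s

# Roughly the incident above: Mach 1.2 (~410 m/s) in a 50-degree dive.
print(should_auto_recover(altitude_m=1500, speed_ms=410, dive_angle_deg=50))  # True
```

That last-moment design choice is itself a trust mechanism: a system that intervenes too early is a system pilots will switch off.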






 
Reassembling the Battlefield Puzzle

The story leads to a new concept the U.S. Air Force explored around 2018–2019: Mosaic Warfare.
Traditionally, battlefield systems are isolated “puzzle pieces” — each aircraft or asset perceives the environment through its own sensors, and pilots or operators make decisions independently. But the future battlespace is too complex and dynamic for such fragmented structures.
“We must reconfigure systems like mosaic tiles that can be flexibly composed. Each tile — weapon, sensor, platform — must retain its purpose while combining in diverse ways to create new tactical effects.”
This concept evolved from effects-based operations and cannot work without autonomy. Having flown the F-22, F-16, and F-35, Javorsek recognized that no human can manage such complexity alone. The pilot’s role must evolve from “operator” to “mission manager,” commanding multiple unmanned assets and collaborating with autonomous systems.
“We needed a new framework for trust and validation. The ACE program became that proving ground, where we trained AI using the same paradigm humans use — nurturing trust rather than simply programming it.”
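In software terms, the tile metaphor is a composition contract. The sketch below is hypothetical (the interfaces are invented, not drawn from any real program): each tile keeps its own purpose but exposes a shared interface, so sensors and effectors can be recombined into new tactical chains without rewriting either side:

```python
from dataclasses import dataclass
from typing import Protocol

# Hypothetical interfaces illustrating the mosaic idea.

@dataclass
class Track:
    target_id: str
    position: tuple[float, float, float]  # (lat, lon, alt)

class SensorTile(Protocol):
    def sense(self) -> list[Track]: ...

class EffectorTile(Protocol):
    def engage(self, track: Track) -> bool: ...

def compose_kill_chain(sensors: list[SensorTile], effector: EffectorTile) -> int:
    """Fuse tracks from any mix of sensor tiles and hand them to any
    effector tile; swapping tiles changes the tactic, not the code."""
    engaged = 0
    for sensor in sensors:
        for track in sensor.sense():
            if effector.engage(track):
                engaged += 1
    return engaged
```

The point of the mosaic is exactly this substitutability: losing one tile degrades the composition rather than breaking the puzzle.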





 
More Important Than Winning: Explainability

Dogfight scenarios are the best stage to test autonomy — a microcosm of air combat without engaging the entire battlespace.
DARPA’s AlphaDogfight Trials were launched for that exact purpose — to build trustworthy combat AI through competition between human pilots and AI agents.
“Competition brings out latent capability — for humans and AI alike. The program followed in the footsteps of IBM Deep Blue (chess), Google DeepMind AlphaGo (Go), and OpenAI Five (Dota 2) — but this one took place in the sky, in real-time combat simulation.”
Originally planned for six months, the program extended to 13 months due to COVID-19, with eight participating teams — including Aurora, Heron Systems, Lockheed Martin, and EpiSci (now part of Applied Intuition). Teams competed using government baseline algorithms, human pilots, and head-to-head AI matches.
But AlphaDogfight was not just a tournament — it was a trust-building process between humans and AI. The program then transitioned to real flight testing using the X-62 Vista testbed.
“ACE explored everything from rule-based systems to model-free, end-to-end reinforcement learning. While end-to-end approaches performed best in simulation, they struggled in real-world environments due to unpredictability and a lack of explainability. Ultimately, all teams converged on hierarchical architectures to ensure explainability and flexibility.”
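The convergence on hierarchy is easiest to see in code. Here is a toy sketch, with invented tactic names and a heuristic standing in for the learned policies: the top layer picks a human-legible tactic, the bottom layer turns it into control commands, and every output carries the label of the maneuver that produced it:

```python
def select_tactic(own: dict, bandit: dict) -> str:
    """High-level policy (a learned network in a real agent; a
    stand-in heuristic here): choose a named, explainable tactic."""
    return "energy_climb" if bandit["altitude"] > own["altitude"] else "pursue"

def low_level_controller(tactic: str) -> dict:
    """Low-level policies: each tactic maps to concrete stick and
    throttle commands, with the tactic label kept for the debrief."""
    commands = {
        "pursue":       {"pitch": -0.1, "throttle": 0.9},
        "evade":        {"pitch": 0.3,  "throttle": 1.0},
        "energy_climb": {"pitch": 0.4,  "throttle": 1.0},
    }
    return {"tactic": tactic, **commands[tactic]}

own, bandit = {"altitude": 6000}, {"altitude": 7500}
print(low_level_controller(select_tactic(own, bandit)))
# {'tactic': 'energy_climb', 'pitch': 0.4, 'throttle': 1.0}
```

An end-to-end network offers no such seam: when it loses, there is no named maneuver to point to in the debrief, which is precisely the explainability gap the teams were closing.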
The next ACE phase was to move beyond simulation into real aircraft. AI that excels in simulation is meaningless if it fails in the real world. EpiSci’s autonomous agent flew on the X-62 Vista, and testing has expanded across aircraft types and into mixed human-machine formations.


 


 
Trust as a Combat Asset

Autonomy began in aviation: the first gyroscopic autopilot flew in 1912, while adaptive cruise control wasn’t commercialized in cars until the 2000s. Aviation led and automotive followed; only recently has automotive autonomy overtaken aviation.
Javorsek attributes this shift to how the industries defined performance and trust. Aviation historically relied on pilot faith, whereas automotive adopted a measurable metric — miles per disengagement — letting the market quantify trust by how long vehicles could operate safely without human intervention.
“This enabled gradual feature rollout — LKA, ACC, AEB. Trust evolved step-by-step. Drivers learned to trust the tech, and the tech learned from the drivers — a feedback loop.”
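The metric behind this feedback loop is plain arithmetic, which is exactly why it works as a public trust signal. A minimal sketch, with invented fleet figures:

```python
# Miles per disengagement: autonomous miles driven divided by the number
# of times a human had to take over. The numbers below are invented.

def miles_per_disengagement(autonomous_miles: float, disengagements: int) -> float:
    return autonomous_miles / disengagements if disengagements else float("inf")

fleet_log = {"autonomous_miles": 1_250_000, "disengagements": 42}
print(f"{miles_per_disengagement(**fleet_log):,.0f} miles per disengagement")
# 29,762 miles per disengagement
```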
Defense, however, cannot afford such gradualism or commercial-scale data accumulation. The ACE program worked to narrow this gap. The goal was not to test a single prototype, but to convert every crewed aircraft into an autonomous asset, accelerating pilot-assist deployment and ultimately enabling fully autonomous fighters.
Success in autonomous combat requires not just technology, but predictability, repeatability, explainability, adaptability, and interoperability. If AI behaves unpredictably in combat — even if it wins — it has failed. Co-evolution of technology and tactics is essential.
“The era of tech and tactics evolving separately is over. Trust is the currency of combat operations. And that is why AI has not yet been fully accepted in the battlespace.”
ACE was built around the idea of AI as an augmentor, not a replacement. AI is not there to replace fighter pilots, but to amplify them. As Javorsek said, AI can turn a young lieutenant into a “Top Gun-level ace in months” through tailored, high-fidelity training.
“The trust we built benefits warfighters, taxpayers, and allies alike. The real success of ACE was not the technology itself — it was the moment humans could trust AI in combat.”

AEM (Automotive Electronics Magazine)



<Copyright © AEM. Unauthorized reproduction and redistribution prohibited>

