
A Pilot and an AI Went Head-to-Head in a Virtual Dogfight. The AI Prevailed
Heron Systems’ artificial intelligence algorithm triumphed over a U.S. Air Force pilot 5-0 in DARPA’s AlphaDogfight Trials
It’s not wise to engage an AI in a dogfight.
It was evident by the third and final leg of the United States Defense Advanced Research Projects Agency’s (DARPA’s) AlphaDogfight Trials that a human was no match for Heron Systems’ AI algorithm in virtual aerial combat.
The trials took place virtually on August 20 as several contractors locked horns in a round-robin tournament. Aerospace giant Lockheed Martin took on Heron Systems in the semi-finals, each contractor in command of a simulated Lockheed Martin F-16 Fighting Falcon.
Heron Systems’ AI consistently defeated its competitors before facing its final adversary, a U.S. Air Force pilot, in the concluding round, and emerged victorious from a lopsided dogfight. The AI won 5-0.
The Maryland-based tech firm notes that the algorithm used deep reinforcement learning, a category of machine learning that draws insight from behavioural psychology and enables machines to learn from experience, much as humans do.
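To make the idea concrete, here is a minimal, self-contained sketch of the deep reinforcement learning loop: an agent acts in an environment, observes a reward, and uses the temporal-difference error to nudge a small neural network toward better action-value estimates. The toy pursuit environment, network size and hyperparameters below are illustrative assumptions only, and bear no relation to Heron Systems’ actual system.

```python
import numpy as np

# Toy 1-D "pursuit" environment: the agent tries to close on a target.
# Purely illustrative -- it has nothing to do with Heron Systems' simulator.
class PursuitEnv:
    def reset(self):
        self.agent, self.target = 0.0, np.random.uniform(-5, 5)
        return self._obs()

    def _obs(self):
        return np.array([self.agent, self.target, self.target - self.agent])

    def step(self, action):  # actions: 0 = move left, 1 = move right
        self.agent += -0.5 if action == 0 else 0.5
        dist = abs(self.target - self.agent)
        return self._obs(), -dist, dist < 0.25  # obs, reward, done

# Tiny Q-network (one hidden layer), trained with semi-gradient Q-learning.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(0, 0.1, (3, 16)), np.zeros(16)
W2, b2 = rng.normal(0, 0.1, (16, 2)), np.zeros(2)

def q_values(s):
    h = np.maximum(0.0, s @ W1 + b1)  # ReLU hidden layer
    return h, h @ W2 + b2             # hidden activations, Q-values

env, gamma, lr, eps = PursuitEnv(), 0.95, 1e-3, 0.1
for episode in range(200):
    s = env.reset()
    for _ in range(50):  # cap episode length
        h, q = q_values(s)
        # Epsilon-greedy exploration: mostly exploit, sometimes act randomly.
        a = int(rng.integers(2)) if rng.random() < eps else int(np.argmax(q))
        s2, r, done = env.step(a)
        _, q2 = q_values(s2)
        target = r + (0.0 if done else gamma * np.max(q2))
        td_error = target - q[a]  # temporal-difference error

        # Backpropagate the squared TD error through the tiny network.
        dq = np.zeros(2)
        dq[a] = -td_error
        dh = (dq @ W2.T) * (h > 0)          # gradient through the ReLU
        W2 -= lr * np.outer(h, dq); b2 -= lr * dq
        W1 -= lr * np.outer(s, dh); b1 -= lr * dh

        s = s2
        if done:
            break
```

A production system would differ in almost every respect (simulator fidelity, network depth, replay buffers, distributed training), but the learn-by-trial-and-error loop above is the essence of the technique.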

(Image: Pixabay)
The privately held company has conquered the DARPA trials, and quite possibly captured investor attention as well, through its success in the Air Combat Evolution (ACE) program.
The prime objective of ACE is ‘to increase trust in combat autonomy by using human-machine collaborative dogfighting as its challenge problem’, thereby paving a path toward complex collaboration between human and machine in aerial combat.
ACE hypothesises that, through the deployment of unmanned aerial vehicles (UAVs) developed by AlphaDogfight contractors, USAF pilots would have a fleet of UAVs at their disposal, able to perform manoeuvres and defend themselves while carrying out military operations.
ACE focuses on developing a hierarchical framework for autonomy wherein higher-level cognitive functions (such as developing an overall engagement strategy, selecting and prioritizing targets, and determining the best weapon or effect) are performed by humans, whereas lower-level functions such as aircraft manoeuvring and engagement can be delegated to AI counterparts.
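To illustrate that split, the hypothetical Python sketch below has a human operator issue the high-level engagement order while a learned policy flies the aircraft. None of these class or field names come from DARPA or the ACE program; they are assumptions made for the sake of the example.

```python
from dataclasses import dataclass
from typing import Iterable, Protocol

@dataclass
class EngagementOrder:
    """Higher-level decisions, made by the human operator."""
    target_id: str            # which target to prioritise
    weapon: str               # "best weapon or effect"
    rules_of_engagement: str  # overall engagement strategy

class ManoeuvrePolicy(Protocol):
    """Lower-level function delegated to the AI."""
    def control_inputs(self, sensor_state: dict) -> dict: ...

class LearnedDogfightPolicy:
    """Stand-in for a trained low-level agent (e.g., a deep RL policy)."""
    def control_inputs(self, sensor_state: dict) -> dict:
        # A real policy would map sensor state to stick-and-throttle
        # commands; this placeholder just holds the aircraft steady.
        return {"pitch": 0.0, "roll": 0.0, "throttle": 0.7}

def fly_engagement(order: EngagementOrder,
                   policy: ManoeuvrePolicy,
                   sensor_feed: Iterable[dict]):
    """The human sets the order once; the AI manoeuvres tick by tick."""
    for sensor_state in sensor_feed:
        cmd = policy.control_inputs(sensor_state)
        cmd["target_id"] = order.target_id  # tag commands with the human's choice
        yield cmd

# Usage: the human picks the target and weapon; the AI does the flying.
order = EngagementOrder("bandit-1", "simulated-gun", "within-visual-range")
for cmd in fly_engagement(order, LearnedDogfightPolicy(), [{"range_m": 900}]):
    print(cmd)
```

The point of the hierarchy is the interface: as long as the human-issued order and the AI’s control loop stay cleanly separated, either side can be upgraded independently.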
Col. Daniel Javorsek suggested that the true disruptive potential of dogfighting AI will be realised when adversaries in the airspace are deceived into perceiving unmanned systems as the greater threat.

(Image: Pixabay)
“The more that we can enable our unmanned systems to behave, look and act like intelligent, creative entities, the more that causes problems for our adversaries,” noted Javorsek.
While Heron Systems’ AI algorithm outperformed USAF pilot ‘Banger’ in a simulated environment, there is simply not enough evidence to suggest that it would have an edge over adversaries on the battlefield.
Naval War College Associate Professor Jacquelyn Schneider argued that the simulated environment deviated significantly from a real-world dogfight.
“On the experiment: y’all it was a pilot w/a VR headset & a fake stick. AI beat a human pilot at a video game. It isn’t surprising that AI performs well in a simulated environment & that human advantages (the warm fuzzy) are less important,” tweeted Schneider.
AI Arms Race: If Weapons Are Here, Could War Be Far Behind?

A U.S. Air Force RQ-4 Global Hawk unmanned aircraft soars over uninhabited land. (Image: Wikimedia Commons)
Advances in autonomous combat and reconnaissance aircraft, including AI-enabled weapons, promise dramatically greater reach, speed, precision and lethality in military operations in the near future.
Major military powers, including the United States, the Russian Federation and the People’s Republic of China, are racing to acquire these capabilities as quickly as possible, and the scant consensus, if not the outright lack of it, on their ethical and legal implications has rekindled the conversation about the dawn of an era of weaponized AI.
The global superpowers are consumed by the pursuit of the military high ground, and none seems nearly as concerned with finding achievable common ground.
A diplomat is an artist whose canvas is a façade, and the brush is the rhetoric of equivocation.
Outside the walls of the Palace of Nations in Geneva, Beijing joined an ensemble of 28 states in standing against autonomous weapons, pledging allegiance to the Campaign to Stop Killer Robots; inside, however, it never raised its voice against their development or mass production. An eerie paradox, yet a masterful gambit.
The Pentagon adopted a set of ethical principles for artificial intelligence in February, committing to ‘responsible and lawful behaviour’, yet the White House, along with Whitehall, remains opposed to a ban on killer robots.
In July 2015, Professor Stephen W. Hawking, Elon Musk and Steve Wozniak, alongside DeepMind founders Demis Hassabis, Shane Legg and Mustafa Suleyman, became signatories to an open letter urging the governments of the world to abandon the pursuit of lethal AI weapons.
“In summary, we believe that AI has great potential to benefit humanity in many ways, and that the goal of the field should be to do so. Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control,” states an excerpt from the letter.
World leaders are weaving an uncertain future with AI. In the next decade, it may be scrubbing your floor or chasing you down with an automatic rifle.
For those chasing the dream of commanding AI on the battlefield, Edward A. Murphy might offer a grim reminder:
“Anything that can go wrong will go wrong.”