Posted by Roger Mallett on 1 September 2020

Military AI vanquishes human fighter pilot in F-16 simulation. How scared should we be?

Artificial intelligence can master difficult combat skills at warp speed, but the Pentagon’s futurists must remain mindful of its limitations and risks.

From the outside, the simulated aerial dogfight the Pentagon held two weeks ago looked like a standard demonstration of close-up air-to-air combat: two F-16 fighter jets barreled through the sky, twisting and diving as each sought an advantage over the other. Time and again the jets would “merge,” leaving one or both pilots just a split second to pull off an accurate shot. After one of the jets was riddled with cannon shells five times in these confrontations, the simulation ended.

From the inside, things seemed very, very different.

“The standard things we’re trained to do as a fighter pilot aren’t working,” lamented the losing pilot, an Air Force fighter pilot instructor with the call sign Banger.

That’s because this wasn’t a typical simulation at all. Instead, the U.S. military’s emerging-technologies research arm, the Defense Advanced Research Projects Agency, had staged a matchup between man and machine — and the machine won 5-0.

Indeed, the victor was an artificial intelligence-directed “pilot” developed by Heron Systems. It quickly put the lie to a statement DARPA made just one year ago, “No AI currently exists … that can outduel a human strapped into a fighter jet in a high-speed, high-G dogfight.”

The AlphaDogfight simulation on Aug. 20 was an important milestone for AI and its potential military uses. While the achievement shows that AI can master increasingly difficult combat skills at warp speed, the Pentagon’s futurists must remain mindful of its limitations and risks. AI remains a long way from eclipsing the human mind in many critical decision-making roles, despite what the likes of Elon Musk have warned, and racing ahead of ourselves could inadvertently leave the military exposed to new threats.

That’s not to minimize this latest development. Within the scope of the simulation, the AI pilot exceeded human limitations in the tournament: it consistently executed accurate shots in very short timeframes, pushed the airframe’s g-force tolerance to its limit without exceeding it, and remained unaffected by the crushing forces of violent maneuvers that would degrade a human pilot.

All the more remarkable, Heron’s AI pilot was self-taught using deep reinforcement learning, a method in which an AI runs a combat simulation over and over again and is “rewarded” for successful behaviors and “punished” for failures. Initially, the AI agent simply learns not to fly its aircraft into the ground. But after some 4 billion iterations, Heron’s agent appears to have mastered the art of executing energy-efficient air combat maneuvers.
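The reward-and-punishment loop described above can be illustrated with a toy example. The sketch below uses tabular Q-learning on a one-dimensional “altitude” task, where the agent is penalized for crashing into the ground and rewarded for reaching a target state, echoing how the agent first learns not to fly into the ground. All names, states, and reward values here are illustrative assumptions; Heron Systems’ actual system uses deep neural networks and a vastly richer flight simulation.

```python
import random

# Toy stand-in for deep reinforcement learning: tabular Q-learning on a
# 1-D "altitude" line. State 0 is a crash, the top state is the goal.
# This is a sketch of the reward/punish loop only, not Heron's system.

N_STATES = 6          # states 0..5; 0 = crash, 5 = goal
ACTIONS = (-1, +1)    # descend / climb
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1  # learning rate, discount, exploration

def run_episode(q, rng):
    """Fly one episode from mid-air until crashing or reaching the goal."""
    s = 2
    while 0 < s < N_STATES - 1:
        # Epsilon-greedy action selection: mostly exploit, sometimes explore.
        if rng.random() < EPS:
            a = rng.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        s2 = s + a
        # "Punish" a crash, "reward" the goal, small cost per step otherwise.
        r = -10 if s2 == 0 else (10 if s2 == N_STATES - 1 else -1)
        terminal = s2 in (0, N_STATES - 1)
        best_next = 0.0 if terminal else max(q[(s2, b)] for b in ACTIONS)
        # Standard Q-learning update toward the observed reward.
        q[(s, a)] += ALPHA * (r + GAMMA * best_next - q[(s, a)])
        s = s2
    return s

rng = random.Random(0)
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
for _ in range(2000):          # many iterations, as in the article
    run_episode(q, rng)

# After training, the greedy policy from the start state should climb.
learned_action = max(ACTIONS, key=lambda act: q[(2, act)])
print(learned_action)
```

After enough episodes, the learned values steer the agent away from the crash state without any hand-coded flight rules, which is the essence of the “rewarded for success, punished for failure” training described above.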

The AlphaDogfight confirmed something we already knew in our gut: given sufficiently good algorithms, AI can outperform most humans in making rapid and precise calculations.

Human pilots could perhaps devise tactics designed to exploit the Heron AI’s limitations, just as Banger did with temporary success in the final round of the competition. But, like the Borg in “Star Trek,” the AI-powered pilot may, in turn, eventually learn from its failures and adapt. (The machine-learning algorithm was disabled during the tournament.)
