WASHINGTON (AP) — Two Air Force fighter jets recently clashed in a dogfight in California. One of them was flown by a pilot. The other wasn’t.
This second plane was piloted by artificial intelligence, with the highest-ranking civilian in the Air Force sitting in the front seat. It was the ultimate demonstration of how far the Air Force has come in developing technology whose roots date back to the 1950s. But it’s only a glimpse of the technology to come.
The United States is racing to stay ahead of China on AI and its use in weapons systems. The focus on AI has sparked public concern that future wars will be fought by machines that select and strike targets without direct human intervention. Officials say that will never happen, at least not on the American side. But questions remain about what a potential adversary would allow, and the military sees no alternative but to field U.S. capabilities rapidly.
“Whether you want to call it a race or not, it certainly is,” said Adm. Christopher Grady, vice chairman of the Joint Chiefs of Staff. “We both recognized that this would be a very critical element of the future battlefield. China is working on it as hard as we are.”
A look at the history of military AI development, what technologies are on the horizon and how they will be kept in check:
FROM MACHINE LEARNING TO AUTONOMY
The roots of AI in the military are actually a hybrid of machine learning and autonomy. Machine learning occurs when a computer analyzes data and sets of rules to draw conclusions. Autonomy occurs when these findings are applied to take action without further human intervention.
This took early form in the 1960s and 1970s with the development of the Navy’s Aegis missile defense system. Aegis was trained through a series of human-programmed if/then rule sets to detect and intercept incoming missiles autonomously, and faster than a human could. But the Aegis system was not designed to learn from its decisions, and its reactions were limited to the rules it had been given.
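A fixed, human-programmed if/then rule set of the kind described above can be sketched as follows. This is a hypothetical illustration of the concept, not the Aegis system's actual logic; the function name, inputs and thresholds are invented for the example.

```python
# Hypothetical sketch of a fixed, human-programmed if/then rule set,
# in the spirit of the article's Aegis example -- NOT actual Aegis logic.

def classify_contact(speed_mps: float, altitude_m: float, closing: bool) -> str:
    """Apply fixed rules; the system cannot learn new ones from experience."""
    if closing and speed_mps > 1000:        # fast inbound object
        return "intercept"
    if closing and altitude_m < 100:        # low-altitude inbound object
        return "intercept"
    return "monitor"                        # everything else: keep watching

print(classify_contact(1200, 5000, True))   # -> intercept
print(classify_contact(300, 8000, False))   # -> monitor
```

The key limitation the article notes is visible here: any situation not anticipated by the programmer falls through to the default, and the rules never change on their own.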
“If a system uses ‘if/then,’ it’s probably not machine learning, which is a field of AI that involves creating systems that learn from data,” said Air Force Lt. Col. Christopher Berardi, who is assigned to the Massachusetts Institute of Technology to assist with the Air Force’s AI development.
AI took a big step forward in 2012 when the combination of big data and advanced computing power allowed computers to start analyzing information and writing the sets of rules themselves. This is what AI experts have called the “big bang” of AI.
The new data created by a computer that writes its own rules is artificial intelligence. Systems can be programmed to act autonomously on conclusions drawn from those machine-written rules, which is a form of AI-enabled autonomy.
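The shift described above — from a human coding the rule to the machine deriving it from data — can be shown with a toy example. Everything here is illustrative: a real system would use far richer models, but the contrast with the hand-coded if/then sketch is the point.

```python
# Minimal sketch of "the machine writing its own rule": instead of a human
# choosing a threshold, the program finds one from labeled data. Toy example.

def learn_threshold(values, labels):
    """Pick the cutoff that best separates label 0 from label 1."""
    best_cut, best_acc = None, -1.0
    for cut in values:
        acc = sum((v >= cut) == bool(l) for v, l in zip(values, labels)) / len(values)
        if acc > best_acc:
            best_cut, best_acc = cut, acc
    return best_cut

# Toy data: object speeds (m/s) and whether a human labeled each one a threat.
speeds = [200, 300, 400, 900, 1100, 1500]
threat = [0,   0,   0,   1,   1,    1]

cut = learn_threshold(speeds, threat)
print(f"learned rule: threat if speed >= {cut}")  # -> threat if speed >= 900
```

Given new labeled data, rerunning the learner yields a new rule with no human rewriting the code — the property that separates this from the fixed if/then approach.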
TESTING AN AI ALTERNATIVE TO GPS NAVIGATION
Air Force Secretary Frank Kendall got a taste of this advanced warfare this month when he flew in Vista, the first F-16 fighter jet controlled by AI, during an air combat exercise over Edwards Air Force Base in California.
Although this plane is the most visible sign of ongoing AI work, there are hundreds of AI projects underway at the Pentagon.
At MIT, military personnel worked to scrub thousands of hours of recorded pilot conversations to create a data set from the flood of messages exchanged between crews and flight operations centers during flights, so the AI could learn the difference between critical messages, such as a runway closure, and routine cockpit chatter. The goal was for the AI to learn which messages are essential to pass along, so that controllers see them more quickly.
In another key project, the military is using AI to develop an alternative to navigation that depends on GPS satellites.
In a future war, high-value GPS satellites would likely be hit or disrupted. The loss of GPS could blind U.S. communications, navigation and banking systems and make the U.S. military’s fleet of aircraft and warships less able to coordinate a response.
So last year, the Air Force launched an AI program — loaded onto a laptop attached to the floor of a C-17 military cargo plane — to work on an alternative solution using Earth’s magnetic fields.
It has long been known that aircraft could navigate by following the Earth’s magnetic fields, but until now that has not been practical, because each plane generates so much electromagnetic noise of its own that there was no effective way to filter out everything but the Earth’s emissions.
“Magnetometers are very sensitive,” said Col. Garry Floyd, director of the Department of the Air Force-MIT Artificial Intelligence Accelerator Program. “If you turned on the strobe lights on a C-17, we would see it.”
The AI learned through flights and reams of data which signals to ignore and which to follow and the results “were very, very impressive,” Floyd said. “We’re talking about tactical airdrop quality.”
“We think we may have added an arrow to the quiver of things we can do, if we find ourselves in an environment where GPS is denied. Which we will do,” Floyd said.
So far, the AI has been tested only on the C-17. Other aircraft will be tested as well, and if the technique works it could give the military another way to operate if GPS goes down.
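The filtering problem described above can be illustrated schematically: the magnetometer measures Earth's field plus the aircraft's own noise, and a model learned from flight data estimates that noise so it can be subtracted. The numbers and the "learned model" below are stand-ins invented for the sketch — real magnetic navigation is far more involved.

```python
# Toy illustration of the signal-filtering problem: measured field =
# Earth's field + aircraft-generated noise. A model learned from flight
# data (here faked as a list of predictions) estimates the noise so the
# Earth signal can be recovered. Purely schematic.

earth_field = [50.0, 50.2, 50.1, 49.9]        # microtesla: what we want
aircraft_noise = [3.1, -2.0, 4.5, 0.7]        # strobe lights, avionics, etc.
measured = [e + n for e, n in zip(earth_field, aircraft_noise)]

# Stand-in for the learned model's output: a good estimate of the noise.
predicted_noise = [3.0, -1.9, 4.4, 0.8]

recovered = [m - p for m, p in zip(measured, predicted_noise)]
print(recovered)   # close to earth_field
```

The AI's job, per the article, is exactly the hard part this sketch assumes away: learning which of the aircraft's many emissions to subtract and which signals to trust.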
SAFETY GUARDRAILS AND PILOTS’ SALTY LANGUAGE
Vista, the AI-controlled F-16, has considerable guardrails while the Air Force trains it. Mechanical limits keep the still-learning AI from executing maneuvers that would put the plane in danger. There is also a safety pilot who can take control from the AI with the press of a button.
The algorithm can’t learn during a flight, so on each flight it has only the rule sets it created from previous flights. When a flight is completed, the algorithm is transferred back to a simulator, where it is fed the new data collected in flight so it can learn, create new rule sets and improve its performance.
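The loop just described — policy frozen in flight, updated only back in the simulator — can be sketched schematically. The class names, states and actions below are illustrative assumptions, not the Air Force's actual code, and the "training" step is a trivial stand-in.

```python
# Schematic of the offline learning loop the article describes: the policy
# is frozen during flight; new flight data updates it only in the simulator.
# Names and structure are invented for illustration.

class FrozenPolicy:
    def __init__(self, rules):
        self.rules = rules                    # fixed rule set for this flight

    def act(self, state):
        return self.rules.get(state, "hold")  # no in-flight learning

def fly(policy, states):
    """Execute a flight with a frozen policy; just record what happened."""
    return [(s, policy.act(s)) for s in states]

def simulator_update(rules, flight_log):
    """Back on the ground: fold the recorded flight data into new rules."""
    new_rules = dict(rules)
    for state, action in flight_log:
        new_rules.setdefault(state, action)   # stand-in for real training
    return new_rules

rules = {"bandit_left": "break_left"}
log = fly(FrozenPolicy(rules), ["bandit_left", "bandit_right"])
rules = simulator_update(rules, log)          # next flight uses updated rules
```

The separation matters for safety: because all learning happens in `simulator_update`, engineers can inspect and filter the data before it ever changes the policy that flies.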
But the AI learns quickly. Because of the supercomputing speed it uses to analyze data and then test those new rule sets in the simulator, its knack for finding the most efficient way to fly and maneuver has already allowed it to beat some human pilots in air combat exercises.
But safety remains a major concern, and officials said the most important way to address safety is to control the data that is fed back into the simulator so the AI can learn from it. In the case of the jet, it’s about ensuring the data reflects safe flight. Ultimately, the Air Force hopes that a version of the AI under development could serve as the brains of a fleet of 1,000 unmanned warplanes being developed by General Atomics and Anduril.
As part of the experiment to train the AI on how pilots communicate, military personnel assigned to MIT scrubbed the recordings to remove classified information and pilots’ sometimes salty language.
Learning how pilots communicate is “a reflection of command and control, of how pilots think. Machines need to understand that, too, if they want to become really, really good,” said Grady, vice chairman of the Joint Chiefs. “They don’t need to learn to swear.”