Silicon Valley is closer to the world’s militaries than ever. And it’s not just big companies, either – start-ups are getting a look-in as well.
The war in Ukraine has added urgency to the drive to push more AI tools onto the battlefield. Firms such as Palantir stand to gain the most, hoping to cash in as militaries race to update their arsenals with the latest technologies. But long-standing ethical concerns over the use of AI in warfare have become more urgent as the technology grows increasingly advanced, while the prospect of restrictions and regulations governing its use looks as remote as ever.
On 30 June 2022, NATO announced it was creating a $1 billion innovation fund that will invest in early-stage start-ups and venture capital funds developing “priority” technologies such as artificial intelligence, big-data processing, and automation.
The Chinese military likely spends at least $1.6 billion a year on AI, according to a report by Georgetown’s Center for Security and Emerging Technology, and in the US there is already a significant push underway to reach parity, says Lauren Kahn, a research fellow at the Council on Foreign Relations. The US Department of Defense requested $874 million for artificial intelligence for 2022, although, according to a March 2022 report, that figure does not reflect the full extent of the department’s AI investments.
It’s not just the US military that’s convinced of the need. European countries, which tend to be more cautious about adopting new technologies, are also spending more money on AI, says Heiko Borchert, co-director of the Defence AI Observatory at the Helmut Schmidt University in Hamburg, Germany.
The French and the British have identified AI as a key defence technology, and the European Commission, the EU’s executive arm, has earmarked $1 billion to develop new defence technologies.
Since the war started, the UK has launched a new AI strategy specifically for defence, and the Germans have earmarked just under half a billion dollars for research and artificial intelligence as part of a $100 billion cash injection for their military.
In a vaguely worded press release in 2021, the British Army proudly announced it had used AI in a military operation for the first time, to provide information on the surrounding environment and terrain. The US is working with start-ups to develop autonomous military vehicles. The swarms of hundreds or even thousands of autonomous drones that the US and British militaries are developing could one day prove to be powerful, lethal weapons.
Many experts are worried. Meredith Whittaker, a senior advisor on AI at the Federal Trade Commission and a faculty director at the AI Now Institute, says this push is really more about enriching tech companies than improving military operations.
In a piece for Prospect magazine co-written with Lucy Suchman, a sociology professor at Lancaster University, she argued that AI boosters are stoking Cold War rhetoric and trying to create a narrative that positions Big Tech as “critical national infrastructure,” too big and important to break up or regulate. They warn that AI adoption by the military is being presented as an inevitability rather than what it really is: an active choice that involves ethical complexities and trade-offs.
Despite the steady march of AI onto the battlefield, the ethical concerns that prompted the protests around Project Maven haven’t gone away. That Pentagon project was an attempt to build image-recognition systems to improve drone strikes; Google pulled out of it in 2018 after employee protests and public outrage.