When a suicide bomber attacked Kabul International Airport in August last year, the death and destruction were overwhelming: The violence left 183 people dead, including 13 U.S. soldiers.
This kind of mass-casualty event can be particularly daunting for field workers. Hundreds of people need care, the nearby hospitals have limited room, and decisions about who gets care first and who can wait must be made quickly. Often, the answer isn’t clear, and people disagree.
The Defense Advanced Research Projects Agency (DARPA) – the innovation arm of the U.S. military – is aiming to answer these thorny questions by outsourcing the decision-making process to artificial intelligence. Through a new program, called In the Moment, it wants to develop technology that would make quick decisions in stressful situations using algorithms and data, arguing that removing human biases may save lives, according to details from the program’s launch this month.
Though the program is in its infancy, it comes as other countries try to update a centuries-old system of medical triage, and as the U.S. military increasingly leans on technology to limit human error in war. But the solution raises red flags among some experts and ethicists who wonder if AI should be involved when lives are at stake.
“AI is great at counting things,” Sally A. Applin, a research fellow and consultant who studies the intersection between people, algorithms and ethics, said in reference to the DARPA program. “But I think it could set a precedent by which the decision for someone’s life is put in the hands of a machine.”
Founded in 1958 by President Dwight D. Eisenhower, DARPA is among the most influential organizations in technology research, spawning projects that have played a role in numerous innovations, including the Internet, GPS, weather satellites and, more recently, Moderna’s coronavirus vaccine.