The writer is founder of Sifted, an FT-backed media company covering European start-ups
Stick artificial intelligence in the same sentence as war and the mind’s eye conjures up swarms of killer robot drones buzzing over battlefields zapping enemy soldiers, as in some futuristic horror movie. But the reality of the war in Ukraine has highlighted how machine learning systems are already transforming the conduct of battle in far less graphic, yet highly important, ways: facilitating data processing and other “back office” functions, and tilting the military advantage towards Kyiv.
These systems have shown how a clever and courageous military armed with smart software can outfight a far bigger enemy that is overly dependent on dumb hardware. But they also raise fresh questions about how far critical national security capabilities should be outsourced to private companies that are largely unaccountable for the technology’s use.
US companies such as Microsoft, Starlink and Palantir have been critical in boosting Ukraine’s war effort. Microsoft has provided more than $100mn of technological assistance to Kyiv to harden the country’s digital infrastructure and secure its data in the cloud. Starlink, the satellite internet service operated by Elon Musk’s SpaceX, has provided secure communications for Ukraine’s frontline forces. And the data analytics company Palantir has helped optimise Ukraine’s digital “kill chain” by providing actionable intelligence drawn from satellite imagery, battlefield sensors and other open sources.
Alex Karp, Palantir’s chief executive, is clear about the lessons that should be learnt. If you go into battle using old-school technology, you will still be at a “massive disadvantage” against an adversary that uses digital targeting and AI, he told me at the REAIM conference in The Hague this week. “The country that wins the AI battles of the future will set the international order.”
The attendees at the responsible AI conference, hosted by the Dutch and South Korean governments, generally welcomed the role that the technology was playing in helping Ukraine defend itself against Russia’s brutal invasion. They also saw AI as a critical force multiplier for maintaining democratic societies’ defences against other autocratic rivals, such as China. But a broader unease about the widespread adoption of lethal AI systems was palpable. There was a sense that the conduct of war is rapidly entering radically new territory, and that we have no road map for it.
One of the complicating factors is the unprecedented role that private sector companies are playing. Palantir’s Karp argued that his company fully accepted that elected politicians and military officers should always remain in control of the technology’s use. But he acknowledged that was not a universal view in Silicon Valley. That point has been highlighted by Musk’s decision to scale back Ukraine’s access to Starlink’s network for fear of escalating the conflict, generating a fierce row with Kyiv. “Musk now has his own defence policy,” as one conference attendee said.
One of the broader principles about AI that is easy to enunciate, and near-impossible to enact, is that humans should always remain in control of machines: algorithms should never be allowed to make life-and-death decisions by themselves. Just how humans can remain “in the loop” when AI systems are deployed at lightning speed and in diffuse ways in wartime is far from clear. Nevertheless, governments should still push for accepted norms to frame the use of military AI, even if they stop short of agreeing binding international treaties.
As Agnès Callamard, the secretary-general of Amnesty International, argued, the ability of AI systems to help military officers make more reliable and precise decisions does not necessarily make those decisions right. For example, precision-guided missiles can be used to reduce civilian casualties — or precisely the reverse: to target hospitals or medical personnel, as the Russians have done in Syria and Ukraine. “Reliability does not mean compliance with international law. Precision does not mean compliance,” she told the conference. “Wars are dirty.”
Lethal autonomous weapons systems can empower bad actors, as well as the good ones who attend responsible AI conferences. Self-righteous democracies can suffer from what Stuart Russell, a computer science professor at the University of California, Berkeley, calls the “sole ownership fallacy”. As with attempts to limit the use of biological, chemical and nuclear weapons, governments should conclude that it is in their collective self-interest to restrict the terrible things they can do to others, so that those terrible things are not inflicted on them in return.