AI regulators fear getting drowned out by the hype of war

BERLIN — A fighter jet hurtles toward an adversary head-on. Mere moments before a collision, it swerves — but not before dealing a lethal blow to its opponent.

This risky maneuver would be reckless even for the most skilled pilot. But in simulations, artificial intelligence has turned it into one of the most effective dogfighting techniques, scoring kill rates of nearly 100% against human pilots.

In a warfighting revolution turbocharged by the conflict in Ukraine, autonomous decision-making is quickly reshaping modern combat, experts told Defense News in a series of interviews.

Weapons that can decide for themselves whom or what to target — and even when to kill — are entering military arsenals. They have experts worried that an uncontrolled arms race is emerging, and that warfare could become so fast-paced that humans cannot keep up.

It is the speed, in particular, that may prove a “slippery slope,” said Natasha Bajema, a senior research associate at the James Martin Center for Nonproliferation Studies, a nongovernmental organization. As the speed of conflict increases with greater autonomy on the battlefield, the incentives to delegate even more functions to the machines could become ever stronger.

“Do we really think that in the middle of a battle between China and the U.S., someone is going to say: ‘Hold on, we can’t let the machine do that’?” Bajema asked, referring to the allure of what she described as war moving at machine speed.

“It’s the most competitive race for advantage that we’ve seen since the race for nuclear weapons,” she added.

The appetite for more autonomy in weapons, fanned by combat in Ukraine and Gaza, has drowned out long-standing calls for limits on AI in military applications. But those calls persist.

Ambassador Alexander Kmentt, the director of the Disarmament, Arms Control and Non-Proliferation Department of the Austrian Foreign Ministry, called the prospect of AI-enabled robots pulling the trigger themselves a true “Oppenheimer moment,” a reference to the birth of the atomic bomb in the 1940s.

Austria has been leading an international push to bring governments from around the world to the table to draft the rules of war for a new era.

In late April, the country’s government hosted the first global conference on autonomous weapon systems in Vienna’s grand Hofburg Palace. Kmentt said it exceeded his expectations.

“At times during the preparations, I was concerned about attendance, that the room would be half empty,” the ambassador recalled in an interview with Defense News. Instead, there were more than 1,000 delegates from 144 countries present in Vienna.

“Even those states that used to see the topic as some sort of sci-fi now perceive it as being incredibly timely,” he said.

Much of the Global South — a term sometimes used to group countries that reject the hierarchy of world politics — now seems interested in restricting the technology, according to Kmentt, though little could be achieved without buy-in from the major global powers.

Unintended consequences

For all their military appeal, AI-enabled weapons come with the flaws of a technology still in its infancy. Machine vision, in particular, is still too prone to errors, said Zachary Kallenborn, lead researcher at Looking Glass USA, a consultancy that deals with questions surrounding advanced weapons systems.

“A single pixel is enough to confuse a bomber with a dog, a civilian with a combatant,” he said.
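The brittleness Kallenborn describes is well documented in machine learning research, where it is known as a “one-pixel attack” (e.g., Su et al., 2019). As a rough illustration only, and not a depiction of any fielded military system, the sketch below uses a hypothetical toy linear classifier sitting near its decision boundary and brute-forces the single pixel change that flips its output; every model, weight and parameter here is invented for the example.

```python
# Toy sketch of single-pixel brittleness (illustrative only; all values are
# hypothetical). Real one-pixel attacks, e.g. Su et al. (2019), search deep
# networks with evolutionary methods; here a hand-made linear classifier
# placed near its decision boundary makes the effect visible by brute force.
import numpy as np

rng = np.random.default_rng(0)

N_PIXELS = 64  # an 8x8 grayscale "image", flattened

# Hypothetical model: a fixed random linear classifier, class = sign(w.x + b).
w = rng.normal(size=N_PIXELS)
image = rng.uniform(0.0, 1.0, size=N_PIXELS)

# Place the example near the decision boundary: the borderline detections
# where a single pixel is most likely to matter.
b = 0.5 - image @ w

def predict(x):
    """Return (class, score) from the sign of the linear score."""
    score = x @ w + b
    return int(score > 0), score

label, score = predict(image)

# Brute-force one-pixel attack: for each pixel, push it to whichever extreme
# legal value (0 or 1) moves the score furthest toward the opposite class.
target = -1 if label == 1 else 1
best_pixel, best_score = None, score
for i in range(N_PIXELS):
    extreme = 1.0 if w[i] * target > 0 else 0.0
    new_score = score + w[i] * (extreme - image[i])
    if target * new_score > target * best_score:
        best_pixel, best_score = i, new_score

adversarial = image.copy()
adversarial[best_pixel] = 1.0 if w[best_pixel] * target > 0 else 0.0

print(f"original:        class {label}, score {score:+.3f}")
print(f"one pixel later: class {predict(adversarial)[0]}, "
      f"score {predict(adversarial)[1]:+.3f} (changed pixel {best_pixel})")
```

Attacks on real deep networks use far more capable optimization, but the underlying point is the same: a classifier's decision can hinge on input details a human observer would never notice.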

In the coming years, experts expect a growing number of autonomous weapons with increasingly sophisticated abilities on the battlefield. Even without technological mishaps, this could heighten the risk of misunderstanding between adversaries.

The disposable nature of drones, for example, could lead to more aggressive or risky behaviors, said Bajema. Intercepting an autonomous system would likely elicit a different reaction among adversaries than downing a crewed plane, she said, but where precisely the line falls is hard to determine.

The race toward AI is governed by what she called the “terminator problem”: if one state has the technology, all others believe they need it to feel secure. That dynamic, she said, is what makes regulating it so difficult.

Moreover, today’s geopolitical climate is not very amenable to multilateral arms control, she added.

Given those odds, Kmentt said he is merely looking for a compromise.

“It’s clear that there will be no universal consensus on the topic,” he noted. “There’s hardly any issue where this exists, and certain countries seem to have no interest in developing international law. So we have to accept that, and instead work together with those countries that are interested in developing these rules.”

But he admitted to being somewhat pessimistic about the chances of success.

“These weapons will largely define the future of armed conflict, and as a result the voices of the militaries worldwide that want them will become louder and louder,” Kmentt predicted.

For now, the target date of 2026 looms large for the community of AI nonproliferation advocates; it is the deadline the United Nations has set for establishing “clear prohibitions and restrictions on autonomous weapon systems,” in the words of U.N. Secretary-General António Guterres.

“So far, there is insufficient political will to make something happen due to the difficult geopolitical situation,” Kmentt said.

The 2026 target is not an arbitrary date, he added. “If we haven’t succeeded with anything by then, the window for preventive action has closed.”

Linus Höller is a Europe correspondent for Defense News. He covers international security and military developments across the continent. Linus holds a degree in journalism, political science and international studies, and is currently pursuing a master’s in nonproliferation and terrorism studies.
