US military’s cybersecurity capabilities to get OpenAI boost

OpenAI is developing AI-powered cybersecurity capabilities for the US military, and shifting its election security work into high gear, the lab’s execs told the World Economic Forum (WEF) in Davos this week.

The public about-face on working with the armed forces comes days after a change in OpenAI’s policy language, which previously prohibited the use of its generative AI models for “military and warfare” applications as well as for “the generation of malware.” Those restraints have now disappeared from the ChatGPT maker’s fine print. That said, the super lab stressed that its technology still isn’t supposed to be used for violence, destruction, or communications espionage.

“Our policy does not allow our tools to be used to harm people, develop weapons, for communications surveillance, or to injure others or destroy property,” an OpenAI spokesperson told The Register today.

“There are, however, national security use cases that align with our mission.

“We are already working with DARPA to spur the creation of new cybersecurity tools to secure open source software that critical infrastructure and industry depend on. It was not clear whether these beneficial use cases would have been allowed under ‘military’ in our previous policies. So the goal with our policy update is to provide clarity and the ability to have these discussions.” 

On Tuesday, during an interview at the WEF shindig for world leaders, OpenAI VP of Global Affairs Anna Makanju said the lab’s partnership with the Pentagon includes developing open source cybersecurity software. OpenAI is also starting talks with the US government about how its technology can help prevent veteran suicides, she said.

“Because we previously had what was essentially a blanket prohibition on military, many people thought that would prohibit many of these use cases, which people think are very much aligned with what we want to see in the world,” Makanju said. 

However, despite removing “military and warfare” and other “disallowed usages” from ChatGPT’s usage policies, Makanju said OpenAI maintains its ban on using its models to develop weapons that hurt people.

During the same interview, OpenAI CEO Sam Altman said the biz is taking steps to ensure its generative AI tools aren’t used to spread election-related disinformation.

That effort follows a similar push by Microsoft, OpenAI’s largest investor, which in November announced a five-step election protection strategy for “the United States and other countries where critical elections will take place in 2024.”

“There’s a lot at stake in this election,” Altman said on Tuesday.

This comes a day after former US president Donald Trump’s big win in the Iowa caucuses on Monday.

And all of these topics — AI, cybersecurity, and disinformation — play prominent roles on the agenda as world leaders meet this week in Davos.

According to the WEF’s Global Risks Report 2024, published last week, “misinformation and disinformation” is the top short-term global risk, with “cyber insecurity” coming in at number four.

The rise of generative AI exacerbates these challenges: 56 percent of executives surveyed at the WEF’s Annual Meeting on Cybersecurity in November 2023 said generative AI will give attackers an advantage over defenders within the next two years.

“Particular concern surrounds the use of AI technologies to boost cyber warfare capabilities, with good reason,” Bernard Montel, EMEA technical director at Tenable, told The Register.

“While AI has made astronomical technological advancements in the last 12 to 24 months, allowing an autonomous device to make the final judgment is incomprehensible today,” he added.

“While AI is capable of quickly identifying and automating some actions that need to be taken, it’s imperative that humans are the ones making critical decisions on where and when to act from the intelligence AI provides.” ®
