OpenAI’s products are not as secure as you might expect

OpenAI seems to make headlines every day, and this time it’s for a double dose of security concerns. Known for its cutting-edge advances in artificial intelligence, the company has been thrust into the spotlight by not one but two significant security lapses.

These incidents have raised questions about the company’s data handling and cybersecurity protocols, shaking the confidence of both users and industry experts.

ChatGPT for Mac is full of security flaws

The ChatGPT app for Mac has been a popular tool for users who want to work with OpenAI’s flagship language model, GPT-4o, directly from the desktop. This week, however, a significant security flaw came to light.

Pedro José Pereira Vieito, a Swift developer, discovered that the app was storing user conversations in plain text locally on the device. This means that any sensitive information shared during these conversations was not protected, making it accessible to other applications or potential malware.

Vieito’s findings were quickly picked up by tech news outlet The Verge, amplifying concern among users and prompting OpenAI to act. The company released an update that encrypts the locally stored chats, addressing the immediate risk.
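
OpenAI has not published details of the fix, but the standard technique on macOS is to encrypt each transcript before it touches disk, with the key held in the Keychain rather than alongside the data. The Swift sketch below illustrates the idea using Apple’s CryptoKit; the function names and storage layout are assumptions for illustration, not OpenAI’s actual implementation.

```swift
import CryptoKit
import Foundation

// Minimal sketch (not OpenAI’s code): seal a chat transcript with AES-GCM
// before writing it to disk, and open it again on read. In a real app the
// SymmetricKey would be stored in the Keychain, never on disk.

func writeEncryptedTranscript(_ transcript: String, to url: URL, key: SymmetricKey) throws {
    let sealed = try AES.GCM.seal(Data(transcript.utf8), using: key)
    // `combined` packs nonce + ciphertext + auth tag into a single blob.
    guard let blob = sealed.combined else { throw CocoaError(.fileWriteUnknown) }
    try blob.write(to: url, options: .atomic)
}

func readEncryptedTranscript(from url: URL, key: SymmetricKey) throws -> String {
    let sealed = try AES.GCM.SealedBox(combined: Data(contentsOf: url))
    let plaintext = try AES.GCM.open(sealed, using: key)
    return String(decoding: plaintext, as: UTF8.self)
}

// Usage: generate a 256-bit key once and reuse it for the session.
// let key = SymmetricKey(size: .bits256)
```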

The absence of sandboxing, a security practice that isolates an application so that a vulnerability in it cannot spill over into other parts of the system, compounded the problem. Because the ChatGPT app is distributed outside the Apple App Store, it does not have to comply with Apple’s sandboxing requirements, and this loophole allowed it to store data insecurely, exposing users to potential risk.
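
How would you know whether an app is sandboxed? One common heuristic on macOS (undocumented, so treat it as an assumption rather than an official API) is to look for the APP_SANDBOX_CONTAINER_ID environment variable, which the system sets for processes running inside the App Sandbox. A minimal Swift sketch:

```swift
import Foundation

// Heuristic, not a documented API: macOS sets APP_SANDBOX_CONTAINER_ID for
// processes running inside the App Sandbox, so its absence suggests the
// current process is not sandboxed.
let sandboxed = ProcessInfo.processInfo.environment["APP_SANDBOX_CONTAINER_ID"] != nil
print(sandboxed ? "Running inside the App Sandbox" : "Not sandboxed")
```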

That such a basic security oversight occurred in the first place has raised questions about OpenAI’s internal security practices and the thoroughness of its app development process.

The quick fix by OpenAI has mitigated the immediate threat, but it has also highlighted the need for more stringent security measures in the development and deployment of AI applications.

A hacker’s playbook

The second security issue facing OpenAI stems from a breach that dates back to spring 2023, but its repercussions are still being felt today.

In early 2023, a hacker breached OpenAI’s internal messaging systems, gaining access to sensitive information about the company’s operations and AI technologies. According to two people familiar with the incident, the hacker lifted details from discussions in an online forum where employees talked about OpenAI’s latest technologies but did not get into the systems where the company houses and builds its AI.

OpenAI executives revealed the incident to employees during an all-hands meeting at the company’s San Francisco offices in April 2023. However, they chose not to disclose the breach publicly, since no customer or partner information had been stolen and they did not consider the incident a threat to national security. Believing the hacker to be a private individual with no ties to a foreign government, they also did not inform the FBI or other law enforcement.

The ChatGPT Mac app was previously found storing user conversations in plain text, posing a security risk (Image credit)

This decision led to internal concern about OpenAI’s security posture. Some employees worried that foreign adversaries, such as China, could exploit similar vulnerabilities to steal AI technology that might eventually pose a threat to U.S. national security.

Leopold Aschenbrenner, an OpenAI technical program manager, argued that the company was not taking enough measures to prevent such threats. He sent a memo to the board of directors outlining his concerns but was later fired, a move he claims was politically motivated due to his whistleblowing.

Whispered worries

The internal breach and OpenAI’s handling of it have exposed deeper fractures within the company over security practices and transparency. Aschenbrenner’s dismissal has sparked debate about OpenAI’s commitment to security and how it addresses internal dissent: while the company maintains that his termination was unrelated to his whistleblowing, the episode has highlighted tensions within its ranks.

The breach and its aftermath have also underscored the potential geopolitical risks associated with advanced AI technologies. The fear that AI secrets could be stolen by foreign adversaries like China is not unfounded. Similar concerns have been raised in the tech industry, notably by Microsoft President Brad Smith, who testified about Chinese hackers exploiting tech systems to attack federal networks.

Despite these concerns, federal and state laws prevent companies like OpenAI from discriminating based on nationality. Experts argue that excluding foreign talent could hinder AI development in the U.S. OpenAI’s head of security, Matt Knight, emphasized the need to balance these risks while leveraging the best talent worldwide to advance AI technologies.

In spring 2023, a hacker breached OpenAI’s internal messaging systems, gaining access to sensitive information (Image credit)

What’s OpenAI’s power play?

In response to these incidents, OpenAI has taken steps to bolster its security. The company has established a Safety and Security Committee to evaluate and mitigate the risks posed by future technologies. The committee includes notable figures such as Paul Nakasone, the retired Army general who led the National Security Agency and Cyber Command and who has also been appointed to OpenAI’s board of directors.

OpenAI’s commitment to security is further evidenced by its ongoing investments in safeguarding its technologies. Knight highlighted that these efforts began years before the introduction of ChatGPT and continue to evolve as the company seeks to understand and address emerging risks. Despite these proactive measures, the recent incidents have shown that the journey to robust security is ongoing and requires constant vigilance.


Featured image credit: Kim Menikh/Unsplash
