
Decentralized AI Offers New Hope for User Data Security

One of the biggest risks with the increased use of AI chatbots like ChatGPT is the emergence of new threats to user data. But some companies are starting to build decentralized AI systems that they hope will make personal data leaks more difficult. 

AI startup Elna is one such entity. The India-based company lets people create customized AI chatbots on the blockchain that remain “decentralized, transparent” and free to use.

Also read: Will Decentralized Digital ID Curb AI Threats in Web3 and Metaverse? 

Safety on the blockchain

“Elna prioritizes user data protection through the use of Internet Computer Protocol (ICP) canisters [or smart contracts], which are decentralized containers controlled directly by the user’s wallet,” Elna co-founder and CEO Arun PM told MetaNews.

“This framework ensures complete ownership, transparency, and control over data by the user,” he said.

ICP is a blockchain network – the same family of technology that underpins Bitcoin – that aims to improve the efficiency and speed of decentralized data storage.

A major concern with centralized, closed AI platforms such as OpenAI (which apparently began life as a non-profit aiming to build open-source models) or Google is the vulnerability of personal data. In such a set-up, data is typically kept on company servers, often with limited control for the individual.

Arun PM says the use of smart contracts in artificial intelligence fixes this problem, helping to safeguard privacy and user data, such as trading information and chat history, via decentralized storage. This way, people retain control over their own information, he adds.

Arun says Elna does not “store or utilize” user data; instead, users store their own data in smart contract-based wallets. He states that the model “inherently aligns” with the EU’s tough General Data Protection Regulation (GDPR) and other global privacy laws.

“By leveraging blockchain tech, Elna ensures that the integrity and privacy of user data are maintained,” Arun tells us, adding that the platform collects only the data it needs to operate optimally.

“We adopt best practices in data minimization and offer users the tools to manage their data effectively,” he said.
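
The article does not detail Elna’s canister interfaces, but the ownership model Arun describes, in which storage answers only to the user’s own wallet, can be sketched generically. The snippet below is a minimal Python illustration under that assumption; the class and method names are hypothetical and are not ICP or Elna APIs.

```python
from dataclasses import dataclass, field

@dataclass
class UserDataCanister:
    """Toy model of per-user, wallet-gated storage: only the wallet that
    owns the container may read or write its records."""
    owner_wallet: str                          # hypothetical wallet identifier
    records: dict = field(default_factory=dict)

    def put(self, caller_wallet: str, key: str, value: str) -> None:
        # Reject writes from any wallet other than the owner's.
        if caller_wallet != self.owner_wallet:
            raise PermissionError("only the owning wallet may write")
        self.records[key] = value

    def get(self, caller_wallet: str, key: str) -> str:
        # Reads are gated the same way, so the platform operator never
        # needs (or gets) access to the raw data.
        if caller_wallet != self.owner_wallet:
            raise PermissionError("only the owning wallet may read")
        return self.records[key]

# Each user gets their own container keyed to their wallet.
canister = UserDataCanister(owner_wallet="wallet-abc")
canister.put("wallet-abc", "chat_history", "example conversation")
print(canister.get("wallet-abc", "chat_history"))    # succeeds
# canister.get("wallet-xyz", "chat_history")          # would raise PermissionError
```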

Source: Elna AI

What is Elna AI?

Launched in August 2023, Elna describes itself as a “community-driven decentralized AI agent creation platform.” Unlike other AI chatbots, where operators choose the training data, Elna lets users upload their own data to the network and train AI models focused on the topics and applications that interest them.

Once that is done, Elna deploys the user’s artificial intelligence assistant, or AI agent, onto the ICP blockchain, where its knowledge can be continuously expanded by adding new data.

Elna is powered by what the firm calls “canister smart contracts,” which are responsible for training and deploying AI models. It also utilizes what is known as a “vector database” for storing information on-chain.
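
Elna’s on-chain implementation is not spelled out here, but the general idea behind a vector database is simple: uploaded documents are converted into numeric embedding vectors, and at query time the closest vectors are retrieved by similarity. The sketch below illustrates that retrieval step generically in Python; the class name and the toy embeddings are hypothetical and do not represent Elna’s actual system.

```python
import numpy as np

class TinyVectorStore:
    """Minimal in-memory vector store: holds (text, embedding) pairs and
    returns the texts closest to a query vector by cosine similarity."""

    def __init__(self):
        self.texts = []
        self.vectors = []

    def add(self, text: str, embedding) -> None:
        v = np.asarray(embedding, dtype=float)
        self.vectors.append(v / np.linalg.norm(v))   # normalize once at insert time
        self.texts.append(text)

    def query(self, embedding, top_k: int = 3):
        q = np.asarray(embedding, dtype=float)
        q = q / np.linalg.norm(q)
        scores = np.stack(self.vectors) @ q          # cosine similarity against all entries
        best = np.argsort(scores)[::-1][:top_k]
        return [(self.texts[i], float(scores[i])) for i in best]

# Toy 3-dimensional embeddings stand in for the output of a real embedding model.
store = TinyVectorStore()
store.add("FAQ a user uploaded about staking", [0.9, 0.1, 0.0])
store.add("notes on token governance", [0.1, 0.9, 0.2])
print(store.query([0.8, 0.2, 0.1], top_k=1))
```

In a production setting the embeddings would come from a real embedding model and the store itself would live inside a canister, but the lookup logic is the same.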

People log into the system with their wallets. For governance, the platform uses the ELNA token, while Elixir serves as a utility token; both are built on the Internet Computer Protocol.

An avatar created on Elna. Source: Elna AI

ChatGPT data leaks

Big Tech has failed to secure user data in the past. Centralized entities such as Facebook have proven that they play fast and loose with user data; the social media network has leaked personal emails, phone numbers, messages, pictures, and videos.

Data itself has become a product that can be sold, and with more than 24 billion visits to the top 50 AI tools last year, according to Writerbuddy, the risk has never been higher. The unique data sets collected by artificial intelligence companies become a very lucrative commodity when they fall into the wrong hands.

Indeed, in March 2023, a bug came to light that exposed thousands of chat titles, the first messages of new conversations, and payment information belonging to ChatGPT Plus users. OpenAI apologized for the leak, but the company has come under scrutiny for how it protects user data.

As MetaNews previously reported, ChatGPT was temporarily banned in Italy for this very reason. Regulators were concerned about privacy breaches as defined under the GDPR.

Researchers from the University of North Carolina also found that centralized large language models such as ChatGPT or Google’s Gemini continuously leak sensitive data, including financial records, even after that data has been deleted by the chatbot creators.

Decentralized AI: Reimagining the wheel

While data stored on the blockchain is extremely difficult to tamper with, decentralized AI platforms like Elna are not without their own challenges, even with the promise of a future where users control their data.

“The primary challenge in decentralized AI is the lack of a foundational infrastructure and framework that exists in web2, necessitating a reimagining of the wheel for AI in web3,” said Elna CEO Arun PM.

He also believes that existing global regulatory frameworks “are not fully equipped to address the unique challenges and privacy concerns posed by centralized AI systems.” Arun said regulators often lag behind key advances in technology, which makes it harder for them to enact timely data and privacy protection laws.

But what happens in the event that Elna’s decentralized AI model suffers a security breach that leads to the loss of user data? Is that even possible? Arun PM said such a breach could only occur at the level of user-created chatbots, not on Elna’s infrastructure.

“The decentralized nature of the platform allows for precise tracking and attribution of responsibility,” he detailed. “This model ensures that accountability is maintained, with the ability to identify and address breaches effectively.”
