
The boom in artificial intelligence is stretching the chip manufacturing industry to its limit, leading to a shortage of GPUs – the graphics processing units that power machine learning (ML) models.

According to the crypto research and data specialist firm Messari, decentralized compute networks could present a ready-made solution.

Growing demand and GPU requirements

A new report from Messari examines the challenges faced by chip manufacturers such as Nvidia, which are struggling to keep up with demand in the wake of AI mania. High costs and limited chip availability pose concerns for the future deployment of AI applications.

The AI industry is dependent on GPUs, which are “essential for training and querying ML models,” says Messari. The spike in sales has left manufacturers unable to keep up, leading to a shortage.

There may be light at the end of the tunnel, however, as a solution may already exist in the form of decentralized compute networks.

“Decentralized compute networks offer a promising solution by connecting entities with idle computing power, mitigating the GPU shortage,” tweeted Messari on Wednesday.

There are a number of cryptocurrency compute projects that could step in to help satisfy demand.

On the model training and fine-tuning side, Messari points to Gensyn and Together. On the model inference side, projects touted by Messari include Giza, Render, ChainML, Modulus Labs and Bittensor.

More general-purpose compute networks include Akash, Cudos, iExec, Truebit, Bacalhau and Flux.

According to Messari, by harnessing the power of idle GPUs, the demand for high-end GPUs can be alleviated, reducing costs and enhancing accessibility for AI developers.

A whole lotta chips

A recent report by research firm TrendForce reveals that ChatGPT may require over 30,000 GPUs from Nvidia to process its training data efficiently.

TrendForce’s estimations are based on the computational capabilities of Nvidia’s A100 graphics card, priced between $10,000 and $15,000. Nvidia stands to generate substantial revenue, potentially reaching $300 million, owing to the high demand fueled by ChatGPT.
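TrendForce's figures can be sanity-checked with back-of-the-envelope arithmetic. The sketch below multiplies the reported GPU count by the quoted price range; the $300 million revenue figure corresponds to the low end of that range.

```python
# Back-of-the-envelope check of TrendForce's estimate:
# 30,000 A100 GPUs priced between $10,000 and $15,000 each.
gpu_count = 30_000
price_low, price_high = 10_000, 15_000

revenue_low = gpu_count * price_low    # matches the ~$300M figure cited
revenue_high = gpu_count * price_high  # upper bound at the quoted prices

print(f"${revenue_low / 1e6:.0f}M - ${revenue_high / 1e6:.0f}M")
# → $300M - $450M
```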

The demand for GPUs in AI is experiencing exponential growth as ML models become more complex, necessitating larger parameter counts and increased computational power. The advent of transformers and their application in language modeling has further amplified the computational requirements, doubling these demands every 3-6 months.

Political tensions and GPU supply constraints

A Newtown blog on decentralized computing in AI and ML suggests that political tensions contribute to the constraints in GPU supply. Semiconductor production relies on a complex stack of mechanical, physical, chemical, logistical, and commercial factors.

Taiwan, which accounts for 63% of the semiconductor foundry market, holds a stronghold in the global supply chain. However, geopolitical tensions between the US and China create uncertainties and potential threats to the semiconductor industry, highlighting the need for diversified supply chains.

The blog further notes that cloud providers such as AWS, GCP, and Azure offer GPU rentals but face challenges with pricing and availability.

Continued fractious relations between the US and China therefore present a significant opportunity for decentralized compute networks.
