Energy breakthrough needed for AGI, says OpenAI’s Altman

AI In brief OpenAI CEO Sam Altman believes that a breakthrough in energy production is required to advance increasingly capable and power-hungry AI models.

In a panel discussion with Bloomberg at Davos last week, he argued: “There’s no way to get there without a breakthrough.” Altman favors energy sources like nuclear fusion and intends to keep investing in the technology. He has personally invested $375 million in Helion Energy – a nuclear fusion startup that has signed a deal to supply energy to Microsoft within the next few years.

AI models made up of billions of parameters require huge amounts of energy to train. Training OpenAI’s older GPT-3 system reportedly consumed 936 megawatt-hours (MWh), according to AI company Numenta. The US Energy Information Administration estimates that the average household uses about 10.5 MWh per year, meaning training GPT-3 consumed roughly as much energy as 90 households use in a year.
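For a rough sense of how that comparison falls out of the numbers quoted above, here is a minimal back-of-the-envelope sketch in Python – the figures are those reported by Numenta and the EIA, and the variable names are just for illustration:

# Back-of-the-envelope check using the figures quoted above.
gpt3_training_mwh = 936         # reported energy to train GPT-3, per Numenta
household_mwh_per_year = 10.5   # average US household consumption, per the EIA

household_years = gpt3_training_mwh / household_mwh_per_year
print(f"Training GPT-3 used about {household_years:.0f} household-years of electricity")
# Prints: Training GPT-3 used about 89 household-years of electricity (roughly 90)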

Larger models will require even more energy. “We’re not done with scaling [LLMs] – we still need to push up,” Aidan Gomez, CEO of Cohere, declared during another discussion at Davos.

AlphaGeometry represents a breakthrough in AI reasoning

Google DeepMind researchers have trained an AI system to prove geometric theorems at nearly the same level achieved by human mathematics Olympiad gold medalists.

In a paper published in Nature last week, the DeepMind team unveiled AlphaGeometry – a system made up of a language model and a symbolic deduction engine. The former generates potential mathematical strategies to solve a particular problem, while the latter attempts to deduce a final solution.
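The two components work in a loop: the symbolic engine deduces as much as it can from the known facts, and when it gets stuck the language model proposes a new auxiliary construction to unblock the search. A minimal sketch of that propose-and-verify pattern might look like the following Python pseudocode – the object and method names here are hypothetical stand-ins, not DeepMind’s actual API:

# Illustrative propose-and-verify loop; all method names are hypothetical.
def solve_geometry_problem(problem, language_model, symbolic_engine, max_steps=100):
    state = symbolic_engine.initialize(problem)
    for _ in range(max_steps):
        # The symbolic deduction engine derives everything it can from known facts.
        state = symbolic_engine.deduce(state)
        if state.proves(problem.goal):
            return state.proof()  # a complete, verifiable proof
        # When deduction stalls, the language model suggests an auxiliary
        # construction (an extra point, line, or circle) to continue the search.
        construction = language_model.suggest_construction(state)
        state = state.with_construction(construction)
    return None  # no proof found within the step budget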

“With AlphaGeometry, we demonstrate AI’s growing ability to reason logically, and to discover and verify new knowledge,” wrote co-authors Trieu Trinh and Thang Luong. “Solving Olympiad-level geometry problems is an important milestone in developing deep mathematical reasoning on the path towards more advanced and general AI systems.”

Interestingly, the system was trained on 100 million samples of synthetic data depicting random geometric diagrams. AlphaGeometry had to learn the relationships between the points and lines in those shapes in order to work out geometric proofs.

In a test benchmarking the system’s performance, it managed to solve 25 out of 30 geometry questions from Olympiad competitions when given a few hours. For comparison, the average human gold medalist solves about 25.9 of these in the same time.

Google DeepMind has released the code for its model.

Medical AI chatbots may not democratize healthcare

The World Health Organization (WHO) isn’t optimistic that medical AI systems will benefit poorer countries if they are built by orgs in wealthier nations that fail to train them on sufficiently diverse data.

Developers like Google believe that AI can help people who have limited access to healthcare. But officials at the WHO fear the technology may not serve them adequately – especially if those patients aren’t represented in the clinical data used to train these systems.

“The very last thing that we want to see happen as part of this leap forward with technology is the propagation or amplification of inequities and biases in the social fabric of countries around the world,” declared Alain Labrique, the WHO’s director for digital health and innovation, Nature reported.

Labrique and his colleagues argued that the development of medical AI shouldn’t be dominated by large tech businesses, and that their technologies should be audited by independent third parties before release. Developers are currently building models capable of automatically generating clinical notes from meetings, helping doctors diagnose diseases, and more.

Issues such as different accents, languages, or medical histories that aren’t in a system’s training data could throw these models off, leading to lower performance and poor patient outcomes.

Amazon rolls out experimental AI shopping assistant

Consumers can now quiz an AI chatbot about a particular item sold on Amazon via the online marketplace’s mobile app.

The “Looking for specific info” tab, which used to show product reviews and answers to common questions, has been replaced with a large language model, Marketplace Pulse first reported. The system seems to work by ingesting and summarizing information from the product’s listing page.

Netizens can ask questions about the item they’re interested in, but the chatbot doesn’t compare products or suggest alternatives. Nor can it carry out actions like adding items to shoppers’ virtual carts, or disclose pricing history. An Amazon spokesperson confirmed to CNBC that the company was testing the chatbot.

“We’re constantly inventing to help make customers’ lives better and easier, and are currently testing a new feature powered by generative AI to improve shopping on Amazon by helping customers get answers to commonly asked product questions,” explained Amazon spokesperson Maria Boschetti. Like all chatbots, Amazon’s latest system is prone to hallucination – so take what it says with a pinch of salt.

Interestingly, the virtual shopping assistant’s capabilities are quite open-ended. It can reportedly write jokes and poems, and even generate code based on a product’s listed information, across multiple languages. ®
