
Entire tech stack rethink needed to solve AI energy crisis


The Stanford Institute for Human-Centered Artificial Intelligence (HAI) on Wednesday celebrated five years of cat herding, which is to say shepherding the responsible development of machine learning.

Following optimistic introductory remarks from HAI leadership about the plausibility of designing systems that augment people instead of replacing them, the opening panel made clear that artificial intelligence will be increasingly informed by our understanding of human intelligence.

HAI’s goal is to keep people and communities at the center of AI design, but human-centered AI can also be taken as a nod to the increasing relevance of neuroscience.

Simply put, the human brain is orders of magnitude more energy efficient than silicon-based processors, to say nothing of wetware’s evident intellectual superiority and its ability to reason and learn.

The place where computing went wrong was the digital decision … Biology is completely different

“The place where computing went wrong, unfortunately, was the digital decision,” Surya Ganguli, an associate professor of applied physics at Stanford, told scientists, academics, and other experts gathered at the HAI at Five conference today.

“We decided to store information in bits which were in turn stored and flipped by shuttling many, many electrons around through complicated transistor circuits. Every fast and reliable bit flip requires, by the laws of thermodynamics, a large energy expenditure. So we expend a lot of energy in the intermediate steps of the computation.

“Biology is completely different. The final answer is just good enough and all the intermediate steps are slow, noisy, and unreliable. But not so unreliable that the final answer isn’t just good enough for what’s required … So I think we have to rethink the entire technology stack from electrons to algorithms in order to really go from megawatts to watts.”
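For a sense of the gap Ganguli is describing, here is a back-of-envelope sketch in Python – our illustration, not material from the talk. Landauer’s principle puts the thermodynamic floor for an irreversible bit flip at kT ln 2; the per-switch, cluster, and brain power figures below are rough order-of-magnitude assumptions, not measurements.

```python
# Back-of-envelope comparison: the thermodynamic floor for flipping a bit
# (Landauer's bound) versus rough figures for today's hardware and the brain.
# Only Boltzmann's constant is exact; everything else is an order-of-magnitude
# assumption for illustration.

import math

K_B = 1.380649e-23               # Boltzmann constant, J/K
T = 300.0                        # room temperature, K

landauer_j = K_B * T * math.log(2)   # minimum energy to erase one bit: kT ln 2
cmos_switch_j = 1e-15                # ~1 fJ per logic switch, rough modern CMOS figure

print(f"Landauer limit:   {landauer_j:.2e} J per bit")
print(f"CMOS switch:      {cmos_switch_j:.2e} J per bit")
print(f"Gap:              ~{cmos_switch_j / landauer_j:,.0f}x above the floor")

# The 'megawatts to watts' framing: a large training cluster vs. a brain.
cluster_w = 10e6                 # ~10 MW, plausible large GPU training cluster
brain_w = 20.0                   # ~20 W, commonly cited human brain power draw
print(f"Cluster vs brain: ~{cluster_w / brain_w:,.0f}x more power")
```

Even this crude arithmetic puts digital switching several orders of magnitude above the thermodynamic floor, which is the headroom Ganguli’s “megawatts to watts” rethink is aiming at.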

Surya Ganguli, associate professor of applied physics at Stanford, speaking at HAI today

AI’s enormous and growing demand for energy is a critical problem that needs to be solved. So too are other inefficiencies, like the way machines learn compared to children, an area of active study among some of the panel participants.

Jeff Hawkins, founder of Numenta, argued that sensorimotor learning, not today’s AI, will be central to the science of intelligence, artificial and natural.

And toward that end, he announced that the Bill and Melinda Gates Foundation has funded his company’s Thousand Brains project, a general AI framework that aims to reverse engineer the human neocortex. Hawkins said the code will be released as open source.

Merging a machine and a brain is going to be very difficult … I don’t think we want to do that

During the panel discussion at the California university, Hawkins offered some reassurance that the interplay between artificial and human intelligence isn’t about cybernetics – the merger of person and machine. “Merging a machine and a brain is going to be very difficult,” he said.

“But more importantly, I don’t think we want to do that. At least I don’t want to do that.”

Allowing that there are worthwhile uses of direct connections to the brain – to help those who are paralyzed, for example – Hawkins said the focus for such research should be developing tools that help people.

“I don’t think we’re all going to have cables coming out of our heads, but I could be wrong,” he said.

Ganguli sees the science of the mind informing the design of the machine learning technology stack.

“I think the secret is figuring out what these design principles are and instantiating them in our AI systems,” said Ganguli. “Right now we scale transformers just by adding layers, maybe increasing the embedding dimension, and that’s it. We don’t have deep principles there.” ®
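To put numbers on the scaling recipe Ganguli describes, here is a minimal sketch – our illustration, not his – using the standard rule of thumb that a transformer block holds roughly 12·d² parameters (4d² for the attention projections, 8d² for the MLP); the vocabulary size is an assumed placeholder.

```python
# Rough parameter count for a decoder-only transformer, showing how
# "adding layers" and "increasing the embedding dimension" are the two
# main scaling knobs. The 12*d^2-per-block figure is a common rule of
# thumb, not anything specific to the talk.

def approx_params(n_layers: int, d_model: int, vocab: int = 50_000) -> int:
    per_block = 12 * d_model ** 2    # attention (4*d^2) + MLP (8*d^2)
    embeddings = vocab * d_model     # token embedding table
    return n_layers * per_block + embeddings

# Depth scales parameters linearly; width scales them quadratically.
for layers, d in [(12, 768), (24, 1024), (96, 12288)]:
    print(f"{layers:>3} layers, d={d:>5}: ~{approx_params(layers, d) / 1e9:.2f}B params")
```

Depth grows the count linearly while width grows it quadratically; between them, “adding layers, maybe increasing the embedding dimension” covers essentially the whole of today’s scaling playbook, which is Ganguli’s point about the absence of deeper design principles.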
