
Arm gets edgy: Tiny neural-network accelerator offered for future smart speakers, light-bulbs, fridges, etc

Meet the Ethos-U55 and the Cortex-M55 for edge devices

Arm has two new machine-learning processor designs to help developers run specialised AI workloads on chips for IoT devices.

The more powerful of the two, known as Cortex-M55, is a CPU blueprint, while the other, named Ethos-U55, is essentially the makings of an AI accelerator. The Cortex-M55 is based on Arm’s Helium technology, specialised for vector maths. The Ethos-U55, on the other hand, is a novel architecture for Arm and has been described as a micro Neural Processing Unit, or NPU for short.

Both processors are designed to be used together, Thomas Ensergueix, senior director of the IoT & Embedded team at Arm, told The Register. “The microNPU cannot be used on its own; it needs to be paired with a CPU like the Cortex-M55. Together, this system delivers 480x the performance compared to previous Cortex-M generations working on their own.”

Arm only licenses the IP for its cores, so customers looking to build more customisable chips based on the processor designs will have to work with the SoftBank-owned biz to finalise the silicon, which is then manufactured elsewhere. The Cortex-M55 and Ethos-U55 are aimed at IoT devices that require a little more computational power for the inference stage of neural networks.

The microNPU provides this boost by compressing trained models on the fly and converting them to INT8 precision, so they can run speedily and at low power on the chip without relying on the cloud. The processor system is suited for applications like speech recognition or gesture detection in smart speakers and lights.
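To make the INT8 idea concrete, here is a minimal sketch of affine (asymmetric) quantization, the general technique behind running models at 8-bit precision. This is purely illustrative: it is not Arm's or any toolchain's actual implementation, and real converters choose scales and zero-points per tensor or per channel during model conversion.

```python
def quantize_int8(xs):
    """Map a list of floats onto signed 8-bit integers [-128, 127].

    Hypothetical helper for illustration only: computes one scale and
    zero-point for the whole tensor from its min/max range.
    """
    lo, hi = min(xs), max(xs)
    scale = (hi - lo) / 255.0  # float value represented by one INT8 step
    zero_point = round(-lo / scale) - 128  # integer that represents 0.0
    q = [max(-128, min(127, round(x / scale) + zero_point)) for x in xs]
    return q, scale, zero_point

def dequantize_int8(q, scale, zero_point):
    """Recover approximate float values from the INT8 representation."""
    return [(v - zero_point) * scale for v in q]

# Example: quantize some weights, then recover them with small error.
weights = [-0.9, -0.1, 0.0, 0.4, 1.2]
q, scale, zp = quantize_int8(weights)
restored = dequantize_int8(q, scale, zp)
```

Each restored value differs from the original by at most one quantization step (`scale`), which is why INT8 inference can stay close to full-precision accuracy while using a quarter of the memory of 32-bit floats.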

More complicated models that need to process data at higher precision, things like facial or object recognition, will need beefier machine-learning accelerators like Arm’s Ethos-N77 processor.

The initial deep-learning model can be written in any framework as long as it can be converted to run on TensorFlow Lite or PyTorch Mobile for the microNPU. These are both popular frameworks for processing machine-learning workloads on edge devices, something Arm calls “endpoint AI”, such as microcontrollers, cameras or sensors.

“Enabling AI everywhere requires device makers and developers to deliver machine learning locally on billions and ultimately trillions of devices,” said Dipti Vachani, senior vice president and general manager, Automotive and IoT Line of Business, at Arm.

“With these additions to our AI platform, no device is left behind as on-device ML on the tiniest devices will be the new normal, unleashing the potential of AI securely across a vast range of life-changing applications,” she said.

Customers should expect their silicon to arrive early next year. ®


Source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2020/02/10/arm_cortex_m_ai_accelerator/
