The critical role of memory for the adoption of AI in industrial applications | IoT Now News & Reports

Artificial Intelligence (AI) has burst onto the scene in a big way, and the technology is diffusing out of data centres and into a wide range of distributed locations, enabled by more capable processors and more innovative algorithms. But other enabling technologies will need to keep pace or risk becoming bottlenecks.

The fast-evolving demands of AI applications, particularly at the edge of networks and on-board connected devices, will place ever greater demands on the memory that supports those applications, David Henderson, director of the industrial segment at Micron Technology, tells Jim Morrish.

Jim Morrish: Can you tell me a little about your role at Micron, and the trends that you are seeing in the AI space?

David Henderson: I lead Micron’s industrial and multi-market segment, focusing on diverse industrial applications using our broad portfolio of memory and storage solutions. It’s an extremely fragmented space and includes applications such as video security, factory automation, medical devices, retail applications, transportation, aerospace and defence applications to name a few.

In my role, I see that AI is gaining strong traction in the industrial space, including at the edge and on-board devices. The momentum is such that it is clear that AI will be found on-board nearly all industrial devices eventually. Right now, we’re still in the foothills of this full market potential, but even now AI is rapidly being adopted for core industrial and manufacturing equipment.

Micron’s mission is to keep pace with the latest processors and ASICs coming to market, ensuring that Micron’s memory product portfolio develops in line with the needs of the next generations of processors and AI accelerators, and the more sophisticated AI systems that they will enable in new contexts.

JM: So AI processors and memory must evolve hand-in-hand, to most effectively unleash the potential of new and innovative AI algorithms?

DH: Memory is a crucial part of any AI solution. Historically, most AI processing has happened in cloud data centres, but increasingly it is diffusing out to the edge and on-board internet of things (IoT) and other connected devices. As AI migrates to the edge, so does demand for high-performance memory at those locations. Right now, we’re seeing a procession of AI solution types out to the edges of networks, starting with inference and evolving to training at the edge.

The benefits that such applications can unlock can be profound. AI at the edge can significantly reduce the communications bandwidth required to support AI devices, and at the same time enable real-time feedback to any connected system of those devices. In many circumstances these kinds of changes can both reduce costs and increase revenues for any use cases that are enabled by AI.

And there’s more to come. Generative AI has not yet been widely deployed at the edge, certainly not in the context of industrial equipment, but the time will come when it will be. When that happens, the demands placed on memory will increase significantly, both in the density required to store reference data and in the bandwidth over which that data must be exchanged with processors.

Unless we plan ahead, we may find ourselves in a situation where memory for distributed IoT and other connected devices becomes a constraint. So it’s critical to focus on the emerging needs of this segment, and to work with specific constraints related to growing model sizes, increased bandwidth requirements, lower power consumption, and driving toward leading edge technology nodes.

JM: How do these developments impact Micron?

DH: AI is one of the main drivers of Micron’s continued transformation. Fundamentally, there’s a critical need to match the kinds of memory solutions that we provide to the wide diversity of potential use cases.

Take, for instance, video analytics for security cameras. A low-level solution might encompass basic detection and classification. Meanwhile, a more sophisticated solution could include facial recognition and behavioural analysis, and the most sophisticated solutions (as of today) could extend to contextual analytics. These are all AI solutions, but the difference in the computational power needed to support them, in terms of tera operations per second (TOPS), is significant. The need to keep up with faster processors drives corresponding variations in memory bandwidth requirements, ranging from around 4x that of a standard video camera at the lower end up to 16x for today’s more sophisticated security video analytics solutions.
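The scaling described above can be sketched with some simple arithmetic. This is purely illustrative: the 4x and 16x multipliers come from the interview, but the 2 GB/s baseline figure for a standard camera and the intermediate 8x multiplier are assumptions made here for the example, not Micron specifications.

```python
# Illustrative scaling of memory bandwidth with video-analytics sophistication.
# Multipliers at the endpoints (4x, 16x) are from the interview; the baseline
# bandwidth and the mid-tier 8x value are hypothetical, for illustration only.

BASELINE_GBPS = 2.0  # assumed bandwidth of a standard video camera, GB/s

tiers = {
    "basic detection and classification": 4,     # low-end multiplier (cited)
    "facial recognition + behavioural analysis": 8,   # assumed midpoint
    "contextual analytics": 16,                  # high-end multiplier (cited)
}

for name, multiplier in tiers.items():
    required = BASELINE_GBPS * multiplier
    print(f"{name}: ~{required:.0f} GB/s ({multiplier}x baseline)")
```

The point of the exercise is that the most sophisticated tier needs four times the bandwidth of the entry-level AI tier, before any absolute numbers are fixed.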

This kind of video analytics application is just one example. There are other AI applications that are intrinsically less complex than video security applications, and others that are potentially more complex. For instance, when machine vision analytics are deployed to support quality assurance on a manufacturing production line, there is a potential requirement for local supervised learning on-board, or adjacent to, those cameras. That is a whole new level of sophistication, with associated processing and memory bandwidth requirements. Micron prioritises working with customers to understand their compute needs and walking them through the nuances of memory technologies to optimise their solutions. The specifications for memory density, power consumption and memory bandwidth throughput are critical to individual use cases, and Micron invests in research and development to cross-optimise these parameters.

JM: Looking to the future, how do you think that this space will evolve?

DH: Well, we will certainly see a significant and sustained uptick in the deployment of AI, both as an extension of traditional industrial systems and through innovative adoption into new use cases that we’ve not seen in the past. Leveraging generative AI and large language models (LLMs) at the edge as part of industry’s digital transformation will only continue to highlight the need for more data, where memory and storage are critical components.

In a vast array of situations, AI can enable higher yields, more uptime, greater efficiencies and higher quality. It can really make a difference across diverse sectors such as retail, transport and telehealth, enabling better results at lower cost and with fewer resources.

The potential for AI is huge. Even what’s been done today has had a profound impact, but it’s only the tip of the iceberg. It is truly exciting to see the role that memory plays in unlocking the future benefits associated with AI.

Comment on this article via X: @IoTNow_
