Improving PPA With AI

AI/ML/DL is starting to show up in EDA tools for a variety of steps in the semiconductor design flow, many of them aimed at improving performance, reducing power, and speeding time to market by catching errors that humans might overlook.

It’s unlikely that complex SoCs, or heterogeneous integration in advanced packages, ever will be perfect at first silicon. Still, the number of common errors, and even some unexpected anomalies, can be greatly reduced if AI/ML/DL tools know what they’re looking for, no matter how dense the array of transistors or how complex the mix of compute elements. Moreover, AI/ML/DL can help identify where dynamic power density will cause problems and offer options for thermal dissipation.

Consider physical verification, for example, which is the last check before design sign-off. “We’re at a junction where we are starting to see the need for some artificial intelligence techniques,” said John Ferguson, director of product management at Siemens Digital Industries Software. “But the idea of artificial intelligence is a little different. Our whole existence is about being sure that we’re finding every single problem and every single error. One of the limitations of artificial intelligence is that it’s all about making fast and pretty accurate guesses, and pretty accurate is not good enough. But there are some things that we can do. When you’re doing physical verification, it’s not a ‘one and done.’ Much as we’d like to get to the concept of correct by construction, we do make mistakes, and we fix them and run again and find new mistakes.”

AI can be trained to spot those mistakes across multiple designs, which in turn can help speed up the design process, reducing the need for some guard-banding and improving the whole PPA equation. It also can be used to optimize performance by analyzing more data faster.

“Using AI-based design technology, our customers have indicated they were able to achieve significant reduction in power — as much as 25% or more compared to manual tuning,” said Stelios Diamantidis, senior director of artificial intelligence solutions at Synopsys. “This kind of improvement over already-optimized designs is amazing.”

While EDA companies haven’t talked much about AI until recently, they have been exploring where and how it can best be applied for the better part of a decade. In 2020, an experiment at the 57th Design Automation Conference aimed at optimizing the figure of merit (FoM), defined as “the weighted sum of the normalized performance metrics,” showed that deep reinforcement learning (deep RL) tools could outperform humans at some tasks.

During the six-hour experiment, a graph convolutional neural network-based reinforcement learning (GCN-RL) algorithm went up against other techniques, including conventional black-box optimization methods (Bayesian optimization, evolutionary algorithms), random search, and a human expert designer with five years of experience. The experiment concluded that RL with transfer learning achieves a better FoM. In other words, an AI-based tool can make transistor sizing and design porting more effective and efficient. (See Table IV of the paper.)
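As a rough sketch of that optimization target, the snippet below computes a figure of merit as a weighted sum of normalized performance metrics, which is how the experiment defined it. The metric names, weights, and normalization bounds are illustrative placeholders, not values from the experiment.

```python
# Minimal sketch of a figure of merit (FoM): a weighted sum of normalized
# performance metrics, where higher is better. Metric names, weights, and
# bounds below are hypothetical examples for an amplifier-like circuit.

def normalize(value, worst, best):
    """Map a raw metric onto [0, 1], where 1 corresponds to the best value."""
    return (value - worst) / (best - worst)

def figure_of_merit(metrics, weights, bounds):
    """Weighted sum of normalized metrics for one candidate design."""
    return sum(weights[name] * normalize(metrics[name], *bounds[name])
               for name in metrics)

# Score one candidate transistor sizing.
metrics = {"gain_db": 62.0, "bandwidth_mhz": 180.0, "power_mw": 1.4}
weights = {"gain_db": 0.4, "bandwidth_mhz": 0.4, "power_mw": 0.2}
bounds  = {"gain_db": (40.0, 80.0),          # (worst, best)
           "bandwidth_mhz": (50.0, 300.0),
           "power_mw": (5.0, 0.5)}           # lower power is better

print(f"FoM = {figure_of_merit(metrics, weights, bounds):.3f}")
```

An optimizer, whether an RL agent, a Bayesian method, or a human, then searches the sizing space for the candidate that maximizes this score.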

Today, many companies, including Google, NVIDIA, Synopsys, Cadence, Samsung, and Siemens, either have started employing AI in chip design or anticipate doing so. So how will AI change the chip design landscape?

Chip design process and challenges
Until recently, chips were designed by humans using various automated design tools in circuit and logic design, routing, layout, simulation, and verification to minimize errors while reducing time and cost. The process can be quite tedious and time-consuming.

Fig. 1: Source: einfochips

There are many steps in designing a chip. The process starts with a specification or architectural definition, followed by logic design and RTL synthesis. Once the circuitry has been verified, floor planning, routing, and verification take place. Finally, an IC layout file in a digital format, such as the commonly used GDSII (graphic design system) file, is sent to the foundry.

The three biggest challenges design teams face today are rising complexity, rising costs across the entire supply chain, and ongoing pressure to shrink the delivery time to fit increasingly narrow and domain-specific market windows.

Chip design complexity has increased over the years as more and more transistors are squeezed into a given die area. The transistor count has gone from thousands to billions. Die sizes are often at reticle size for single chips, and transistor density increases at each new node. In addition to logic and circuitry, chip designers have to be concerned with structure, geometry, proximity effects, and use cases.

Fig. 2: The semiconductor design space. Source: Cambrian AI Research

Placing billions of transistors and an assortment of interconnects, memories, I/Os, and power management systems in a small area is a daunting task. Each decision about where to place the components or the blocks may impact the chip’s performance and reliability. Expert designers rely on experience and know-how to achieve the lowest possible power consumption and highest possible performance in the smallest possible die area. Optimizing PPA is a constant challenge, with many tradeoffs.

It’s also expensive, and the cost is going up significantly. Handel Jones, CEO of International Business Strategies (IBS), said the average cost of designing a 28nm chip is $40 million. “By comparison, the cost of designing a 7nm chip is US$217 million and the cost of designing a 5nm device is US$416 million. A 3nm design will cost up to US$590 million.”

And that’s just the design side. The constant pressure on chip manufacturers to introduce smaller die-size products in less time can increase the number of errors, not all of which can be easily fixed.

What AI brings to the table
AI can be used to help manage complexity in chip design, reducing errors and shortening the development cycle on multiple levels. For example, traditional routing tools can automate about 90% of the work, but an experienced designer is still needed to finish the last 10%, which can become the bottleneck of the design process. By allocating resources to the most challenging problems, AI can add efficiency throughout the design process.

“It is all about efficiency,” said Steven Woo, fellow and distinguished inventor at Rambus. “Essentially, human designers use tools to achieve optimization, but AI can do it faster and in fewer cycles. The AI engine can be fed preset rules to achieve better inference. Applying reinforcement learning, the AI-based design tools will get better and better, helping designers achieve almost error-free solutions over time and optimize PPA better than humans alone can. Additionally, because speed is everything here, it is important to also consider chip-to-chip memory speed, as AI needs to access a large database quickly.”

Others are equally enthusiastic. “AI will automate chip design even further, especially in the layout process,” said John Stabenow, product engineering director for the IC Design group at Siemens Digital Industries Software. “It already has been demonstrated that machine learning increases productivity in analog circuit design. In layout, machine learning will be used to suggest optimal device placement in finFET nodes to minimize interconnect parasitics. When a chip design involves MEMS, such as accelerometers and gyroscopes, AI can be used in a parametric design flow to co-design the IC and the MEMS device. This will allow designers to integrate MEMS, IC, and software a lot quicker than traditional design flows, making designers’ lives a lot easier.”

How AI learns
AI machines can do a much better job of pattern recognition and matching, and they do it much faster than humans. AI also does not start learning from zero. In most cases, the AI agent will be pre-trained on a large amount of data, such as 15,000 sample floor plans. At that point, the AI machine already has gained some intelligence.

Additionally, an AI machine will use reinforcement learning (RL) to become even smarter. RL is a machine learning technique in which the agent learns by trial and error from its own experience as it interacts with its environment.

The process uses a reward-and-punishment model. The AI model starts with an initial state (input) and delivers certain results (output), which the human designers then reward or punish. The model keeps learning and converges on the results that maximize the rewards it receives. When a human designer accepts a suggestion from the AI model, the model treats that as a reward. Conversely, when the suggestion is rejected or overruled because the designer believes a better solution is available, the model treats that as a punishment. The RL process continues, and over time the AI model keeps improving.
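The loop below is a minimal sketch of that reward-and-punishment idea, using a simple epsilon-greedy scheme in which a simulated designer’s accept/reject decision is fed back as the reward. The design choices and the simulated designer are stand-ins for illustration; a production tool would use a much richer state, action space, and learning algorithm.

```python
import random

# Sketch of the accept/reject feedback loop: the tool proposes a design
# choice, a (simulated) designer accepts or rejects it, and acceptance is
# fed back as a reward. All names and probabilities here are hypothetical.

choices = ["placement_A", "placement_B", "placement_C"]
value = {c: 0.0 for c in choices}    # learned preference for each choice
counts = {c: 0 for c in choices}

def designer_accepts(choice):
    """Stand-in for the human designer, who happens to prefer placement_B."""
    accept_prob = {"placement_A": 0.2, "placement_B": 0.8, "placement_C": 0.4}
    return random.random() < accept_prob[choice]

for step in range(200):
    # Mostly exploit the best-known choice, occasionally explore others.
    if random.random() < 0.1:
        proposal = random.choice(choices)
    else:
        proposal = max(choices, key=value.get)

    reward = 1.0 if designer_accepts(proposal) else -1.0   # accept = reward, reject = punishment
    counts[proposal] += 1
    # Incremental average: the estimate drifts toward the observed rewards.
    value[proposal] += (reward - value[proposal]) / counts[proposal]

print("Learned preferences:", {c: round(v, 2) for c, v in value.items()})
```

After enough iterations, the option the designer tends to accept accumulates the highest value, so the tool proposes it more often — the “better and better” behavior described above.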

“Machine learning is a subset of AI that refers to a machine’s ability to think without being externally programmed. Traditional devices are programmed with a set of rules for how to act, which typically takes the form of if-then-else statements. But machine learning enables devices to continuously think about how to act based on the data they take in,” said Ravi Subramanian, senior vice president and general manager at Siemens Digital Industries Software. “In the context of IC design, for AI to learn, three things are necessary. First, a pool of data must be available. This is the data lake, and it can take the form of RTL IP, GDSII, C code, or SPICE netlists. Second, there must be a model that enables the AI-based system to adapt, learn, improvise, and generalize, so it can make predictions from new inputs that are not in the data lake. And third, a decision function based on some metric must exist, and a reward mechanism based on achieving that metric must be reliable.”
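A bare-bones skeleton of those three ingredients might look like the following, with a toy data lake, a trivial averaging model standing in for a real learner, and a reward function tied to a power-budget metric. The data values, the model, and the 2.5mW budget are all hypothetical.

```python
# Three ingredients: (1) a data lake of prior design artifacts, (2) a model
# that generalizes to inputs not in the data lake, and (3) a decision/reward
# function tied to a metric. Everything below is a placeholder illustration.

data_lake = [
    {"features": [0.2, 0.7], "measured_power_mw": 3.1},   # e.g. from past designs
    {"features": [0.6, 0.1], "measured_power_mw": 1.8},
]

def train_model(samples):
    """Fit a trivial averaging 'model' on the data lake (stand-in for a real learner)."""
    avg = sum(s["measured_power_mw"] for s in samples) / len(samples)
    return lambda features: avg    # predicts the same value for any new input

def reward(predicted_power_mw, budget_mw=2.5):
    """Decision function: reward designs predicted to fit the power budget."""
    return 1.0 if predicted_power_mw <= budget_mw else -1.0

model = train_model(data_lake)
new_design = [0.4, 0.3]            # an input that is not in the data lake
print(reward(model(new_design)))   # 1.0 if predicted power meets the budget
```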

Subramanian noted that AI does not make decisions per se. “AI is about a system’s ability to adapt and improvise in a new environment, to generalize its knowledge and apply it to unfamiliar scenarios. This definition is taken from Francois Chollet, an AI researcher at Google.”

Measuring the outcome of using AI
Unlike an automobile, which has a standard way to measure fuel efficiency, chip design has no standard way to measure the outcome of using AI that is accurate or even particularly useful. Each design is unique, and the tools used vary. Nevertheless, chipmakers and EDA companies have reported numerous productivity improvements from AI-based chip design tools.

Google’s floor-planning work is a case in point. In a chip floor-planning exercise, AI accomplished in less than six hours what took a team of physical design engineers, using the latest design tools, a few months of intensive work. Both approaches delivered manufacturable chips with optimized PPA, but there was a big difference in productivity.

“Adding AI to the chip design process definitely will increase its efficiency,” said Rod Metcalfe, product management group director in the Digital & Signoff Group at Cadence. “To help achieve better PPA, AI based on a reinforcement learning engine can further optimize the design flow and the floorplan. For example, a 5nm mobile CPU designed using AI can see a 14% performance increase, a 7% leakage power improvement, and a 5% density improvement. This can be significant.”

Samsung reported that by using AI in one design, the engineering team could choose to increase performance by 15%, reduce die size by 18%, or increase performance by 8% with a smaller die.

The future of AI in chip design
Squeezing 1 billion transistors into a die is unthinkable to most people. But in June 2021, Synopsys reported that the largest chip built so far has 1.2 trillion transistors and 400,000 AI-optimized cores on a 46,225mm² die. Designing chips of this size is almost impossible for human designers using traditional design tools, let alone improving their PPA.

“The benefits of using AI to accelerate and optimize chip design are now a given, at least as far as the major chip vendors are concerned,” said Karl Freund, founder and principal analyst at Cambrian AI Research. “Systems like Synopsys DSO.ai are saving companies time and money, and producing chips with lower power, higher performance, and less area. Now the industry is turning its attention to the next steps beyond optimizing physical designs, such as system-level optimization, software/algorithm optimization, and even design verification. The entire industry will benefit from these innovations, as will the consumers of faster, less power-hungry, and lower-cost silicon.”

How will the market behave? The AI chip design market is heating up, and every EDA vendor is now either considering how to apply AI, or figuring out where else it can be used.

Fig. 3: Designing wafer scale chips. Source: Cerebras Systems

Sorting through complexity is critical, whether it’s on the same die, across multiple dies, or, in the case of Cerebras Systems, across an entire wafer. Cerebras’ second-generation, 7nm-based Wafer Scale Engine 2 (WSE-2) has 2.6 trillion transistors and 850,000 AI-optimized cores. It is now the world’s largest chip for AI applications, and it is roughly the size of a dinner plate. By comparison, the largest GPU has only 54 billion transistors. Cerebras’ chip requires 40GB of on-wafer memory to support the AI computations. To design such a chip, AI-based chip design tools are required.

More opportunities
This is just the beginning of where AI can be used in designing chips. For example, as chips are used in more safety- and mission-critical applications, they also are required to be more secure. Integrating security into a large, complex chip is a challenge, and one that requires understanding of an entire system on a chip or in a package. AI is especially good for setting up matrices of possible attack vectors across the chip or chips within an advanced package.

There are myriad other applications, and new tools and innovations are expected to show up in the near future. In most cases, they will follow the same pattern. But it’s also important to remember this isn’t a universal fix, and one size does not fit all.

“As a starting point, consider the definition of the problem you are trying to solve and the definition of what it means to say you are using AI,” said Siemens’ Subramanian. “An IC designer must first see whether there is a problem that can be tied to a system’s ability to adapt, learn, and generalize knowledge and rules, and to apply them to an unfamiliar scenario. Understanding whether a problem is well-suited to AI is the first, and perhaps the most important, phase of the entire process.”
