Is AI Improving A Broken Process?

Significant improvements in verification are possible, but getting critical mass behind them will be difficult.

Verification is fundamentally the comparison of two independently derived models to determine whether they express any differing behaviors. One of those models represents the intended design, and the other is part of the testbench.
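As a purely illustrative sketch, not tied to any particular tool or methodology, that comparison can be thought of as driving the same stimulus into both models and flagging any difference in observed behavior:

```python
# Illustrative sketch only: verification as the comparison of two
# independently derived models. The function name and model interfaces
# are hypothetical.
def find_mismatches(stimulus, design_model, reference_model):
    """Drive the same stimulus into both models and collect any vectors
    for which their observed behavior differs."""
    mismatches = []
    for vec in stimulus:
        if design_model(vec) != reference_model(vec):
            mismatches.append(vec)
    return mismatches
```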

In an ideal flow, the design model would be derived from the specification, and each stage of the design process would add further detail until the final design is obtained. Information should never have to be entered twice, because duplication is a common source of mistakes and of models drifting out of sync with each other.

That flow should also include a clear path for verification, driven by the requirements document. Requirements are what should drive the entire verification flow. We see a small part of this today in verification planning, which lists the features of a design and how each is to be verified. But that is an ad-hoc process, and it leads to an even more ad-hoc metric called functional coverage.

If both the specification and the requirements were captured as formal models, it would be much easier to see how AI and ML could be used to understand the relationships between them, and thus to provide direct help with tasks such as filling coverage goals and debugging. That, in turn, would lead to a better overall process. It would also satisfy today's high-reliability flows, which require full traceability from requirements down to implementation details and tests.

We need to go back and look at utilizing formalized requirements. Each property, or each test run, would be directly associated with demonstrating that a particular requirement is met, and each requirement would carry its own definition of what complete means. If a requirement is dropped, it becomes clear which tests can be omitted. When an agile approach is used, requirements can be added as the product matures, with small incremental batches of tests added to the regression suite.
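As a hedged sketch of what such requirements-driven bookkeeping could look like (the class and field names here are hypothetical, not taken from any existing tool), each requirement carries its own tests, properties, and completion criterion, so dropping or adding a requirement maps directly onto the regression suite:

```python
# Hypothetical sketch of requirements-driven verification bookkeeping.
# Class names and fields are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Requirement:
    req_id: str                                        # e.g. "REQ-042: FIFO never overflows"
    tests: set[str] = field(default_factory=set)       # simulation tests tied to this requirement
    properties: set[str] = field(default_factory=set)  # formal properties tied to this requirement
    complete_when: str = "all properties proven and all tests passing"

class RequirementsDrivenPlan:
    def __init__(self):
        self.requirements: dict[str, Requirement] = {}

    def add(self, req: Requirement) -> None:
        """Agile-style: a new requirement brings its own small batch of tests."""
        self.requirements[req.req_id] = req

    def drop(self, req_id: str) -> set[str]:
        """Dropping a requirement makes it clear which tests can be omitted,
        provided no remaining requirement still relies on them."""
        dropped = self.requirements.pop(req_id)
        still_needed = set().union(*(r.tests for r in self.requirements.values()))
        return dropped.tests - still_needed
```

Traceability of this kind is what high-reliability flows already demand; once it is formalized, it also gives an AI tool something concrete to reason over.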

It should then be possible to analyze which parts of a design act to fulfill each requirement, potentially finding redundant activity that does not contribute to any required outcome and is therefore a candidate for power savings.

Functional coverage was a significant advance at the time it was introduced. Without it, constrained random verification would not have been as successful as it has been. But it is also problematic, in that it has no direct association with actual functionality. It is a model that is unconnected to anything else, and that unnecessary indirection will make it harder for machine learning to do a good job. Replacing it with something directly derived from, or associated with, requirements would make coverage gaps easier to locate and fill, and would provide better tracking of the costs associated with particular requirements.
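As an illustrative contrast (again with hypothetical names, not an existing API), coverage that is tagged with the requirement it serves turns a coverage gap into a statement about an unmet requirement rather than about an unconnected coverage model:

```python
# Hypothetical sketch: coverage bins tagged with the requirement they verify,
# so an unhit bin points directly back at a specific requirement.
from collections import defaultdict

class RequirementCoverage:
    def __init__(self):
        # requirement id -> {coverage bin name -> hit count}
        self.bins = defaultdict(lambda: defaultdict(int))

    def define(self, req_id: str, bin_name: str) -> None:
        """Register a bin against a requirement, initially with zero hits."""
        self.bins[req_id][bin_name] += 0

    def sample(self, req_id: str, bin_name: str) -> None:
        """Record that the behavior covered by this bin was observed."""
        self.bins[req_id][bin_name] += 1

    def gaps(self) -> dict:
        """Unhit bins grouped by requirement, so the cost of closing each gap
        can be attributed to a specific requirement."""
        return {req: [b for b, hits in bins.items() if hits == 0]
                for req, bins in self.bins.items()
                if any(hits == 0 for hits in bins.values())}
```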

But the semiconductor industry hates disruption to the design flow. Disruptive changes have been attempted many times and failed, especially when they involve changing levels of abstraction. Over time this will only become harder, because shift-left is creating ever more interlinking between tools, which means any change touches a greater number of them. Individual tools are developed by separate groups within each EDA company, and each group is responsible for the success of its own tools, so a change of this type would require full corporate backing. For that to happen, there first has to be sufficient demand, and that, too, is increasingly unlikely.

Does that mean we are doomed? Not at all. Many changes in the past came about when it became clear that the gains were significant enough to outweigh the negative impact of the disruption. Could AI be the first step in that direction? If AI can act as the link between requirements and the verification flow, then over time we might see new languages or tools emerge that strengthen those AI tools, and that in itself could provide the momentum needed to slowly replace the broken system we have today.

Perhaps the more pertinent question is whether this will actually happen, or whether the immediate gains that AI provides will satisfy today's needs, leaving no incentive to look further. That is essentially the incremental improvement path, and while the eventual gains may be a fraction of what they could have been, those incremental improvements may be enough to quell any appetite for disruption.

I would place my money on the latter outcome, but I would love to be proven wrong.

Brian Bailey

Brian Bailey is Technology Editor/EDA for Semiconductor Engineering.
