Verification Methodologies Evolve, But Slowly

Semiconductor Engineering sat down to discuss digital twins and what is required to develop and verify new chips across a variety of industries, such as automotive and aerospace, with Larry Lapides, vice president of sales for Imperas Software; Mike Thompson, director of engineering for the verification task group at OpenHW; Paul Graykowski, technical marketing manager for Arteris IP; Shantanu Ganguly, vice president of product marketing at Cadence; and Mark Olen, director of product management at Siemens EDA. What follows are excerpts of that conversation. Part 1 of this discussion is here. Part 2 is here.

SE: Systems companies have introduced the silicon design companies to the benefits of the digital twin. Will that become a part of our verification methodology?

Ganguly: The defense industry has embraced digital twins. I do see commercial companies more and more talking about digital twins. I have talked to customers who are doing digital twins, and it’s a very powerful concept. But how can I develop a digital twin over three years if I’m in a commercial space? I can’t. Products are going to get replaced much more quickly.

Lapides: The automotive industry has traditionally had a lot longer design cycles. But now they’re moving to more complex SoCs and they’re trying to significantly compress their schedules. Is that going to really be a problem for those automotive SoCs?

Olen: We used to do market analysis in vertical segments: systems people, semiconductor people, silicon people. Now you’ve got automotive and systems people who behave like semiconductor people. As we move into 3D-IC, with stacking and packaging, you’re going to have silicon people trying to behave like systems people. Each of those industries has strengths. Silicon has a high degree of sophistication, but they also have time-to-market issues. Now it’s all being brought together, and a lot of people are dealing with problems. Even the sophisticated people are dealing with problems they haven’t faced in the past.

Ganguly: The gotcha is that if you look at a modern car or a jet fighter, conceptually these are systems of systems. If I’m building a jet fighter, I can build it over an 8- or 10-year program. The design cycle is like a decade. It goes through a massively long phase of building on the previous model, and so on. With some of these newer cars, they put out a new variation in about a year. Okay, they had a baseline, but getting a brand new car out in a year is a fantastic statement. There is an urgency where the landscape is evolving. Just as gasoline automobiles evolved in the 1920s, there were plenty of players and then there was consolidation. But with building a system of systems, even if it’s a major tweak in a year, there are challenges. There are going to be post-deployment issues, and not all of it is going to be software.

Graykowski: In the ADAS market, companies that aren’t used to building chips for cars are now getting involved with that. All of a sudden, they are confronted with ISO 26262. It’s more than just taking the training. It is how you apply that to an SoC. Yes, we’ve applied it at the car level with airbags and things like that. But how do you get inside the SoC? And what about this AI block? You need traceability through the whole SoC, from the start through end of life. It’s a big change. Engineers are not ready for that, and they’re going to just see it as more paperwork and more burden.

Ganguly: The traditional consumer silicon companies that are coming into automotive don’t understand. They really don’t want to buy into the level of certification, or it is difficult for them to buy into it. You need to jump through all of these hoops before you can call your product done. And then there are companies coming in that look at the scale of what has to be done, and they don’t like it, but it has to be done.

Lapides: ISO 26262 is maybe analogous to 30 or 35 years ago when fabs had to go to ISO 9000. And having gone through that certification process, that was a huge disruption, with the necessary training, and it caused significant delays. But we had to get that done. This is possibly an interesting discontinuity that we’re going to have to go through.

Ganguly: But this is a one-time deal, and everybody will get on the bandwagon and be done. That’s what you’re thinking?

Lapides: Maybe, but you talked about the one-year timeline, and then there is the democratization of silicon. There are new people coming in, and maybe they’re naive because they think they can get things done. They can, but they don’t realize the investment that needs to be made in people, tools, and training those people. It’s not just hiring people. There’s training involved here to get their expertise up to the level needed for success.

SE: The notion of software has come up a few times. As the world moves toward domain-specific computing, should we be talking about hardware/software co-design? We’ve mentioned a couple of times the need for running real software on hardware, but is there a methodology emerging for hardware/software co-verification? And who’s going to take responsibility for that?

Ganguly: We will see this increasingly. For example, Arm has a set of tests that ensure your product is going to be able to boot Linux and essentially be at a prescribed level of verification. With this, you can clean your design very early. That’s a good example. And similarly, somebody can take our system VIPs and run this content to make sure that when all of the IP functionality comes together, everything that’s needed to be able to boot Linux is there. This is fundamentally a very good process. It’s basically a methodology to make sure your RTL is software-ready. Can we extract what it takes to be able to run software, distill that down to micro-ops and port sequences that you’re running much earlier, even pre-bare metal, as part of your verification suite? You’ll see more and more of this. More people will cotton on to this, and more products will evolve.
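The idea of distilling "what it takes to run software" into small directed checks that run against an early model can be sketched in a few lines. This is a hedged illustration only: the DesignModel stub, register addresses, and check names are all invented, standing in for whatever early functional model and boot-critical resources a real project would have.

```python
# Hypothetical "software-readiness" suite: distill OS-boot requirements
# into small directed checks run against an early model of the design,
# long before full software bring-up. Everything here (the DesignModel
# stub, addresses, check names) is invented for illustration.

class DesignModel:
    """Minimal stand-in for an early functional model of the SoC."""
    def __init__(self):
        self.regs = {}

    def write(self, addr, value):
        self.regs[addr] = value

    def read(self, addr):
        return self.regs.get(addr, 0)

def check_timer_tick(m):
    # Program the (hypothetical) system timer and confirm it holds the value.
    m.write(0x1000, 0xFFFF)
    return m.read(0x1000) == 0xFFFF

def check_irq_enable(m):
    # Enable a (hypothetical) interrupt line and read it back.
    m.write(0x2000, 0b1)
    return (m.read(0x2000) & 0b1) == 1

READINESS_CHECKS = [check_timer_tick, check_irq_enable]

def run_readiness_suite(model):
    """Return {check_name: passed} for every boot-readiness check."""
    return {c.__name__: c(model) for c in READINESS_CHECKS}

if __name__ == "__main__":
    print(run_readiness_suite(DesignModel()))
```

The point of the shape, not the content: each check encodes one boot prerequisite, so the same suite can be rerun unchanged as the model matures from a stub toward RTL.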

Thompson: The industry needs a standard process for doing this. Organizations like Arm have so much experience in this that they will be able to build this methodology themselves and be successful. Smaller organizations, newer entrants, will need some help. I’ll just use UVM as an example. The UVM standard is there, and it is well documented and well supported. You can hire people off the street who know it, and deploy it. This is true because there’s a set of rules for how to do this. This is how you build up a block or a testbench. And this is how you can step that up to the system level, and so on and so forth. For hardware/software co-verification, co-validation, co-development, whatever you want to call it, we could use a similar methodology to provide a framework for teams other than the Arms of the world to be successful.

Ganguly: The problem is that conceptually it will depend on the functionality you’re looking at, and you can’t do this in the more general case.

Thompson: They said that as well before UVM showed up, and UVM disproves that point. If you conceptualize this and abstract this, you can come up with an abstract framework that applies to virtually every block. UVM can handle virtually any block you throw at it because it’s abstract. We need that kind of abstraction applied to hardware/software.

Ganguly: At the risk of being flippant, that’s kind of like saying, ‘Here’s a bag of cement, you can build a house, you can build a skyscraper, you can build a bridge.’

Thompson: How do we use the bags of cement? I need a framework for using that. Standard size shovels, a standard size mixer, some guidelines on how much water versus cement versus gravel. We would need that kind of thing. It is just a bag of tools. That’s where we’re at today. ‘Here’s a bag of tools. What are they for? Which end of the shovel actually carries the gravel?’ We need to be able to provide that kind of guidance. And UVM does that for block-level verification.

Lapides: In the defense industry, these big projects always had a system engineer leading the technical side who was more of a generalist. Maybe they had some expertise in one area. Having criticized the push of the big-box emulators before, I will admit they have their place. And there is a well-defined system methodology and verification methodology, starting with the virtual prototype at the beginning, going to hardware/software co-verification on an emulator. There’s more RTL, there’s the FPGA prototype, there’s the digital twin. But someone who actually understands those at a broad enough and deep enough level, and has a verification plan to go with that across the board — those people are few and far between. We really don’t do anything to develop those sorts of engineers.

Graykowski: I would like to go back to your UVM comment. I remember the days when VMM and UVM first came out. There were a handful of people who knew how to do that and do it well. And I would say the same with Vera and Specman. Those people made a fortune. They were so sought after. Probably what we need is to get those kinds of people, and then others will follow and say, ‘Hey, I can really do something with this.’ But the methodology has to evolve. Then we’ll have experts who know what they’re doing.

Thompson: I got a question recently from someone looking for a verification DevOps guy. ‘Do you know of anybody?’ What the heck is a verification DevOps guy? But when you think about it, it’s basically somebody who sits there and writes those Python scripts to grab things out of the coverage database. That’s a job function that didn’t exist when I graduated from university.
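The "Python scripts to grab things out of the coverage database" that Thompson describes might look something like the following. This is a hedged sketch under an invented premise: the CSV layout is made up for illustration, and a real flow would instead query a coverage database through a vendor tool or an API such as the Accellera UCIS standard.

```python
# Hypothetical "verification DevOps" script: pull per-block coverage
# numbers out of an exported report and flag blocks below a threshold.
# The CSV layout is invented; real flows would query a coverage
# database (e.g. via a UCIS API) or a vendor tool's export format.
import csv
import io

# Stand-in for a report exported from the coverage database.
REPORT = """block,bins_hit,bins_total
alu,980,1000
fetch,450,1000
lsu,720,800
"""

def blocks_below(report_text, threshold):
    """Return (block, coverage_pct) pairs below the given threshold."""
    laggards = []
    for row in csv.DictReader(io.StringIO(report_text)):
        pct = 100.0 * int(row["bins_hit"]) / int(row["bins_total"])
        if pct < threshold:
            laggards.append((row["block"], round(pct, 1)))
    return laggards

if __name__ == "__main__":
    # Flag any block whose functional coverage is under 90%.
    print(blocks_below(REPORT, 90.0))
```

Glue work of exactly this kind — triaging coverage, gating regressions, trending closure — is what makes the role a distinct job function rather than an afterthought.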

Lapides: In the early ’90s, people put Design Compiler on their resume. They were gold. And then there were the people who had Specman. Is there a new job definition that would benefit the tool side of the business as well as the engineers, because by shining a spotlight on this it could become another aspect of the solution?

Thompson: I used to work for a systems company that was a big user of emulators, and they had a process for using their emulators. It was exactly the same process they used for taping out the chip, but the milestones just meant something different. Basically, at the end of the day, we’re going to have an emulator with the full chip or a fraction of the chip, or whatever it is. ‘This is the software that’s going to run on it. This is the checklist, and we’re going to say that we’re done.’ And it looked a lot like the checklist to say that you tape out, but there wasn’t any GDS produced.

Ganguly: It’s an expensive step and shouldn’t be taken lightly.

Thompson: I’m a real big believer in emulation. I do believe it is customer pull as opposed to vendor push. But only the customers that need it do it, because it’s expensive.
