How Mature Are Verification Methodologies?

Semiconductor Engineering sat down to discuss the differences between hardware and software verification, as well as the changes and challenges facing the chip industry, with Larry Lapides, vice president of sales for Imperas Software; Mike Thompson, director of engineering for the verification task group at OpenHW; Paul Graykowski, technical marketing manager for Arteris IP; Shantanu Ganguly, vice president of product marketing at Cadence; and Mark Olen, director of product management at Siemens EDA. What follows are excerpts of that conversation. Part 1 of this discussion is here.

SE: We are seeing new entrants in the silicon development market. Is it possible that the decrease in first-time silicon success is related to this?

Ganguly: If somebody is coming from the software world, they inherently do not understand that doing a hardware model is expensive. If I’m building a software program, I compile and link. It takes minutes.

Graykowski: Welcome to my world.

Ganguly: There are two fundamental things that somebody from a software engineering background doesn’t understand. First, when doing a physical build using synthesis to the point where you have a model that you can run timing on, it’s a matter of days or weeks, not minutes. The cost of fabrication, the time taken, the cost of a re-spin, people don’t get that. And then the second piece is the logistics. You can’t FTP hardware. I can build a software product and put it on the website, and people can download that and think they can fix bugs, but it doesn’t work that way. This is extremely tedious, and the latencies are days, weeks, months. It’s a different paradigm. That’s why people spend so much money on verification, so they don’t go through these loops.

Lapides: When we think about open source, it’s a little bit broader than just the RISC-V community. Because of the freedom to add custom instructions in RISC-V, you are seeing more system people designing their own processor. They’re coming from the system and software world into hardware, and they are naive. They are trying to cut corners, and they are thinking, I don’t need tools from the Big Three. I’m going to use the open-source tools. They don’t have the scars. They haven’t gone through the pain.

Thompson: We shouldn’t be too smug about it because we were all there — but a long time ago.

Ganguly: There are ramifications to being able to add your own instructions and rebuild the core. Some of it is covered, like the compiler, where you can get a compiler that understands these new things. What about debug? What about protocols? There are a lot of other ramifications that we can stumble across, one after another. That can be very painful.

Graykowski: I’ve seen some interesting things, coming from the block level to the system level. There are many folks looking to improve their flows and methodologies. Not that long ago, we were just hacking together SoCs by text editing, which is very error prone. That brings a lot of issues, and it’s just tedious and painful. One of the things we are starting to see is hardware and software having a model that you can share, a central database where if something changes, it propagates out to everyone. If you have a hardware team and software team that are disconnected, and every once in a while they throw a model over the wall, how do you keep those in sync? Maybe you get a new IP, plug it in, and it changes the whole system and everything goes south. Coming up with more formal ways to build these systems, putting them together, and managing the overall process is key. A lot of folks are starting to adopt that type of methodology and bring that together. That’s not necessarily saying it will eliminate bugs, but if you can streamline that process and automate it, fewer bugs will propagate through.

Olen: That’s a really good point, and it happens even within hardware development, let alone across different domains. From design to verification, there are many different areas of investment to help solve this problem. Does anyone really think they can say they have the best of everything? Customers might get their formal tools from one place, or simulation from another. I hate to jump into the ‘I’m all for standards’ position, but where we have some standards, whether it’s UPF or UVM or others, we don’t do a great job of really adhering to them. We have customers that want to run heterogeneous simulation, but doing that is a lot harder than it probably should be. Why can’t I take my coverage data from one vendor’s simulator and another vendor’s formal tool or assertion tool, and bring it all into one environment? It should be easy to analyze that data and make decisions.

Graykowski: I’m so happy to hear you say that.

Ganguly: People can, and I do see some customers do some of that.

Lapides: With great pain. As a consumer of EDA tools, I can tell you, that is really important.

Ganguly: There is a standard for exporting and integrating coverage data, and I have seen companies take coverage data from multiple sources.

Thompson: And I bet that team had a killer Python coding person, or team that made that happen. It did not happen out of the box.

Olen: And probably involved a very sophisticated team.

Ganguly: These are expert people who have invested a lot of effort in doing it. It is not easy.

Graykowski: I have done it as a contractor. And that’s exactly what we did. It was an in-house coverage tool versus one from the Big Three. It was not the best standard, but yeah, there was a way to do it. And I remember correlating that data and putting it together. But it was a full-time job for me.

Thompson: So that’s a full-time job. I’m already spending all this money on the EDA tools. How come I have to hire you, a really skilled guy, to do this work when I should be able to just turn it on?
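
The kind of glue work Graykowski and Thompson are describing might look something like the following purely illustrative Python sketch, which merges per-bin hit counts exported from two coverage sources. The file names, the CSV column layout, and the merge rule (summing hit counts per coverpoint and bin) are assumptions made for the example, not the export format of any particular tool.

```python
# Illustrative only: merge functional-coverage hit counts exported as CSV
# from two different tools. File names and column names are hypothetical.
import csv
from collections import defaultdict

def load_coverage(path):
    """Read (coverpoint, bin) -> hits from a CSV export."""
    counts = defaultdict(int)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            counts[(row["coverpoint"], row["bin"])] += int(row["hits"])
    return counts

def merge(*sources):
    """Union the bins and sum the hit counts across coverage databases."""
    merged = defaultdict(int)
    for src in sources:
        for key, hits in src.items():
            merged[key] += hits
    return merged

if __name__ == "__main__":
    inhouse = load_coverage("inhouse_coverage.csv")  # hypothetical in-house tool export
    vendor = load_coverage("vendor_coverage.csv")    # hypothetical simulator export
    total = merge(inhouse, vendor)
    covered = sum(1 for hits in total.values() if hits > 0)
    print(f"{covered}/{len(total)} bins hit after merge")
```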

Ganguly: The previous point about conformance to standards is interesting. Having lived through that experience in multiple companies, the problem is not LRM compliance. The problem is excursions from the LRM [Language Reference Manual] that are tolerated by a particular product. That’s the challenge. Everybody is LRM-compliant, but there is always something else, some deviation one tool tolerates, that somebody has all over their design and that the other company’s tool doesn’t.

Thompson: At OpenHW, we run continuous integration regressions using five commercial SystemVerilog simulators. That means we have to use the lowest common denominator implementation of our test benches and our RTL, because you’re right. We have an LRM, but it’s a little bit like the Bible. Everybody who reads it gets something different out of it.
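A regression flow of the kind Thompson describes is, at its core, a loop over a test list and a set of simulators, where a failure on any pairing fails the run. The sketch below uses hypothetical placeholder commands for the simulators; real invocations, flags, and test names would differ.

```python
# Illustrative only: run every test under every simulator and collect failures.
# The command templates are placeholders, not real tool invocations.
import subprocess

SIMULATORS = {
    "sim_a": "sim_a_compile_and_run {test}",
    "sim_b": "sim_b_compile_and_run {test}",
    "sim_c": "sim_c_compile_and_run {test}",
}
TESTS = ["smoke_test", "riscv_arith_test", "interrupt_test"]

def run_regression():
    failures = []
    for sim, template in SIMULATORS.items():
        for test in TESTS:
            # Anything non-portable in the testbench or RTL shows up here as a
            # simulator-specific failure, which is why the code has to stay at
            # the lowest common denominator of language support.
            result = subprocess.run(template.format(test=test), shell=True)
            if result.returncode != 0:
                failures.append((sim, test))
    return failures

if __name__ == "__main__":
    failed = run_regression()
    for sim, test in failed:
        print(f"FAIL: {test} on {sim}")
    raise SystemExit(1 if failed else 0)
```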

SE: We talked about new customers coming in. We are also seeing new kinds of design styles. AI is a completely new area. Are there new demands coming in for verification tools or methodologies that we haven’t seen before, and are they going to drive tools in different directions than in the past?

Graykowski: As an interconnect company, we see the big players are now doing multi-die interconnects. That is definitely going to bring new verification challenges. It’s not just functionality. You have to worry about the timing characteristics and all the signal integrity going across these dies, to make sure the clocking on one die is connected properly to the next, everything is synced up, and you get all the intended performance. There’s going to be a lot of need coming from that perspective as we go into these larger and larger systems.

Ganguly: I see two challenges. Superficially, there’s the multi-die stuff, and this is very simple from a verification standpoint. It’s a bunch of logic that happens to be in multiple chiplets, connected with high-speed interconnect circuitry. As a verification model, I don’t care whether they’re on the same die, or if it’s a bus. From a challenge point of view, the first one is relatively simple. Now you’re going to have IPs that are able to do much deeper access into the memory of other IP, just because the connection between them is much faster. Essentially, you used to have a peripheral hanging off a PCIe chain, or hanging off a USB. That’s not the case anymore. It’s talking on a much higher-bandwidth bus, and instead of accessing just the local memory, it now has access to the CPU’s level 3 cache. This class of optimization that people will do for silicon performance, which will show up in verification, is going to be a very interesting challenge. Another, more interesting one involves people who will be aggregating multiple dies from multiple companies into one package. Even if these dies are individually tested, how do I probe a value on a substrate that has gone into a package before I put it into a $15, $20, or $30 package? This is a much bigger challenge.

Thompson: DFT for through-silicon vias is a new thing, a new idea. We are going to have to see an awful lot more of that. And if you take a look at small startups, they are coming up with new ideas, but nobody talks about them. I was looking at a company working in the area of 3D packaging, but they are not talking about verification anywhere. I can see a train coming.

Lapides: I did my first flip chip 35 years ago. One of the things that we had in the defense industry was a very rigorous design methodology, specifications, and verification for those specifications. The silicon industry has always been about quicker returns than in the defense industry. They look down their noses at all the bureaucracy in the defense industry. With very small exceptions, things work there. It may take a little bit longer, and it may take more people, but there’s a very rigorous methodology that they follow to make sure things from different vendors are working together.

Thompson: But Apple has a clock called Christmas. The defense industry doesn’t have that.
