Fundamental Shifts In IC Manufacturing Processes

High chip value and 3D packaging are changing where and how tests are performed, tightening design-for-reliability and accelerating the shift of tools from lab to fab

Heterogeneous integration and more domain-specific designs are causing a string of disruptions for chip manufacturers, up-ending proven fab processes and methodologies, extending the time it takes to manufacture a chip, and ultimately driving up costs everywhere. Unlike in the past, when each new node included a tightly choreographed progression of vetted and proven process steps, fabs and assembly houses now must weigh a variety of process options that impact which markets they serve, what equipment they purchase, and who they partner with.

Test, inspection, and metrology vendors all are being called upon to do more, and to do it faster. But as designs become more complex, and as reliability concerns increase across various end markets, huge challenges arise across process flows. In some cases, there are more test and inspection points. In others, it’s not always clear at what stage different technologies should be deployed. Depending on package type, for example, probes may not be able to contact all portions of a heterogeneous design.

“It used to be where all the value was in the front end [of chip manufacturing],” said John Kibarian, CEO of PDF Solutions. “You would test for wafer sort, and then packaging was 99% yield, final test 99% yield, and then you were done. Now, there’s so much value add in that packaging step — because you’re putting a lot of other components together, including some very valuable components in many cases, and you’ve got many more test insertion points, final test, post-burn-in, system-level test — that wafer sort is in the middle of the flow. In the past, wafer sort was simply ‘go/no-go,’ but now that information is valuable downstream.”

The collective value of multiple dies in a package, and the recognition that one bad chip or interconnect can turn a valuable module into scrap, is infiltrating every facet of the manufacturing flow. As the value of the whole module or chip goes up, so does the need to ensure the functionality of every component and process.

“We’re seeing more importance of high-quality testing at probe,” said Seth Prentice, general manager for precision power and analog at Teradyne. “If you have one device within a module that fails, your yield goes down at final test and it gets far more expensive. There are multiple dies, a processor with accelerators, DC-to-DC…Any failure is far more expensive.”
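The economics are easy to see with a little arithmetic. Below is a minimal sketch, using purely hypothetical yields and costs, of how per-die yield compounds across a multi-die module and why scrapping an assembled module is so much more expensive than discarding a bad die at wafer sort.

```python
# Illustrative sketch of multi-die module yield economics.
# All yields and costs are hypothetical; real numbers vary widely.

def module_yield(die_yields):
    """Module yield is the product of the effective yields of every die placed in it."""
    y = 1.0
    for d in die_yields:
        y *= d
    return y

# Suppose a module combines a processor, two accelerators, and a DC-to-DC converter.
die_yields = [0.99, 0.98, 0.98, 0.995]   # assumed effective yield of each die at assembly
print(f"Expected module yield: {module_yield(die_yields):.3f}")   # ~0.946

# Catching a bad die at wafer sort vs. scrapping an assembled module.
cost_die_scrap = 20.0        # hypothetical cost of discarding one bad die at sort
cost_module_scrap = 500.0    # hypothetical cost of scrapping the assembled module
print(f"Scrapping at final test costs ~{cost_module_scrap / cost_die_scrap:.0f}x more")
```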

Preventing failures is getting harder, though. Chip manufacturers emphasize differentiation by domain and within domains, which results in smaller production runs. That is compounded by an almost universal demand for faster time to market, which leaves less time to fine-tune manufacturing and assembly processes. In fact, one of the key drivers behind chiplets is the ability to use pre-verified and pre-tested components using a proven interconnect strategy, where yield can be more tightly controlled. But the semiconductor industry still has a long way to go before it becomes feasible for most chipmakers to be able to pick chiplets from a menu and know the system will work as expected. In the meantime, chipmakers must wrestle with a variety of technological and business shifts, and conflicting demands that affect all of them differently.

That adds more pressure to solve issues earlier in the flow. “There’s growing pressure to solve everything at the R&D level and the pilot lines before they release something to production,” said Hector Lara, director and business manager at Bruker. “Fabs don’t want to go through expensive production and then try to reduce test costs from 7% to 2%. Once they’re in production, they want to already be at 2% [of the total manufacturing cost]. That’s a huge challenge, because at the same time, they’re trying to increase reliability. So there’s more pressure on the R&D teams, and the pilot lines go a little longer.”

Others report similar shifts. “During the R&D or yield ramp phases, early adoption of multi-layer sampling provides early learning to enable reduction of new and recurring defect mechanisms,” said Andrew Cross, process control solutions director at KLA. “With increased adoption of EUV single-exposure patterning and the introduction of EUV multi-patterning methods for BEOL layers, high-sensitivity inspection with full wafer coverage is essential for capturing critical types and sizes of defects, while providing the die and wafer level signature information required to solve challenging process issues.”

This is a non-trivial challenge on multiple levels. In advanced-node designs, dielectrics and metals are becoming thinner, and new materials such as ruthenium and cobalt on-chip, or rhodium in the package, are being introduced, all of which can impact inspection approaches. Shrinking dimensions and new applications also make it difficult to determine whether an aberration caused by process variation will become a real defect (i.e., cause device failure) or remain latent throughout the device’s projected lifetime. This is particularly worrisome with logic chips in automotive applications, where the same design may be used under very different environmental conditions.

On the inspection side, reflectivity can vary significantly by material and by the various heights of different components. “The number of permutations is mind-boggling,” said Subodh Kulkarni, CEO of CyberOptics. “And it’s not just the number of layers. It’s also passive components that are coming in. Interposers are creating another flavor. Everyone is mixing and matching, and every company seems to be doing its own thing. They even have their own terminology.”

The result is more steps, but not always in the same order or at the same time. “If you go back even three years ago, the bumping companies or the interposer companies didn’t really think about inspection at that time,” Kulkarni said. “They were looking at what could go wrong. Now, they are saying nothing can go wrong, and they are starting to see the value of doing more periodic inspections closer to the process steps, and then as a final verification. So there are more steps, and certainly more variants of what is being done at each step.”

Do more faster
What becomes apparent in advanced chips and packages is the set of conflicting goals held by different groups. There is a continual drive to reduce costs, simplify designs, and improve reliability. At the same time, more customization is being added into designs, making them increasingly complex and making it harder to catch every possible defect.

This is evident on the 5G chip side, where test is becoming much more difficult. “Testing is already very complicated, and heterogeneous integration is definitely not making it any easier,” said Adrian Kwan, senior business development manager at Advantest. “The amount of time it takes to do complex scans is increasing, which is creating a challenge for the whole industry. The challenge is to keep costs low by reducing the test time and still provide sufficient test coverage. This is in the works, but test time today is still 3X longer than before. So we are working on improving the process, how it is being tested, and we are exploring creative ways to do that.”

While companies work on placing value-adding test steps at the ideal points in the flow, they also are increasing parallelization wherever possible. “You need higher density of instrumentation, or a broader set of instrumentation, so you can continue to test with the same level of parallelism, the same number of devices, in order to continue to drive the economics,” said Dennis Keough, senior product manager for automotive test at Teradyne.
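The underlying math is straightforward: tester time is shared across every site tested in parallel, so cost per device falls roughly in proportion to site count. A rough sketch with assumed numbers (and ignoring multi-site efficiency losses) illustrates the point.

```python
# Rough sketch of parallel test economics; all rates and times are hypothetical.

def cost_per_device(test_time_s, sites, tester_cost_per_hour):
    """Tester time is shared across all sites tested in parallel."""
    hours = test_time_s / 3600.0
    return tester_cost_per_hour * hours / sites

tester_cost_per_hour = 150.0   # assumed fully loaded tester cost
test_time_s = 30.0             # assumed test time per insertion

for sites in (1, 4, 16, 64):
    c = cost_per_device(test_time_s, sites, tester_cost_per_hour)
    print(f"{sites:>3} sites -> ${c:.3f} per device")
```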

On the flip side, because there is accelerated focus on reliability and the collective value of components in heterogeneous integration, new opportunities are opening up for equipment that sat on the sidelines for years because it was too slow. This is particularly apparent with technologies like X-ray inspection, for example, which has been used relatively sparingly in production. The big driver for this type of equipment is advanced packaging and 3D-ICs, because there is no other way to peer into the package/module once it is sealed.

“Engineers want to know the composition of each layer in the Si/SiGe nanosheet stack,” said Paul Ryan, vice president and general manager of Bruker’s X-ray business. “As we go down to 3nm, XRF fills a kind of niche application that optics really struggles with. It also helps that a lot of these metrologies can be done on larger areas. We’re not stuck with a 50µm box, which has always been a problem. If the application calls for a pure thickness measurement of a single layer or a couple layers, optics tends to go for it. But there is additional information that X-ray can add, like the strain state within phase-change memory stacks. There was a lot of strain engineering that went on way back when X-ray was used extensively just to monitor strain states (in source/drain regions of FETs). With graded layers, you can really drill down into, ‘Is it in-plane or out-of-plane stress? Is it relaxed? Is it fully strained?’ There’s a huge amount of information.”


Fig. 1. X-ray fluorescence flags defective bumps while also tracking the concentration of silver in SnAg solder bumps. Source: Bruker
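In practice, an inline check like the one in Fig. 1 can reduce to flagging bumps whose measured composition drifts outside spec limits. The sketch below is purely illustrative; the record layout and the Ag concentration window are assumptions, not Bruker’s actual output.

```python
# Hypothetical post-processing of XRF bump measurements.
# Spec limits and data layout are assumptions for illustration only.

AG_MIN, AG_MAX = 1.5, 3.5   # assumed acceptable Ag weight-% window in SnAg solder

bumps = [
    {"id": "B001", "ag_pct": 2.4},
    {"id": "B002", "ag_pct": 1.1},   # too lean: flagged
    {"id": "B003", "ag_pct": 3.9},   # too rich: flagged
]

flagged = [b for b in bumps if not (AG_MIN <= b["ag_pct"] <= AG_MAX)]
for b in flagged:
    print(f"Bump {b['id']} out of spec: Ag = {b['ag_pct']}%")
```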

For nearly two decades, the biggest roadblock to semiconductor progress was lithography. Production EUV scanners arrived several nodes later than expected, but the silver lining was that the delay forced the entire industry to get comfortable with multi-patterning. With the introduction of EUV tools, high-NA EUV, and multi-patterning, lithography is no longer the bottleneck and scaling continues. In a similar manner, EUV photomasks that use inverse lithography technology to allow curvilinear shapes greatly increase the density and accuracy of what gets printed on a die.

Now that the lithography challenges are solved — or at least being addressed — the industry also must focus increased attention on a host of integration challenges, particularly ensuring the reliability of chips that fully utilize the Z axis. Some of the most advanced chips resemble miniature cities — with pillars, vias of different heights, 3D transistors, passives, and various memories and accelerators of different sizes, all densely packed together.

Better data, better integration of data
The solution to many of these problems lies in building the infrastructure to better leverage the collected data. Every insertion point of every process creates data. With metrology images, this can quickly balloon into terabytes. While some of it can be trimmed, such as by using machine learning to mine what’s important and discard the rest, the real value is in integrating the data and leveraging it to improve yield and reliability.

“If I have knowledge about my wafer-level test or design characterization information, I may want to use this in the field to understand trends,” said Steve Pateras, senior director of marketing and business development at Synopsys. “And likewise, if I get failure information, such as degradation in signal paths and increased delay over time, I want to be able to correlate that with my original wafer data, or even drive that back into design. There’s definitely a desire to feed data forward and backward. That works today if you’re a fully integrated company and you’re designing your own chips. For other companies, we’re going to have to figure out how to share some of that data.”
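A minimal sketch of that feed-forward/feed-back loop might look like the following, where field returns are joined back to wafer-sort records by a die-level identifier. The schema, IDs, and parameters here are illustrative assumptions, not any vendor’s actual data model.

```python
# Hypothetical correlation of wafer-sort data with field failure data.
# Schemas, IDs, and parameters are illustrative assumptions.

wafer_sort = {
    # die_id -> parametric results captured at wafer sort
    "W12-D045": {"vmin_mV": 512, "ring_osc_MHz": 1890},
    "W12-D046": {"vmin_mV": 545, "ring_osc_MHz": 1760},
}

field_returns = [
    # failures reported back from systems in the field
    {"die_id": "W12-D046", "symptom": "path delay degradation", "hours": 12000},
]

# Feed back: look up the original sort signature of each field failure.
for fr in field_returns:
    sort_data = wafer_sort.get(fr["die_id"])
    if sort_data:
        print(f"{fr['die_id']}: {fr['symptom']} after {fr['hours']} h, "
              f"sort vmin={sort_data['vmin_mV']} mV, RO={sort_data['ring_osc_MHz']} MHz")
```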

One thing that can help in that regard is data layering. “When people talk about a data lake, the data is either there or it isn’t,” said Mike McIntyre, director of software product management at Onto Innovation. “But when you come into this system with an organized data repository, we can layer that data on top of each other. In other words, the record of a specific defect type at a specific location on a die has a certain lifetime, generally speaking. We don’t delete that data, but we archive it. We hold that layer of information, how many defects were on that die or on that wafer, for a longer period of time. Then you further propagate that up from a die to a wafer to a lot, maybe to a technology, and that data gets layered through it. Today, if you look at the supply chain for semiconductors, just the manufacturing is still 120 to 160 days. When you then add the board assembly and the board test, and then put it together in a server, you’re talking maybe 12 to 18 months before a chip from the start of its process is sold into the field.”
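The layering idea can be sketched as keeping compact, long-lived summaries at each level of the hierarchy while the raw per-defect records age out to archive. The structure and field names below are assumptions for illustration only.

```python
# Toy illustration of layered defect data: die -> wafer -> lot summaries.
# Structure and retention policy are assumptions for illustration.

from collections import defaultdict

# Raw per-defect records (archived after their useful lifetime, not deleted).
defects = [
    {"lot": "LOT7", "wafer": 3, "die": (4, 7), "type": "particle"},
    {"lot": "LOT7", "wafer": 3, "die": (4, 7), "type": "scratch"},
    {"lot": "LOT7", "wafer": 5, "die": (2, 1), "type": "particle"},
]

# Layered summaries kept online far longer than the raw records.
die_counts = defaultdict(int)
wafer_counts = defaultdict(int)
lot_counts = defaultdict(int)

for d in defects:
    die_counts[(d["lot"], d["wafer"], d["die"])] += 1
    wafer_counts[(d["lot"], d["wafer"])] += 1
    lot_counts[d["lot"]] += 1

print(dict(wafer_counts))   # e.g. {('LOT7', 3): 2, ('LOT7', 5): 1}
```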

One of the big advantages to organizing data into a repository is that archival information can be retrieved years later, which is particularly important when companies involved in a project are acquired or fail. But data changes over time, and so do the tools used to organize it. “Bringing data back from an Oracle 5 database and putting it into an Oracle 19 database is not an easy task,” McIntyre noted.

DFT/DFY/DFD
All of these changes and challenges have an impact much further forward in the flow, as well. For decades, the fabs could fix many basic issues, such as layout violations or power problems, by applying well-constructed design rules, which relied on previous history and a lot of guard-banding. Those design rules continue to grow in complexity at each new node, but there is no longer enough margin available on the manufacturing side to fix problems in production, because guard-banding at the most advanced nodes reduces performance and increases power.
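A back-of-the-envelope calculation shows why that margin is no longer free: every picosecond of timing guard-band and every millivolt of voltage guard-band comes directly out of frequency or power. The numbers below are purely illustrative.

```python
# Illustrative guard-band arithmetic; all numbers are made up.

nominal_period_ns = 0.50          # nominal clock period at the target node
guard_band_ns = 0.05              # timing margin added to cover process variation

fmax_no_gb = 1.0 / nominal_period_ns
fmax_with_gb = 1.0 / (nominal_period_ns + guard_band_ns)
print(f"Fmax without guard-band: {fmax_no_gb:.2f} GHz")
print(f"Fmax with guard-band:    {fmax_with_gb:.2f} GHz "
      f"({100 * (1 - fmax_with_gb / fmax_no_gb):.1f}% performance loss)")

# A voltage guard-band costs power roughly quadratically (dynamic power ~ V^2).
v_nom, v_gb = 0.70, 0.03          # volts
power_increase = ((v_nom + v_gb) / v_nom) ** 2 - 1
print(f"~{100 * power_increase:.1f}% dynamic power increase from the voltage margin")
```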

So the fabs passed the problem to the left in the flow, before the GDSII file is even sent to the fab. As a result, EDA tools need to be much more tightly integrated into the process to make sure designs work as intended. But design for test, for yield, and increasingly for data consistency face the same constraints and challenges as manufacturing, because these methodologies essentially have become an extension of the fab processes. And while they are important elements of what is known as silicon lifecycle management — which spans from initial architecture through manufacturing and into the field — this requires an understanding of the nuances and choreography of different process steps before the chip is even built.

It also requires design teams to be on the lookout for new issues that were never addressed in the past. “We’ve expanded our software capability extensively to include things like the kind of advanced bridging neighborhood faults — the sort of things that could potentially crop up in manufacturing but maybe haven’t been detected in the past,” said Lee Harrison, automotive IC test solutions manager at Siemens EDA. “There is extensive capability on the manufacturing test side, but that only ensures these devices go out the door as defect-free as possible. Then they go into whatever equipment they’re being built into, and we take over with system test and embedded analytics. Within system test, we have the capability to re-run a kind of limited scope of the manufacturing tests. The quality isn’t quite as high as pure manufacturing tests, but it’s pretty good. So you’ve got good coverage of the manufacturing defects, while a chip is in a system out in the field, and we’ve got embedded analytics technology, which can look at everything from bad software to cybersecurity attacks, and anything else strange going on in the device.”

The road ahead
Still, keeping up with all the changes on a chip or in manufacturing is only part of the challenge. Advanced-node chips now are being used in safety-critical applications, such as cars and drones. In applications such as data centers, those chips are being packaged with other chips, often using a mix of nodes. In all cases, there is growing demand for high reliability, which requires predicting potential failures throughout a chip’s lifetime, regardless of the end application.

That requires a rethinking of every process step in the fab and in the assembly house. “Test engineers, for the most part, used to just look at stuck-at test, which is a very localized problem,” said Marc Hutner, senior director of product marketing at proteanTecs. “We’re now getting to the point where you can get alerts and insights. And as more and more integration happens, you can start to see all sorts of new relationships. As we gather data from within a section of the die, and bubble that up to the die level, you can start to look at this on multiple levels, as well as from an advanced packaging standpoint. So rather than just a ‘stuck-at’ pass/fail on a link, you can understand the health of a link. And if you have a little bump or divot in your path, you can see what the impact is. So if there’s something you didn’t see before sending a chip out the door, you now can determine whether you have to worry about it.”
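The shift from a binary stuck-at result to link health can be sketched as tracking a margin metric over time and alerting when the trend erodes toward a threshold, rather than only checking whether the link works right now. The metric, threshold, and data below are hypothetical, not proteanTecs’ actual monitors.

```python
# Hypothetical link-health trending vs. a one-shot stuck-at pass/fail.
# Metric, threshold, and data are illustrative assumptions.

margin_history_ps = [42, 41, 40, 38, 35, 31]   # measured timing margin per readout
ALERT_THRESHOLD_PS = 33                        # assumed minimum healthy margin

# Stuck-at style pass/fail only asks: is the link working right now?
link_functional = margin_history_ps[-1] > 0
print(f"Stuck-at view: {'PASS' if link_functional else 'FAIL'}")

# Health view: is margin eroding toward failure, even though it still passes?
if margin_history_ps[-1] < ALERT_THRESHOLD_PS:
    drop = margin_history_ps[0] - margin_history_ps[-1]
    print(f"ALERT: link margin down {drop} ps from baseline; schedule maintenance")
```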

Or to put this all into much simpler terms, when you’re driving at 70 miles per hour and there is an object or person in the road, you expect your vehicle to respond appropriately and predictably. That means the chips in the vehicle have to function within parameters set by the manufacturer, no matter how complicated the design, or how difficult the test or inspection — and no matter how much the vehicle costs.

— Laura Peters contributed to this report.


