
Semiconductor

Kyoto’s new KP-H photodiode achieves 40GHz bandwidth


Published on 29 June 2020

Japan’s Kyoto Semiconductor Co Ltd has developed the KP-H KPDEH12L-CC1C lens-integrated chip-on-carrier indium gallium arsenide (InGaAs) high-speed photodiode to support 400Gbps transmission systems that use PAM4 (Pulse Amplitude Modulation 4) both within and between data centers.

Currently, transmission speeds of mainly 100Gbps are achieved by bundling four lanes of 25Gbps. However, there is growing market demand for transmission speeds of 400-800Gbps. The Institute of Electrical and Electronics Engineers (IEEE) has standardized PAM4 signaling, which maps two bits onto each four-level modulation symbol. The symbol rate per photodiode therefore reaches 50Gbaud (= 400Gbps ÷ 4 lanes ÷ 2 bits per symbol), and the transmission bandwidth required of the photodiode to achieve this is 35-40GHz.
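Spelled out, the per-lane arithmetic behind that bandwidth requirement is:

$$\frac{400\ \text{Gbps}}{4\ \text{lanes}} = 100\ \text{Gbps per lane}, \qquad \frac{100\ \text{Gbps}}{2\ \text{bits per PAM4 symbol}} = 50\ \text{GBd per photodiode}$$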

With the introduction of the new photodiode, Kyoto is supporting the increasing speeds and capacity requirements for transmission systems in 5G networks and beyond.

Picture: Mounting of the KPDEH12L-CC1C photodiode (with integrated condenser lens) on the carrier.

The 0.6mm x 0.48mm x 0.25mm carrier on which the photodiode is mounted, and the width and length of the electrode pattern formed on the surface of the board (which shows little attenuation at high frequencies), were optimized using electromagnetic simulation. As a result, Kyoto claims an industry-leading 400Gbps capability and a 40GHz bandwidth when integrated with a transimpedance amplifier. The KP-H photodiode has passed Telcordia GR-468-CORE qualification (the standard reliability test for communication equipment).

As well as being mounted on a carrier that is optimally designed to achieve high frequency, a condenser lens is integrated on the backside of the KPDEH12L-CC1C photodiode, allowing incoming light to collect in the light absorption area, and making it easy to align the optical fiber with the photodiode. The photodiode chip is mounted on a carrier twice as big as the chip itself.

Mass production of the KP-H KPDEH12L-CC1C photodiode is scheduled to start in November.

Tags: PIN photodiode

Visit: www.kyosemi.co.jp/en

Source: http://www.semiconductor-today.com/news_items/2020/jun/kyoto-290620.shtml

Semiconductor

Von Neumann Is Struggling


In an era dominated by machine learning, the von Neumann architecture is struggling to stay relevant.

The world has changed from being control-centric to one that is data-centric, pushing processor architectures to evolve. Venture money is flooding into domain-specific architectures (DSA), but traditional processors also are evolving. For many markets, they continue to provide an effective solution.

The von Neumann architecture for general-purpose computing was first described in 1945 and stood the test of time until the turn of the Millennium. The paper John von Neumann wrote described an architecture where data and programs are both stored in the same address space of a computer’s memory — even though it was actually an invention of J. Presper Eckert and John Mauchly.

A couple of reasons explain the architecture’s success. First, it is Turing complete, which means that given enough memory and enough time, it can complete any computable task. Today we don’t think much about this. But back in the early days of computing, the notion of a single machine that could perform any programmed task was a breakthrough. Passing this test relies on the machine having random access to memory.

Second, it was scalable. Moore’s Law provided the fuel behind it. Memory could be expanded, the width of the data could be enlarged, the speed at which it could do computations increased. There was little need to modify the architecture or the programming model associated with it.

Small changes were made to the von Neumann architecture, such as the Harvard architecture that separated the data and program buses. This improved memory bandwidth and allowed these operations to be performed in parallel. This initially was adopted in digital signal processors, but later became used in most computer architectures. At this point, some people thought that all functionality would migrate to software, which would mean an end to custom hardware design.

End of an era
Scalability slowed around 2000. Then the breakdown of Dennard scaling hit around 2007, and power consumption became a limiter. While the industry didn’t recognize it at the time, that was the biggest inflection point in the industry to date. It marked the end of easy gains from instruction-level parallelism. At first, it seemed as if the solution was to add additional processors. This tactic managed to delay the inevitable, but it was just a temporary fix.

“One of the problems is that CPUs are not really good at anything,” says Michael Frank, fellow and system architect at Arteris IP. “CPUs are good at processing a single thread that has a lot of decisions in it. That is why you have branch predictors, and they have been the subject of research for many years.”

But in an era of rapid change, any design that does not expect the unexpected may be severely limited. “Von Neumann architectures tend to be very flexible and programmable, which is a key strength, especially in the rapidly changing world of machine learning,” says Matthew Mattina, distinguished engineer and senior director for Arm’s Machine Learning Research Lab. “However, this flexibility comes at a cost in terms of power and peak performance. The challenge is to design a programmable CPU, or accelerator, in a way such that you maintain ‘enough’ programmability while achieving higher performance and lower power. Large vector lengths are one example. You’re amortizing the power cost of the standard fetch/decode portions of a CPU pipeline, while getting more work done in a single operation.”

Fig. 1: The von Neumann architecture, first described in the 1940s, has been the mainstay of computing up until the 2000s. Data and programs are both stored in the same address space of a computer’s memory. Source: Semiconductor Engineering

Accelerators provide a compromise. “Accelerators serve two areas,” says Arteris’ Frank. “One is where you have a lot of data moving around, where the CPU is not good at processing it. Here we see vector extensions going wider. There are also a lot of operations that are very specific. If you look at neural networks, where you have non-linear thresholding and you have huge matrix multiplies, doing this with a CPU is inefficient. So people try to move the workload closer to memory, or into specialized function units.”

To make things even more complicated, the nature of data has changed. More of it is temporal. The temporal aspects of data were first seen with audio and video. But even a couple decades ago, a single computer could keep up with the relatively slow data rates of audio. Video has presented much greater challenges, both for processing and memory.

The memory bottleneck
Memory access is expensive in terms of time and energy. Caches address this problem by exploiting data locality. “Most silicon designs use various technologies for reducing power consumption,” says Anoop Saha, market development manager for Siemens EDA. “Improving memory accesses is one of the biggest bang-for-the-buck architecture innovations for reducing overall system-level power consumption. That is because an off-chip DRAM access consumes almost a thousand times more power than a 32-bit floating point multiply operation.”

Ever-more complex caching schemes have been developed over the years in an attempt to bring memory closer to the processor. But accessing a cache still consumes roughly 200X the power of accessing the same variable stored in a register.

Put simply, memory has become the limiter. “For some applications, memory bandwidth is limiting growth,” says Ravi Subramanian, vice president and general manager for Siemens EDA. “One of the key reasons for the growth of specialized processors, as well as in-memory (or near-memory) computer architectures, is to directly address the limitations of traditional von Neumann architectures. This is especially the case when so much energy is spent moving data between processors and memory versus energy spent on actual compute.”

The rapid emergence of AI/ML is forcing change in the memory architecture. “The processors may be custom, but you need the SRAM to be local,” says Ron Lowman, strategic marketing manager for IoT at Synopsys. “For AI applications, you want to execute and store as much of the weights and coefficients as close to the MACs as possible. That is what eats up the power consumption. Multi-port memories are very popular for AI. This means you can parallelize reads and writes when you are doing the math. That can cut the power in half.”

This kind of change comes with a large penalty. “The challenge is that in the past, people had a nice abstract model for thinking about computing systems,” says Steven Woo, fellow and distinguished inventor at Rambus. “They never really had to think about memory. It came along for free and the programming model just made it such that when you did references to memory, it just happened. You never had to be explicit about what you were doing. When new kinds of memories enter the equation, you have to get rid of the very abstract view that we used to have to make them really useful.”

This needs to change, however. “Programmers will have to become more aware of what the memory hierarchy looks like,” Woo says. “It is still early days, and the industry has not settled on a particular kind of model, but there is a general understanding that in order make it useful, you have to increase the understanding about what is under the hood. Some of the programming models, like persistent memory (PMEM), call on the user to understand where data is, and to think about how to move it, and ensure that the data is in the place that it needs to be.”

At the heart of AI applications is the multiply accumulate function (MAC), or dot product operation. This takes two numbers, multiplies them together and adds the result to an accumulator. The numbers are fetched from and stored to memory. Those operations are repeated many times and account for the vast majority of the time and power consumed by both learning and inferencing.
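In equation form, one MAC step and the dot product it builds up are:

$$\text{acc} \leftarrow \text{acc} + w_i x_i, \qquad y = \sum_{i=1}^{n} w_i x_i$$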

The memory needs of AI are different from those of GPUs or CPUs. “It is important to optimize the algorithm to improve data locality so as to minimize data movement,” says Siemens’ Saha. “These choices are dependent on the specific workloads that the chip is designed to run. For example, image processing accelerators use line buffers (which work on only a small sample of an image at a time), whereas a neural network accelerator uses double buffer memories (as they will need to operate on the image multiple times).”

For example, with an AI accelerator that processes layer-by-layer, it is possible to anticipate what memory contents will be required ahead of time. “While layer N is being processed, the weights for layer N+1 are brought in from DRAM, in the background, during computation of layer N,” explains Geoff Tate, CEO of Flex Logix. “So the DRAM transfer time rarely stalls compute, even with just a single DRAM. When layer N compute is done, the weights for layer N+1 are moved in a couple microseconds from a cache memory to a memory that is directly adjacent to the MACs. When the next layer is computed, the weights used for every MAC are brought in from SRAM located directly adjacent to each cluster of MACs, so the computation access of weights is very low power and very fast.”

Domain-specific architectures often come with new languages and programming frameworks. “These often create new tiers of memory and ways to cache it or move the data so that it is closer to where it needs to be,” says Rambus’ Woo. “It adds a dimension that most of the industry is not used to. We have not really been taught that kind of thing in school and it is not something that the industry has decades of experience with, so it is not ingrained in the programmers.”

Times are changing
But that may not be enough. The world is slowly becoming more conscious of the impacts of using arbitrary amounts of energy, and the ultimate damage we are doing to our environment. The entire tech industry can and must do better.

Academics have been looking at the human brain for inspiration, noting that pulsing networks are closer to the way the brain works than large matrix manipulations against a bank of stored weights, which are at the heart of systems today. Pulses fire when something important changes and do not require completely new images, or other sensor data, every time the equivalent of a clock fires. Early work shows that these approaches can be 20X to 50X more power-efficient.

Mixed-signal solutions are a strong candidate. “There are designs that are closer to mixed-signal designs that are looking at doing computation directly within the memories,” says Dave Pursley, product management director at Cadence. “They are focusing on the elimination of data movement altogether. If you read a lot of the academic papers, so much of the research used to be about how you reduce the amount of computation, and now we are in a phase where we are looking at reducing data movement or improving locality so that you don’t need such massive amounts of storage and those very costly memory accesses in terms of power.”

New computation concepts are important. “The idea is that these things can perform multiply-accumulates for fully connected neural network layers in a single timestep,” explained Geoffrey Burr, principal RSM at IBM Research. “What would otherwise take a million clocks on a series of processors, you can do that in the analog domain, using the underlying physics at the location of the data. That has enough seriously interesting aspects to it in time and energy that it might go someplace.”

Analog may have another significant advantage over the digital systems being used today. Object detection systems in today’s automobiles often cannot handle the unexpected. “Neural networks are fragile,” said Dario Gil, vice president of AI and IBM Q, during a panel discussion at the Design Automation Conference in 2018. “If you have seen the emergence of adversarial networks and how you can inject noise into the system to fool it into classifying an image, or fooling it into how it detects language of a transcription, this tells you the fragility that is inherent in these systems. You can go from something looking like a bus, and after noise injection it says it is a zebra. You can poison neural networks, and they are subject to all sorts of attacks.”

Digital fails, analog degrades. Whether that is true for analog neural networks, and whether they can be more trustworthy, remains to be seen.

Conclusion
There always will be an element of control in every system we create. As a result, the von Neumann architecture is not going away. It is the most general-purpose computer possible, and that makes it indispensable. At the same time, a lot of the heavy computational lifting is moving to non-von Neumann architectures. Accelerators and custom cores can do a much better job with significantly less energy. More optimized memory architectures are also providing significant gains.

Still, that is just one design tradeoff. For devices that cannot have dedicated cores for every function they are likely to perform, there are some middle ground compromises, and the market for these remains robust. The other problem is that the programming model associated with the von Neumann architecture is so ingrained that it will take a long time before there are enough programmers who can write software for new architectures.

Source: https://semiengineering.com/von-neumann-is-struggling/


Semiconductor

Learning properties of ordered and disordered materials from multi-fidelity data



Source: Chen, C., Zuo, Y., Ye, W. et al. Learning properties of ordered and disordered materials from multi-fidelity data. Nat Comput Sci 1, 46–53 (2021). https://doi.org/10.1038/s43588-020-00002-x

Abstract:

“Predicting the properties of a material from the arrangement of its atoms is a fundamental goal in materials science. While machine learning has emerged in recent years as a new paradigm to provide rapid predictions of materials properties, their practical utility is limited by the scarcity of high-fidelity data. Here, we develop multi-fidelity graph networks as a universal approach to achieve accurate predictions of materials properties with small data sizes. As a proof of concept, we show that the inclusion of low-fidelity Perdew–Burke–Ernzerhof band gaps greatly enhances the resolution of latent structural features in materials graphs, leading to a 22–45% decrease in the mean absolute errors of experimental band gap predictions. We further demonstrate that learned elemental embeddings in materials graph networks provide a natural approach to model disorder in materials, addressing a fundamental gap in the computational prediction of materials properties.”

Find technical paper here.

Source: https://semiengineering.com/learning-properties-of-ordered-and-disordered-materials-from-multi-fidelity-data/


Semiconductor

Creating an Acceleration Platform for Vitis Part One: Creating the Hardware Project for the Acceleration Platform in Vivado


This blog entry is part one of our Simple Guide to Creating an Acceleration Platform for Vitis™. In this entry we will discuss how to enable your platform in the Vivado® Design Suite so that it is acceleration ready in Vitis.

Your platform can be an already established, mature Vivado design that you would like to enhance to give you the flexibility to accelerate software functions. Alternatively, the platform can be a simple Vivado design that just has the topology needed for acceleration. The point is that the Vivado design used in the platform does not need to be a one-off design. It should be more organic, and change when your design needs to change.

  • The different Platform Types are discussed here
  • The steps to create a platform are discussed here

This is part one of the Simple Guide to Creating an Acceleration Platform for Vitis. You can find the other parts at the links below:

Part Two:  Creating the software project for the Acceleration Platform in PetaLinux 

Part Three: Packaging the Accelerated Platform in Vitis 

Part Four: Testing the Custom Acceleration Platform in Vitis 

Introduction:

When we are accelerating a software component, we are offloading it from the CPU to our accelerated IP in the programmable logic. The Vitis tool will handle adding a datamover between the accelerated IP and the CPU. However, it does need some input from the user. It needs to know which interfaces to use to connect the SoC to the accelerated IP. It also needs to know which clocks/resets are available to use.

Also, since we will be sending chunks of data between the CPU and accelerated IP, we need an interrupt. And that’s basically it… well, there are a few other things we need to tell the Vitis Tool, but we will cover this later.

So, let's get going. Launch Vivado and create your project. I am using a ZCU104 board. However, the steps below are common for all Zynq® UltraScale™ boards, whether it is a development board or a custom board.

Creating the Hardware Design:

Create the Block Design (BD). The name here will be the same name as we will use to name our platform.


Add the Zynq UltraScale Processor Subsystem IP block from the IP catalog. If using a development board, you should avail of the Block Automation feature.


I have changed the default interfaces to include just the LPD:


In our simple platform, we can just create two clocks. These are the clocks that will be used in Vitis.

We can add the Clocking Wizard from the IP catalog:


By default the reset is active high, but our reset source (on the Zynq UltraScale device) is active low, so we need to bear this in mind in the clocking configuration.

I have added three output clocks: 100MHz, 150MHz, and 300MHz:


Also, set the reset polarity to active low:


We need to provide a synchronous reset for each clock. We have three clocks, so we need to add three Processor System Reset IP cores from the IP catalog:


Next, we need to add the interrupts. Here, we can add an AXI Interrupt Controller from the IP catalog. Users can avail of the Run Connection Automation feature that is available in the IP Integrator to handle the AXI connections.

Use the 100MHz clock:


In the AXI Interrupt Controller, set the Interrupt Output Connection to Single and connect this to the pl_ps_irq on the Zynq UltraScale IP:


This is all we need for a basic hardware platform.

Now we just need to set the metadata to tell Vitis about our hardware via the Platform (PFM) properties.

Adding the PFM properties:

The PFM properties are needed to pass the metadata to Vitis.

Vitis extracts this data to determine what interfaces, clocks, and interrupts can be used to add the accelerated portion to the existing platform.

Platform Name:

First, we need to give our Platform a name:



Highlight the platform, and set the properties as shown below:


Or, from TCL:

set_property PFM_NAME {<vendor>:<board>:<name>:<revision>} [get_files [current_bd_design].bd]
set_property PFM_NAME {xilinx:zcu104:zcu104_base:1.0} [get_files [current_bd_design].bd]

Once this is done, you will see a new Platform tab. Here all of the clocks, interfaces, and interrupts in the entire design will be visible.  We need to filter what resources can be used in Vitis.

Enable Clocks:

Right click on the clock, and select enable:


Repeat for clk_out3

Clock Properties:

Select the Options tab:


Note: The clock IDs must start at 0 and increment, so change this here. We must also specify a default clock.

The default here is the default clock used in Vitis:


Set the index for clk_out3:

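For reference, the clock enablement can also be scripted. The sketch below follows the PFM.CLOCK property format from the Xilinx platform documentation, and assumes the default instance names (clk_wiz_0 and the Processor System Reset blocks) and that clk_out1 and clk_out3 are the two clocks enabled above; adjust the names and IDs to match your own design:

# Expose clk_out1 (ID 0, the default) and clk_out3 (ID 1) to Vitis, each paired with its Processor System Reset block
set_property PFM.CLOCK {clk_out1 {id "0" is_default "true" proc_sys_reset "proc_sys_reset_0"} clk_out3 {id "1" is_default "false" proc_sys_reset "proc_sys_reset_2"}} [get_bd_cells /clk_wiz_0]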

Enable Interfaces:

This can be any interfaces that are available in our block design, for example the interfaces on the Zynq UltraScale device or the interfaces on the AXI interconnect.

In this case, I will just add the interfaces on the Zynq UltraScale device. 

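A TCL sketch of the same step is shown below. The PFM.AXI_PORT property format follows the Xilinx platform documentation, but the instance name (zynq_ultra_ps_e_0) and port name are assumptions based on the LPD master enabled earlier; list whichever PS interfaces you want Vitis to be able to use:

# Declare the LPD master as an interface available to Vitis; PS slave ports (e.g. S_AXI_HPx_FPD with memport "S_AXI_HP") are declared in the same way
set_property PFM.AXI_PORT {M_AXI_HPM0_LPD {memport "M_AXI_GP"}} [get_bd_cells /zynq_ultra_ps_e_0]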

Enable Interrupts:
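# Make the interrupt inputs of the AXI Interrupt Controller (starting at ID 0) available for Vitis to connect accelerator interrupts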
set_property PFM.IRQ {intr {id 0 range 31}} [get_bd_cells /axi_intc_0]

Project Properties:

The Vitis IDE is a unified tool that supports a lot of different flows such as Data Center, Acceleration, or Embedded. We need to pass this intent to the Vitis tool.

If we intend to create an Embedded design, we need to specify this. In our case we intend to use Vitis to accelerate. This needs to be specified because Vitis needs to tell the downstream tools how to handle the platform. 

These properties can be seen here:

set_property platform.default_output_type "sd_card" [current_project]
set_property platform.design_intent.embedded "true" [current_project]
set_property platform.design_intent.server_managed "false" [current_project]
set_property platform.design_intent.external_host "false" [current_project]
set_property platform.design_intent.datacenter "false" [current_project]

Creating the XSA:

Complete the following tasks to create the XSA:

  • Generate the Block Design
  • Create the HDL wrapper
  • Generate the Bitstream
  • Select File -> Export -> Export Hardware
    • Select Expandable -> Pre Synthesis, and include Bitstream

You can enter the platform details in the Export Hardware dialog.

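Alternatively, the XSA can be written from the Vivado TCL console once the bitstream is available; a minimal sketch (the output file name here is just an example):

# Export the expandable hardware platform, including the bitstream
write_hw_platform -include_bit -force zcu104_base.xsa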

And that’s it.

Source: https://forums.xilinx.com/t5/Design-and-Debug-Techniques-Blog/Creating-an-Acceleration-Platform-for-Vitis-Part-One-Creating/ba-p/1138208


Semiconductor

Achieving Physical Reliability Of Electronics With Digital Design


By John Parry and G.A. (Wendy) Luiten

With today’s powerful computational resources, digital design is increasingly used earlier in the design cycle to predict zero-hour nominal performance and to assess reliability. The methodology presented in this article uses a combination of simulation and testing to assess design performance, improving reliability and increasing productivity.

Reliability is “the probability that a system will perform its intended function without failure, under stated conditions, for a stated period of time.” The first part of this definition focuses on product performance as intended to function without failure. The second part addresses usage aspects—under what conditions the product will be used. The third part addresses time—how long will the product be operating.


Figure 1: System development V-diagram.

The flow of digitally designing for performance is depicted by the V-model (Figure 1): requirements flow down, and capabilities flow up. Business and marketing requirements flow down to the system, followed by the subsystem and the components, on the left-hand side of the V. After design, the component's capability to fulfill its sub-function without failure is verified, followed by the subsystem and the system. Finally, the full system is validated against business and marketing expectations.

Designing for Reliability in Three Parts
Digital design improves and speeds the verification step by calculating whether the specified system, subsystem, or component inputs will result in the required output. Digital design can also be used to guide architecture and design choices. For electronics cooling design and analysis, 3D computational fluid dynamics (CFD) software constructs a thermal model of the system at the concept stage, before design data is committed into the electronic design automation (EDA) and/or mechanical CAD (MCAD) systems. The model is then elaborated with data imported from the mechanical and electrical design flows to create a digital twin of the thermal performance of the product, which is then used for verification and analyses.

The second part of designing for reliability focuses on conditions – incorporating use cases for different stages of the systems’ life cycle, including transport, use preparation, first use, normal use, and end-of-use scenarios. The product should withstand normal transport conditions: drops, vibrations, temperature extremes, and maintain performance with handling mistakes. Different loading conditions will occur in varying temperature and humidity environments during normal use. And after end-of-use, a product should be easily recycled to avoid environmental damage. These use cases represent scenarios beyond typical, normal use conditions outside of a lab environment. Digital design simulates specific steps in the life cycle, for instance, drop and vibration tests to mimic transport conditions, and “what-if” scenarios, simulating worse-case environmental conditions.

The third part of the reliability definition is about the time span that a product is expected to perform its intended function without failure. This is measured by the failure rate, defined simply as the proportion of the running population that fails within a certain time. If we start with a population of 100 running units, and we have a constant failure rate of 10%, then at t = 1, 90 units (90% of 100) are still running and at t = 2, 81 (90% x 90) are running.
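In formula form, with a constant per-period failure rate $\lambda$ (10% in the example above), the surviving population follows:

$$N(t) = N_0 (1-\lambda)^t, \qquad N(1) = 100 \times 0.9 = 90, \qquad N(2) = 100 \times 0.9^2 = 81$$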


Figure 2: Bathtub curve showing the rates of failure over time.

In time, the failure rate changes. The hardware product performance can be illustrated by a bathtub curve (Figure 2). The first phase, infancy, has a decreasing failure rate as kinks are worked out of an immature design and its production. Example root causes of infancy failure include manufacturing issues from part tolerances, transport or storage conditions, installation, or start up. This stage confirms that the manufactured product performs as designed. Since this is from the business perspective, note that failures do not refer to a product’s single instance, but to the population that the business produces. Temperature affects all parts of the bathtub curve, so the thermal performance of the system should be checked and compared to the simulation model at this stage.

The next phase is normal life, the flat bottom of the bathtub curve, where the failure rate is roughly constant. Random failures from various sources of overstress combine into a constant aggregate failure rate; overstress is defined as excursions outside known safe-operating limits. In the third part of the curve, the failure rate increases as the product wears out over time with use.

Failure and Stages of Maturity
The V-diagram shows that reliability is ensured by adherence to the manufactured product requirements. Parts that do not meet these requirements are considered defective and are assumed to fail early. Typically, higher levels are an aggregation of many lower levels, for example, an electronics assembly comprising multiple boards, with each board containing multiple components and an even larger number of solder joints. This also means that lower levels need progressively lower failure rates to ensure reliability at higher levels. In high-reliability environments, failure rates are expressed in terms of parts per million (ppm) and the process capability index (Cpk).

In the electronics industry supply chain, the maximum acceptable failure rates of electronic assemblies start from a Cpk of 1.0, corresponding to 2,700 ppm falling outside either the upper or lower specification limit. Large suppliers typically work from a Cpk of 1.33 (60 ppm) up to a Cpk of 1.67 for critical parts (<1 ppm). In automotive applications, the growing number of electronics subsystems (particularly for safety) is driving the supply chain to achieve ever-lower defect rates, now approaching 1 ppm at the level of individual components.
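For reference, the Cpk values quoted here follow the standard definition in terms of the upper and lower specification limits (USL, LSL) and the mean and standard deviation of the (assumed normal) parameter distribution:

$$C_{pk} = \min\!\left(\frac{\mathrm{USL}-\mu}{3\sigma},\ \frac{\mu-\mathrm{LSL}}{3\sigma}\right)$$

A Cpk of 1.0 means the nearer specification limit sits 3σ from the mean, which for a normal distribution corresponds to the roughly 2,700 ppm outside the limits quoted above.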

A reliability-capable organization learns from experience and operates proactively. The IEEE 1624-2008 Guide for Organizational Reliability Capability defines five stages in a reliability capability maturity model (CMM), varying from stage 1 (purely reactive) to stage 5 (proactive). Table 1 shows an extract from the matrix that covers reliability analysis and testing, beginning with stage 2.


Table 1: IEEE1624 capability maturity matrix excerpt on reliability analysis and testing.

For a complex design, the multitude of failure modes and use cases results in many potential failure conditions, which are costly and time-consuming to test in hardware. Hardware-based testing also requires a mature product late in the design cycle, so for a complex product a purely test-driven, stage 1 approach is impractical and predictive modeling is required. Digital design (computer simulations and modeling) is deployed from CMM stage 2. At the lower levels, this is purely performance and environment driven: can the product perform its intended function in all use cases, without failure, based on nominal inputs and outputs?

Pilot runs, manufacturing investments, and lifetime tests are typically started after design freeze. These entail investments of time and money that do not allow for an iterative approach. Stage 2 companies therefore typically use computer simulations for design verification before design freeze. Experience shows that design rework is often needed to meet the requirements of the parts’ safe-operating limitations, such as a maximum ambient temperature.

By stage 3, virtual analysis should be highly correlated with failure conditions, for instance through field data and dedicated reliability tests, providing a high likelihood of detecting failures before they happen. In design failure mode and effect analysis (DFMEA), a risk priority number (RPN) is assigned to potential product failures based on scores for severity, occurrence, and detection. Increasing the likelihood of detection can lower the RPN by as much as 80%.

In CMM stage 4, simulation is typically used early in the design process to calculate both the nominal performance and its statistical distribution. Failure is then assessed with more granularity: not as a yes/no binary outcome but as a probability of failure, that is, the statistical capability of the design as expressed in Cpk. In the DFMEA, this again lowers the RPN further by backing up the claim of a low or remote occurrence score. In thermal design, higher-CMM companies evolve to use measurements to underpin the fidelity of the simulation model by confirming material properties, thicknesses of bond lines, etc., along the heat-flow path.

Early design models, shown in Figure 3 for an automotive ADAS control unit, simulated before component placement has closed in the EDA design flow, can be used to support cooling solution choice, apply deterministic design improvements, and explore the likely impact of input variables.


Figure 3: Initial design for automotive ADAS unit modeled in Simcenter Flotherm.

The combination of computer simulations and statistical techniques is powerful in addressing both nominal design and statistical design capabilities. In design-of-experiments (DOE), a scenario consisting of a number of specific cases can be calculated as an array of virtual experiments. The cases are selected to enable separating out the effects of inputs and combinations of inputs, which results in the nominal performance output as a quantified function of the design inputs. At the lower CMM levels, this function can be used to choose the design inputs so that the design meets its intended function in all stated conditions.
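One common form for the function that comes out of such a virtual DOE is a low-order response-surface model; this is a generic illustration of the approach, not necessarily the exact model used here:

$$Y = \beta_0 + \sum_i \beta_i x_i + \sum_{i<j} \beta_{ij}\, x_i x_j + \varepsilon$$

where the $x_i$ are the design inputs and noise factors, and the coefficients are fitted from the array of simulated cases.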

Becoming a Highly Capable Reliability Company
At higher CMM levels, the V-model also includes knowing the statistical distribution of the inputs and having a requirement on the allowed probability of failure, usually expressed as a Cp/Cpk statistical capability or a sigma level. Again, a DOE can determine the output performance as a function of design inputs and noise factors; subsequently, the effect of noise and of the statistical distribution of the input factors can be determined through Monte Carlo simulation. For each design input and each noise factor, a random value is picked from the relevant distribution and substituted into the equation to calculate the performance output. This is repeated a large number of times; for example, 5,000 times a set of design inputs and noise values is selected and substituted into the function to calculate the performance output. The result is a predicted data set of 5,000 values for the performance output, showing the expected statistical distribution, statistical capability, and failure rate.
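In terms of that fitted function, each Monte Carlo trial draws a set of design inputs and noise values from their distributions and evaluates the output; the failure rate is then estimated as the fraction of trials falling outside the specification limits (the notation here is illustrative):

$$Y_j = f\big(x^{(j)}, z^{(j)}\big), \qquad \hat{p}_{\text{fail}} = \frac{1}{N}\sum_{j=1}^{N}\mathbf{1}\big[\,Y_j < \mathrm{LSL}\ \text{or}\ Y_j > \mathrm{USL}\,\big], \qquad N = 5{,}000$$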


Figure 4: Workflow for combining digital and statistical design.

The workflow for a higher CMM company is shown in Figure 4, with the results of the capability analysis of the 5,000 simulations shown for an improvement to the design in Figure 3. The demonstrated Cpk = 1.05 is far below 1.33 so the expected failure rate far exceeds the acceptable ppm level. Because a low failure rate is sought, the number of Monte Carlo experiments needed is high, illustrated in Figure 5.


Figure 5: Prediction of junction temperature for critical IC7 component for 5,000 simulations, accounting for statistical variation in input parameters using HEEDS.

A Proactive vs. Reactive Approach
Lower-level CMM organizations take a reactive approach to high levels of failure in normal use, relying on nominal calculations that address the failure rate in the flat part of the bathtub curve. Mature organizations work in more fields simultaneously and deploy both nominal and statistical modes of digital design, specific to the different parts of the bathtub curve: product infancy, normal use, and wear-out. Stage 5 CMM organizations also invest in understanding the root causes of the failure mechanisms underpinning the random failures in normal life and wear-out.

Assessment of the package’s thermal structure is used to calibrate a detailed 3D thermal simulation model for the highest predictive accuracy during design. The graph in Figure 6 compares the results of running thermal structure functions for a thermal model of an IGBT against testing the actual part using active power cycling.

Comprehensive cycling strategies cover different use-case conditions and capture a range of electrical and thermal test data that can be applied to the model, in addition to regular thermal transient tests. The results can identify damage to the package interconnect or locate the cause of degradation within the part’s thermal structure, thereby meeting the testing requirements of CMM stage 4 and providing the data necessary to achieve stage 5.

Wendy Luiten is a well-known thermal expert and Master Black Belt in Innovation Design for Six Sigma (DfSS). She has authored over 25 papers, holds six granted and pending patents, and is a well-known lecturer. She received the Semitherm best paper award in 2002, the Harvey Rosten Award for Excellence in 2013, and the Philips Research Outstanding Achievement award in 2015. After 30-plus years at Philips Research, she is now principal of her own consultancy and continues to work as a thermal expert and Master Black Belt, as a lecturer at the High Tech Institute, and as a DfSS lead trainer.
Source: https://semiengineering.com/achieving-physical-reliability-of-electronics-with-digital-design/
