
Semiconductor

Navitas GaN IC Drives OPPO’s New Generation of Fast Charging



Navitas Semiconductor today announced the delivery of its 5 millionth gallium nitride (GaN) power IC based on its GaNFast technology to OPPO, the world’s leading fast-charge phone company. Mr. Yingying (Charles) ZHA, VP and General Manager of Navitas China, presented the 5 millionth IC in the form of an award to Mr. Chang LIU, Dean of the OPPO Research Institute, marking OPPO’s endorsement of Navitas’ GaNFast technology and the new material’s role in enabling a second revolution in power supplies and fast chargers.

OPPO is a pioneer in the fast-charging market, beginning with its early and popular VOOC flash-charging protocol (“five minutes of charging for two hours of talk time”). The next-generation SuperVOOC has raised mobile-phone fast-charging power to an unprecedented 125W, and the technological innovation continues. OPPO’s latest generation of lightweight fast-charge products uses Navitas GaNFast power ICs to overturn the traditional bulky, slow, silicon-based charger market, shrinking chargers by up to 12x versus silicon-based designs.

Mr. Chang LIU, Dean of OPPO Research Institute, said: “The cooperation with Navitas has perfectly matched the company’s continuous exploration and pursuit of new products, new materials, new processes and new technologies. We are excited to see Navitas’ company vision and excellent technology. We also hope to promote the development of gallium nitride technology through in-depth cooperation and accelerate the commercialization of third-generation, wide-bandgap semiconductors.”

Mr. Yingying ZHA, VP and General Manager of Navitas China, said: “I am very pleased that OPPO, as a top mobile device manufacturer, has adopted fast-charger technology based on GaNFast power ICs. Navitas’ GaN power ICs, with monolithic integration of GaN FET, GaN digital and GaN analog circuits, can accelerate the commercialization of a new generation of high-frequency, high-efficiency and very-high-density power converters. Navitas is very fortunate to provide GaNFast power ICs for OPPO’s new generation of fast-charging technology, helping OPPO products improve user experience and technological innovation. GaNFast power ICs are utilized in OPPO’s world-beating 50W Mini SuperVOOC fast charger, 110W Mini SuperVOOC fast charger and other products and technology platforms, presenting a brand-new form-factor standard.”

About OPPO:

OPPO launched its first “smiley phone” in 2008, beginning an exploration that has led the journey toward the ultimate in beautiful technology. Today, OPPO brings consumers around the world the most beautiful technology through smart devices centered on its Find and R series mobile phones, along with internet services such as OPPO+.

Corporate vision: To be a healthier and longer-lasting enterprise.
Corporate mission: Let extraordinary hearts enjoy the most beautiful technology.
Corporate values: duty, user orientation, and the pursuit of ultimate results.

About Navitas:

Navitas Semiconductor is the world’s first GaN power IC company. Founded in 2014 and headquartered in Ireland, the company has R&D centers in Shanghai, Hangzhou, Shenzhen, and Los Angeles in the United States. Navitas has a strong and growing team of power semiconductor industry experts with deep experience in materials, devices, applications, systems and marketing, plus a proven record of innovation with over 300 patents among its founders. GaN power ICs monolithically integrate power, analog and logic circuits to enable faster charging, higher power density and greater energy savings for mobile, consumer, enterprise, eMobility and new-energy markets. Over 100 Navitas patents are issued or pending.

Navitas Semiconductor, GaNFast and the Navitas logo are trademarks or registered trademarks of Navitas Semiconductor, Inc. All other brands, product names and marks are or may be trademarks or registered trademarks used to identify products or services of their respective owners.


Source: https://www.prweb.com/releases/navitas_gan_ic_drives_oppo_s_new_generation_of_fast_charging/prweb17285691.htm

Semiconductor

The Next Big Leap: Energy Optimization


The relationship between power and energy is technically simple, but its implications for the EDA flow are enormous. No tools or flows today allow you to analyze, implement, and optimize a design for energy consumption, and getting to that point will require a paradigm shift within the semiconductor industry.

The industry talks a lot about power, and power may have become a more important design metric than performance in some markets. Power is important because knowledge about it can be used to correctly size the power distribution network. It also can help predict thermal issues and provide guidance for many types of optimizations.

We often talk about power because we know how to measure, analyze, and optimize it. But the reality is that what many people really care about is energy, and that presents far more challenges.

“Multiple design houses have told us they want to do analysis for energy, not just power,” says Qazi Ahmed, principal product manager for the Calypto group of Mentor, a Siemens Business. “Power has become a first-class metric. In fact, it has just toppled performance as the primary metric for a design goal. But the real goal is to develop IPs that are energy-efficient. In design, energy efficiency may or may not always be equal to low power.”

Power tells you how much energy is being consumed per unit of time. When doing power optimization, attempts are made to remove unnecessary activity, and this is good. But it cannot tell you if the energy spent was useful or if the same task could have been performed using less energy.

“We talk a lot about power, often as a proxy for energy, and occasionally forget the difference,” says James Myers, distinguished engineer at Arm. “The difference, of course, is integrating power over time — but how much time spent doing what?”
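To make the distinction concrete, here is a minimal Python sketch (the power traces, sample points and design names are hypothetical) that integrates a power trace over a task’s duration to get energy, so two implementations of the same task can be compared even when their average power differs:

```python
# Energy is power integrated over time: E = integral of P(t) dt.
# Hypothetical traces: (time_s, watts) samples for the same task on two designs.

def energy_joules(trace):
    """Trapezoidal integration of a (time_s, watts) power trace."""
    return sum(
        0.5 * (p0 + p1) * (t1 - t0)
        for (t0, p0), (t1, p1) in zip(trace, trace[1:])
    )

# Design A: higher power, finishes the task in 2 s.
design_a = [(0.0, 0.8), (1.0, 1.0), (2.0, 0.9)]
# Design B: lower power, but needs 5 s for the same task.
design_b = [(0.0, 0.5), (2.5, 0.5), (5.0, 0.45)]

for name, trace in [("A", design_a), ("B", design_b)]:
    print(f"Design {name}: peak {max(p for _, p in trace):.2f} W, "
          f"energy {energy_joules(trace):.2f} J for the task")
# Design B "wins" on power, yet Design A uses less energy per task here.
```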

Power, energy, and performance are intertwined, often in complex ways. “While power is a key measure of how efficiently a design uses the available energy, overall energy consumption determines whether a design can operate with the desired performance within the thermal constraints,” says Arti Dwivedi, senior manager, product management at Ansys. “Maximizing design performance requires maximizing energy efficiency.”

What is missing in the definition of power is what constitutes a useful task. Once that is defined, it becomes possible to analyze how much energy was consumed performing that task. Now it becomes possible to tell if one architecture or implementation produces the same result more efficiently. How much energy is your system wasting on housekeeping functions? Do you actually reduce total energy by using a smaller, slower processor rather than running the same task on a faster processor? Does that processor extension allow your software to become more energy efficient?

The focus on power permeates the development process. When you run place-and-route, you are primarily optimizing for performance. But how different would the layout be if you were optimizing for power? And how would it change again if you were optimizing for energy? The difference between optimizing for power and energy means that all tools would need to become task-driven. That requires understanding which tasks are most important to the device, and then using that information to ensure those tasks consume the minimum amount of energy.

This approach requires a deep collaboration with the ecosystem. “This is not trivial,” says Rob Knoth, product management director at Cadence. “The easiest thing that many of us have been doing is attacking the problem indirectly. Rather than identifying units of work, what we’re doing is more pervasively trying to optimize power, because we have those tools today. We do not waste work by optimizing power. At the end of the day, when we do identify those units of work, we’re going to need all these same tools — tools that we built into the flow that we are using to pervasively optimize power.”

This can get very complicated just on the power side. “There are several scaling vectors of interest in assessing and projecting power during the architecture phase,” says Dan Cermak, vice president of architecture and product planning for Ambiq Micro. “There is architectural scaling to account for new architectures and design features such as frequency changes, new hardware functions such as accelerators, power domain partitioning, and potentially voltage changes. There is process scaling to account for new or updated process parameters to determine Ceff (effective capacitance), wire loading effects, VT, voltage shifts, etc. Then there are design-related optimizations to take into account. All of these scaling vectors need to be assessed in the context of representative workloads.”

What is missing is an industry standard way to define the tasks, scenarios, and workloads that are important to a system being designed. The Portable Stimulus Standard (PSS) is an attempt to define that capability. It is a high-level testbench language based on control and data flow through a design. But it is unclear at this point whether the standard is deficient in some way, making it too difficult to perform this role, or if it is just taking time to become accepted within the industry. The goal of PSS was to have a single way to define testbench scenarios that could be used throughout the development flow, because the input description was agnostic about the execution engine the design was to be run on.

Energy vs. power
Energy encompasses both active and leakage power. “Mobile and IoT devices are typically heavily duty cycled, so standby power is important as this will integrate over long standby times,” says Arm’s Myers. “But even in IoT, the active power and compute throughput can be as important. For example, executing TinyML neural networks for voice or image classification. Increased power here will be an energy win if the time to result is reduced by a larger amount, and this is why we are seeing continually increased processing capability in these devices.”
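As a rough illustration of the duty-cycling point, the following Python sketch (all device numbers are hypothetical) compares the energy per wake/sleep period for a slower, lower-power core against a faster, higher-power core running the same inference:

```python
# Duty-cycle energy model for a battery-powered IoT node (numbers are hypothetical).
# One wake-up per second runs a TinyML inference, then the device sleeps.

def energy_per_wakeup(active_power_mw, active_time_ms,
                      sleep_power_uw, period_ms=1000.0):
    """Energy in microjoules for one wake/sleep period."""
    active_uj = active_power_mw * active_time_ms              # mW * ms = uJ
    sleep_uj = (sleep_power_uw / 1000.0) * (period_ms - active_time_ms)
    return active_uj + sleep_uj

# Slower core: 5 mW for 80 ms per inference.
slow = energy_per_wakeup(active_power_mw=5.0, active_time_ms=80.0, sleep_power_uw=2.0)
# Faster core: 12 mW, but only 20 ms per inference ("race to sleep").
fast = energy_per_wakeup(active_power_mw=12.0, active_time_ms=20.0, sleep_power_uw=2.0)

print(f"slow core: {slow:.1f} uJ per period")   # ~401.8 uJ
print(f"fast core: {fast:.1f} uJ per period")   # ~242.0 uJ
# Higher peak power can still be the energy win if time-to-result shrinks more.
```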

There are other ways to get to extremely low power device operation. “We can design at near-threshold voltages to take advantage of square law power reduction,” adds Myers. “But it’s possible to lower voltage and frequency to such a point that while power is decreased, active energy ends up increasing due to lower leakage over much longer time.” (See figure 1.)

Fig. 1: Power versus energy considerations. Source: Arm
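A simplified model in the spirit of Figure 1, with entirely made-up constants and a rough delay law, shows why total energy stops improving below a minimum-energy voltage:

```python
# Simplified energy-vs-voltage model (constants are illustrative, not from Arm's data).
# Dynamic energy per task scales ~ C * V^2; leakage energy ~ I_leak * V * t_task,
# and t_task grows rapidly as V approaches threshold.

C_EFF = 1.0e-9        # effective switched capacitance per task, farads (hypothetical)
I_LEAK = 2.0e-4       # leakage current, amps (hypothetical)
V_TH = 0.35           # threshold voltage, volts (hypothetical)

def task_time(v):
    """Very rough delay model: execution slows sharply near threshold."""
    return 1.0e-6 * (v / (v - V_TH) ** 1.4)

def task_energy(v):
    dynamic = C_EFF * v ** 2
    leakage = I_LEAK * v * task_time(v)
    return dynamic + leakage

for v in [0.45, 0.55, 0.65, 0.8, 1.0]:
    print(f"V = {v:.2f} V : energy = {task_energy(v) * 1e9:.2f} nJ")
# Energy falls with voltage only up to a point; below the minimum-energy voltage
# the longer runtime lets leakage dominate, and total energy rises again.
```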

Tradeoffs between energy and power can be non-intuitive even when concentrating on active power. “If you have an SoC with two cores — a high-performance core and a low-performance core — the high-performance core does more work and consumes more power,” says Mentor’s Ahmed. “The low-performance core may have 50% of the throughput compared to the high-performance core, and may consume 30% to 40% less power. In this case, the low-performance core is not as energy-efficient as the high-performance core, and running a task on that core will result in lower power but more total energy.”
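Working through that two-core example with hypothetical baseline numbers makes the tradeoff explicit:

```python
# Two-core tradeoff from the example above (baseline numbers are hypothetical).
P_HI = 1.0          # high-performance core power, watts
T_HI = 1.0          # task runtime on the high-performance core, seconds

# Low-performance core: ~50% throughput (2x runtime), ~35% less power.
P_LO = 0.65 * P_HI
T_LO = 2.0 * T_HI

e_hi = P_HI * T_HI      # 1.0 J
e_lo = P_LO * T_LO      # 1.3 J

print(f"high-perf core: {P_HI:.2f} W for {T_HI:.1f} s -> {e_hi:.2f} J")
print(f"low-perf core : {P_LO:.2f} W for {T_LO:.1f} s -> {e_lo:.2f} J")
# Lower power, but roughly 30% more energy for the same task.
```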

The challenge is translating this into a design. “You need a tremendous amount of high-quality data about the system to analyze and drive exploration and implementation,” says Cadence’s Knoth. “If you don’t have that data, you’re going to make very short-sighted decisions, which are potentially erroneous. This is because you may be dealing with a local minimum as opposed to a global minimum.”

Knowing the relationship between power and energy can help with improvements around a minima. “Power regressions for different workloads with varying utilizations are being adopted in power methodologies to identify power bugs, which lead to redundant energy consumption,” says Ansys’ Dwivedi. “Yadong Wong from Qualcomm shared their methodology of using differential energy analysis with the same test, but different workloads to measure change in energy consumption and identify design inefficiencies. An increase in energy consumption of the design with the same test, but lower utilization, indicates redundant switching of data and clocks when no useful work is being done.”

Energy drivers
There are certain markets that will drive this. “They’re the ones who are going to invest in it,” says Knoth. “When we started originally talking about power, as opposed to just frequency, the cellphone chips were driving that and the people building data center servers didn’t care because they were plugged into a wall. They didn’t have that little battery to constrain them. But now, the data center is worried about the amount of cooling they need. And if they can optimize the power efficiency on one of the chips, when they multiply that by the thousands, it’s going to have a material impact on their operating costs.”

One component common across markets is the processor core. “The focus on energy is primarily being driven by IP vendors,” says Ahmed. “There are CPUs and GPUs. There are people working on machine learning and AI accelerators, and network companies — anybody who has a large design operating with different types of modes and who wants to get low power and energy efficiency, or who needs to meet environmental requirements.”

A key driver is the ability to set metrics for a processor. “It could be looking at instructions and how much work is being done per watt,” Ahmed explains. “You could concentrate on different operations like arithmetic operations, and you can actually look at the utilization and the amount of power they consume. So people can plot something like energy linearity checks, which basically means how much energy is being consumed for a given performance or utilization. For 100% utilization, a certain amount of energy might be consumed. If you reduce the operations, CPU performance may be reduced to 50%. Is the energy still 50% or 60%? There could be different ways to do that.”
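A sketch of the kind of energy-linearity check Ahmed describes, using invented measurements, might look like this:

```python
# Energy-linearity check (all measurements hypothetical): energy per run of the
# same test at different utilization levels. Ideally energy tracks useful work;
# a flat tail suggests redundant clock/data switching while little work is done.

measurements = {   # utilization -> measured energy, normalized to 100% = 1.0
    1.00: 1.00,
    0.75: 0.82,
    0.50: 0.63,
    0.25: 0.52,    # suspicious: 25% of the work still costs half the energy
}

for util, energy in sorted(measurements.items(), reverse=True):
    energy_per_work = energy / util
    flag = "  <-- check for redundant switching" if energy_per_work > 1.5 else ""
    print(f"utilization {util:4.0%}: energy {energy:.2f}, "
          f"energy/work {energy_per_work:.2f}{flag}")
```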

Defining tasks, scenarios and workloads
One of the difficulties is that modern SoCs rarely perform one task at a time. When multiple tasks are running on a device, they interact with each other. The question then becomes how to define the energy consumed by a specific task. How much additional energy is being consumed by its interactions with other tasks? Without this knowledge, it is difficult to know whether running tasks in parallel is the right choice or whether they should be run serially, assuming no other constraints.

“The same is true for scaling components of our systems,” says Myers. “Larger systems may create performance and energy bottlenecks in other components. Assumptions can be verified with existing power analysis tools toward the end of the design flow, but earlier insight would be very beneficial.”

Use cases matter, too. “It is likely that people would start measuring power consumed by each task under ideal conditions,” says Ahmed. “Then they may have different scenarios where somebody is playing a game while watching a video, and at the same time some other app is running in the background. Or maybe the device is doing two or three different things, so the combined scenario needs to be there. There has to be a way to run a large number of workloads, and then make decisions about power.”

The scenarios also have to be long enough that any heat created by running them can be taken into account. For example, while a game may start out consuming a certain amount of energy per minute of play time, that consumption may increase as the device heats up, causing additional energy to be used.

Representative workloads are important. “Assuming the workloads are known — which is a huge assumption since this is typically one of the most difficult aspects of power analysis — the next challenge is how to effectively predict/model these scaling vectors to estimate power for a given workload,” says Ambiq’s Cermak. “Probably the easiest method, or at least the most accessible, is using a spreadsheet model or similar. These models tend to be extremely complicated and unwieldy. Yet, when properly managed, they can be very effective.”
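A toy version of such a spreadsheet-style model, with entirely hypothetical blocks, coefficients, and workloads, could be structured like this in Python:

```python
# A spreadsheet-style power model (every number here is hypothetical).
# Each block: dynamic power scales with activity, frequency, and V^2;
# leakage scales with the block's nominal leakage current and voltage.

BLOCKS = {
    # name:        (ceff_nF, leak_mA)
    "cpu_cluster": (2.0, 1.5),
    "npu":         (3.5, 2.0),
    "memory_ctrl": (1.0, 0.8),
}

def block_power_mw(ceff_nf, leak_ma, activity, freq_mhz, vdd):
    dynamic = ceff_nf * 1e-9 * activity * freq_mhz * 1e6 * vdd ** 2 * 1e3  # mW
    leakage = leak_ma * vdd                                                # mW
    return dynamic + leakage

def workload_power(workload, freq_mhz=500.0, vdd=0.8):
    """workload: block name -> activity factor (0..1)."""
    return sum(
        block_power_mw(*BLOCKS[name], activity, freq_mhz, vdd)
        for name, activity in workload.items()
    )

camera_pipeline = {"cpu_cluster": 0.3, "npu": 0.7, "memory_ctrl": 0.5}
idle_background = {"cpu_cluster": 0.05, "memory_ctrl": 0.1}

print(f"camera pipeline: {workload_power(camera_pipeline):.1f} mW")
print(f"idle background: {workload_power(idle_background):.1f} mW")
```

In practice the columns multiply quickly, with per-process Ceff and VT shifts, per-mode voltages, and per-workload activity traces, which is exactly why these models become unwieldy yet remain useful when carefully maintained.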

There are a lot of moving pieces to understand, though. “This is all complicated by the time and energy to transition between operating modes, whether standby to active and back, or between DVFS operating points,” says Myers. “Consider the path from a triggering event, through system control processor, to voltage regulator output changes, through power gate controls, following any macro-specific control sequencing, releasing clocks and resets, and then we’re ready to go. How long does this take, and how much energy is consumed? How often do we want to make such changes? This is not covered in standard benchmarks that focus on active power and avoid device-specific power management, though ULPMark Core Profile is a notable exception in the IoT domain.”
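The break-even question Myers raises can be sketched with a simple model; the transition costs and state powers below are hypothetical:

```python
# Is entering a low-power state worth it for a given idle interval?
# (Transition costs and state powers below are hypothetical.)

ACTIVE_IDLE_MW = 20.0          # power if we simply stay in the active state
RETENTION_MW = 1.0             # power in the retention/low-power state
ENTRY_EXIT_ENERGY_UJ = 150.0   # regulator ramps, power-gate sequencing, clocks/resets
ENTRY_EXIT_TIME_MS = 0.5       # latency to get in and back out

def best_choice(idle_ms):
    stay_uj = ACTIVE_IDLE_MW * idle_ms                       # mW * ms = uJ
    if idle_ms <= ENTRY_EXIT_TIME_MS:
        return "stay active", stay_uj, None
    sleep_uj = ENTRY_EXIT_ENERGY_UJ + RETENTION_MW * (idle_ms - ENTRY_EXIT_TIME_MS)
    return ("enter low-power" if sleep_uj < stay_uj else "stay active",
            stay_uj, sleep_uj)

for idle_ms in [1, 5, 10, 50]:
    choice, stay, sleep = best_choice(idle_ms)
    sleep_txt = f"{sleep:.0f} uJ" if sleep is not None else "n/a"
    print(f"idle {idle_ms:3d} ms: stay={stay:.0f} uJ, sleep={sleep_txt} -> {choice}")
# The break-even idle time depends directly on the transition energy and latency,
# which is why how often modes change matters as much as the mode powers themselves.
```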

It all comes back to defining representative workloads. “You’re looking at how to effectively use functional verification to drive implementation and optimization,” says Knoth. “If we’re talking about climbing the pyramid, where the top is energy, we’re getting pretty close. When we’re talking about units of work, we have to be talking about the functionality of the system. We have to be talking about what the widget is doing. And so there’s a broad recognition that there needs to be a pervasive use of functional verification in concert with the design realization.”

Tool requirements
While the topic remains somewhat academic, tool vendors are attempting to address the issue of energy. “For each use case, they need an energy number, as well as the power numbers,” says Ahmed. “Then they can do an overlay and try to extract information through data analysis. What people want to see is detailed reporting with powerful visualizations so that what they see at the end is meaningful. There’s a need to have some standard intelligence built into the tools for that.” (See figure 2.)

Fig. 2: Building energy intelligence into tool flow. Source: Mentor, A Siemens Business

Cadence is approaching the problem with three steps, according to Knoth. “The first is understanding, the second is exploration, and the third is implementation. Understanding is critical before you start doing any work. It’s critical that the whole ecosystem takes a step back and says, ‘For this thing that I’m building, I need to understand its function. What are the workloads?’ Then we can start to explore with things like high-level synthesis, or early prototype RTL synthesis, RTL power estimation, etc. You spend a lot of time in the exploration stage, trying different architectures, trying different data flows, trying different components that go into the product. Then you get to implementation, where we continue using the same engines that were used in the exploration phase. We’re using the same stimulus that enabled us to understand the design. We use that stimulus to drive all of the synthesis, and place-and-route. We’re choosing the right architecture and micro-architectures, we’re optimizing the clock network, etc.”

The quantity of analysis involved is much higher than in the past. “You might have a design that has 1,000 different use scenarios, and some might be more important, some less,” says Ahmed. “We need to get the power numbers and the energy metrics for all of them, and somehow have the ability to generate an average across all of those scenarios. Then you need to feed that back, in a meaningful way, to the RTL designer to help them focus on the power optimizations that will deliver energy efficiency.”
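One possible shape for that aggregation step, with invented scenarios, weights, and measurements, is an importance-weighted energy rollup:

```python
# Aggregating per-scenario results into one feedback metric (data is made up).
# Each scenario: importance weight, average power (mW), runtime (ms).

scenarios = [
    # (name,            weight, avg_power_mw, runtime_ms)
    ("video_playback",   0.40,   320.0,        5000.0),
    ("camera_burst",     0.25,   610.0,         800.0),
    ("standby_sync",     0.30,    12.0,       20000.0),
    ("boot",             0.05,   450.0,        1500.0),
]

def energy_mj(p_mw, t_ms):
    return p_mw * t_ms / 1000.0      # mW * ms = uJ; /1000 -> mJ

total_weight = sum(w for _, w, _, _ in scenarios)
weighted_mj = sum(w * energy_mj(p, t) for _, w, p, t in scenarios) / total_weight

for name, w, p, t in scenarios:
    print(f"{name:15s} energy = {energy_mj(p, t):8.1f} mJ (weight {w:.2f})")
print(f"importance-weighted energy ~ {weighted_mj:.1f} mJ per scenario")
```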

The back-end tools have to change, as well. “Most tools are currently built for performance optimization,” adds Ahmed. “Place-and-route has to be driven from an energy efficiency point of view rather than performance. None of the downstream physical tools have the capability to do any routing or placement from the perspective of power or energy. That still needs to be built in. It will require new kinds of technologies, new methods, and new kinds of integration with upstream tools.”

That integration with the upstream tools is important. “During the design phase, physical design specific detail is unknown,” says Cermak. “Clock trees do not exist, wire loading is unknown, and intrinsic effects of gate delays/propagation are unclear. However, there needs to be some way to effectively project power to feed back any issues that may require architectural changes and additional design optimizations. Generally speaking, these tools are wildly inaccurate in predicting physical design effects, and either end up radically pessimistic or optimistic, depending on the design’s complexity.”

Conclusion
While power optimization has been an important step forward for the industry, it is not the top of the pyramid. The industry has started to assess how it gets to being energy-aware, but that is not going to be an easy change to make. We have started to look at power from a task, scenario, and workload perspective, but the industry has to agree on the ways that this is going to be accomplished. If it is not going to use PSS, it needs to quickly work on an alternative. This is a gating function.

The industry then must make a concerted effort throughout the development flow, because without all stages of the flow being made energy-aware, accuracy will suffer. That means the industry will be slow to adopt it. Accuracy has held back power optimization for quite some time, and users in general still find large gaps between what was predicted and what turned out to be true in silicon. Maybe a focus on energy will lead to a greater understanding and more predictability.

Source: https://semiengineering.com/the-next-big-leap-energy-optimization/


Semiconductor

The Next Phase Of Computing


Apple’s new M1 chip offers a glimpse of what’s ahead, and not just from Apple. Being able to get 18 to 20 hours of battery life from a laptop computer moves the ball much farther down the field in semiconductor design.

All of this is entirely dependent on the applications, of course. But what’s important here is how much battery life and performance can be gained by designing hardware specifically in conjunction with the software, rather than each being designed separately based upon some general-purpose connection scheme, such as a general-purpose chip, running a general-purpose OS, using general-purpose APIs.

The fact that the M1 chip is based on a 5nm process is good marketing, but that by itself does little for the overall device performance or energy efficiency. Just having more transistors packed on a die doesn’t mean much without incredibly fast interconnects between the ultra-dense processing elements and memories, or without an underlying power delivery network capable of getting enough power to all of those processing elements at the same time.

That Apple started out on the low end of its product line with the most expensive process technology is an indication it wants to fine-tune the system in the field for various applications before turning up the heat — literally and figuratively — on performance. In most cases, the most advanced technology goes into the highest-priced, highest-performing device, whether that’s a computer or a car, because the developer wants to recoup its investment as quickly as possible.

While the M1 chip includes a CPU, GPU and NPU, the interesting part will be what happens with customized acceleration for applications such as image and video processing. Apple develops all of this internally, so it has the ability to fine-tune just about everything.

But the company is hardly alone here. In the future, performance and power specs will become much harder to decipher because they will be tied increasingly to specific use cases. There are plenty of such use cases, and so far there are no clear leaders in the markets they will serve, in part because these markets are so new and in part because there has never been an option for this level of customization. The possibilities and the number of options are growing exponentially.

Intel, AMD and Samsung all are heading in this direction. So is Huawei, based on chips from HiSilicon. Devices that do some level of computing — and that list is expanding, with rapidly blurring distinctions about what’s a computer and what isn’t due to the emphasis on smart everything — will need to fit into an acceptable power envelope. In the future, that also will include an energy envelope, slimming down processing to only what is required to run at a particular clock frequency, and doing that as efficiently as possible.

Future generations of devices will maximize throughput and access to memory, while optimizing compute cycles for the task at hand. In the future, much of this will be done dynamically as loads and algorithms shift, and as new IP is developed to take on some of these programming challenges.

Put in perspective, this represents a fundamental shift in design across a wide range of applications, which is why the entire tech industry is scrambling for more talent these days. The Apple M1 is a high profile example, but there is much more to come.

Ed Sperling
Ed Sperling is the editor in chief of Semiconductor Engineering.

Source: https://semiengineering.com/the-next-phase-of-computing/


Semiconductor

Week In Review: Auto, Security, Pervasive Computing


Automotive
Cadence achieved ASIL B(D) (ASIL Level B in support of Level D) compliance certification for its Tensilica ConnX B10 and ConnX B20 DSPs, which are designed for automotive radar, lidar, and vehicle-to-everything (V2X) applications. SGS-TÜV Saar certified the DSPs’ support for handling random hardware faults and systematic faults.

Synopsys is acquiring Moortec, whose process, voltage, and temperature (PVT) sensors are used in-chip to monitor the health of chips during design, manufacture, test, and in-system operation. Synopsys is adding the PVT sensors to its Silicon Lifecycle Management (SLM) platform to provide environmental data on a chip’s health in real time. This data will feed an analytics engine that can optimize operational activities to improve yield and test, as well as safety, security, and predictive-maintenance capabilities. Synopsys did not disclose any financial details about the acquisition.

Imagination launched a new neural network accelerator (NNA) for advanced driver-assistance systems (ADAS) and autonomous driving. Called IMG Series4, the AI accelerator is a multicore architecture delivering 600 TOPS (tera operations per second), at 12.5 TOPS per core in less than one watt, Imagination says in a press release. Tensor Tiling splits input data tensors into multiple tiles as a way to process data efficiently. The IP’s safety features and design process conform to ISO 26262. Series4 will be available in December 2020.

Radsys used National Instruments’ Vehicle Radar Test System (VRTS) to help the Tsinghua University Suzhou Automobile Research Institute create China’s standard for testing vehicle millimeter wave (mmWave) automotive radar.

ON Semiconductor introduced a single-point direct time-of-flight (dToF) lidar that uses its Silicon Photomultiplier (SiPM) sensor. The sensor overcomes some of the issues lidar has with ambient solar light and slow response time, and it is suitable for industrial proximity sensing.

Pervasive computing — Data centers, cloud, 5G, edge
Amazon will use its own Inferentia chip in its Alexa voice assistant, moving away from Nvidia chips, reports Reuters. Rekognition, Amazon’s face recognition service, will also start using Inferentia chips. Both services use the cloud — they access a data center to complete the transaction.

Synopsys says its Verification IP (VIP) for Compute Express Link (CXL) 2.0 is now available. CXL is an open standard interconnect technology for high-speed communications between CPUs and other chips that are used as accelerators. CXL is designed to improve data center performance. “The advancement of CXL as an open standard interconnect technology to accelerate next generation data center performance is our singular focus,” said Jim Pappas, chairman at CXL Consortium, in a press release. The IP is part of Synopsys’ cache coherency verification IP portfolio.

Graphcore designed its AI chip — the Colossus GC200 Intelligence Processing Unit (IPU) processor, part of an AI platform — using verification and test tools and IP from Mentor, a Siemens business. The IPU has 59.4 billion transistors on a single 823mm² die and is manufactured on TSMC’s 7nm process. Mentor was involved in circuit verification, PCB design, protocol verification, thermal analysis, design-for-test (DFT), and bring-up of the AI processor, according to a press release.

Xilinx and Samsung Electronics announced the Samsung SmartSSD computational storage drive (CSD), which includes a Xilinx Kintex UltraScale+ FPGA accelerator with one million system logic cells and almost 2,000 DSP (digital signal processing) slices for hardware acceleration, according to a press release. The companies say this is the first adaptable computational storage platform for data centers.

Intel debuted a discrete GPU for data centers called the Server GPU, based on the Xe-LP microarchitecture and aimed at cloud gaming and media experiences.

Company milestones and wins
Brewer Science, which usually deals with advanced chemistry for semiconductor industry manufacturing, stepped up to make hand sanitizer for the local community to keep people safe during the COVID-19 pandemic.


Susan Rambo
Susan Rambo is the managing editor of Semiconductor Engineering.

Source: https://semiengineering.com/week-in-review-auto-security-pervasive-computing-41/


Semiconductor

Electronics For Quantum Communications


Moving from classic encryption algorithms with increasing key lengths to communication based on entangled quanta.


Our secure digital communications so far have relied on the principle of key-based encryption. This involves generating a key of appropriate length, which is then used to encrypt the data. Because distributing keys is difficult, the same keys are reused rather than new ones being generated regularly.

The regular reuse of keys opens the encryption process to attacks by mathematical methods. Protection against such attacks currently is afforded by appropriate key lengths, since the compute time required by these mathematical methods for key recovery increases exponentially with the key length. This means that key lengths must be adapted even today to the growing power of computing technology.

However, the greatest danger of the keys used in current encryption methods being recovered comes from quantum computers. Because developments in this area are proceeding rapidly, quantum computers capable of recovering keys of current and future lengths in fractions of a second could soon be available. This is possible because with quantum computers, key recovery time scales linearly with the key length rather than exponentially. Classic encryption algorithms would then no longer be secure, because lengthening the key would not offer additional security.

In anticipation of this situation, research has been under way for a number of years in the area of quantum communications. The focus here is on secure communication by means of entangled quanta (in the form of photons). This requires generating entangled quanta and sending one to the recipient while the other remains with the sender. The entangled quanta have special properties that are identical for both quanta. If a quantum is intercepted on its way to the recipient and then fed back into the stream after manipulation, it loses the characteristic properties of the entangled pair. Upon arrival at the recipient, the manipulation can be discovered by comparison with the quantum held by the sender.
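A toy Python sketch of that detection principle (pure bookkeeping, not a physical simulation, with an arbitrary disturbance probability) illustrates how comparing a sample of correlated values exposes manipulation:

```python
import random

# Toy model of the tamper-detection idea described above (not a physical
# simulation): each entangled pair gives sender and receiver correlated bits.
# An eavesdropper who intercepts and re-injects photons disturbs some of them.

def run_link(n_pairs, eavesdropper=False, disturb_prob=0.25, sample_size=200):
    sender = [random.randint(0, 1) for _ in range(n_pairs)]
    receiver = list(sender)                      # perfectly correlated pairs
    if eavesdropper:
        receiver = [b ^ 1 if random.random() < disturb_prob else b
                    for b in receiver]           # manipulation breaks correlation
    # Sender and receiver compare a random sample of positions.
    sample = random.sample(range(n_pairs), sample_size)
    errors = sum(sender[i] != receiver[i] for i in sample)
    return errors / sample_size

random.seed(1)
print(f"clean link      : sampled error rate {run_link(10_000):.1%}")
print(f"intercepted link: sampled error rate {run_link(10_000, eavesdropper=True):.1%}")
# A clean link shows no mismatches; a nonzero error rate in the compared sample
# exposes the manipulation before any payload data is trusted.
```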

The system designs for quantum communication are complex electro-optical systems. A complex optical setup with (semi-)transparent mirrors is required to generate entangled photons. Various electronic components are also required to control the photon source, which often must operate on extremely short time scales.

The photons are often detected using single photon detectors. The achievable energy levels are very low, and electronic components are required for analyzing such low energy levels. Furthermore, the analysis electronics must operate with extreme speed – analysis rates in the GHz range are often required.

High-precision instruments are also required for measuring the arrival time of the voltage pulse from the single photon detector. Various mathematical methods are needed to recover the individual photon states in order to ensure that the received photon retains the same state as its counterpart held by the sender. Complex signal processors are used here, which are frequently designed as a combination of an FPGA and a DSP.

The required electronics are currently built from individual components. If quantum communication is to become standard, however, the electronic components must be implemented in just a few circuits. Work is currently beginning on the first subcomponents, such as fast analog-to-digital converters (ADCs), together with the digital analysis electronics consisting of an FPGA and a DSP.

Andy Heinig
Andy Heinig is general manager for system integration at the Fraunhofer Institute for Integrated Circuits, Division of Engineering and Adaptive Systems.

Source: https://semiengineering.com/electronics-for-quantum-communications/
