

Aixtron’s revenue rebounds by 37% in Q2


Published on 29 July 2020

For first-half 2020, deposition equipment maker Aixtron SE of Herzogenrath, near Aachen, Germany has reported revenue (including spare parts and service) of €97m, down 27% on €132m a year ago. The decline was expected, and the firm remains on track despite the COVID-19 pandemic, as operations continued without interruption thanks to early counter-measures and a stable supply chain. In fact, second-quarter 2020 revenue was €56m, down 11.5% on €63.3m a year ago but up 37% on €41m in Q1. The main drivers of demand are the growing markets for gallium nitride (GaN) and silicon carbide (SiC) power electronics, lasers for ultra-fast optical data transmission, and specialty LEDs for display and disinfection applications.

Equipment revenue in particular (excluding spare parts and service) has fallen by 28% from €106.5m (81% of total revenue) in first-half 2019 to €76.4m (79% of revenue) in first-half 2020. However, although still down on €50.3m (79% of revenue) in Q2/2019, quarterly revenue has rebounded by 56% from €29.9m (73% of revenue) in Q1/2020 to €46.5m (83% of revenue) in Q2.

On a regional basis, 76% of first-half 2020 revenue came from Asia, 13% from the Americas and 11% from Europe.

Despite regional coronavirus-related lockdowns (first in China and later in Europe and the USA) which led to the postponement of delivery and commissioning of a few systems at the request of customers, Aixtron has continued to show strong profitability and return on investment.

Gross margin fell only slightly from first-half 2019’s 40% to 39% in first-half 2020, as the dip in Q1 to 36% (following delayed final acceptances of metal-organic chemical vapor deposition systems, due mainly to pandemic-related travel restrictions) was compensated by a rebound to 41% in Q2/2020 (level with Q2/2019), aided by an improved, higher-margin product mix.

The sharp increase in revenue and margins between April and June lifted the operating result (earnings before interest and taxes) from -€1.1m in Q1 to €3.3m in Q2. Overall, first-half 2020 EBIT was €2.2m (an EBIT margin of 2% of revenue), compared with first-half 2019’s €19.1m (a margin of 14%).

R&D spending was €28.6m (30% of revenue) in first-half 2020, up 13% from first-half 2019’s €25.3m (just 19% of revenue). R&D for leading-edge technologies is focused on the development and improvement programs for next-generation MOCVD systems – for all application markets – and the organic light-emitting diode (OLED) qualification project, where Aixtron has achieved some critical specifications and is working intensively on achieving further specs. In parallel, the firm is commencing discussions with the customer on the next steps in the joint OLED program.

As a result of the lower revenue and margin in first-half 2020, net profit was just €2.5m, down from €15.8m in first-half 2019. However, the quarterly net result recovered from -€0.8m in Q1 to €3.3m in Q2/2020.

However, due to Aixtron’s further build-up of inventories by €12.2m in first-half 2020 (from €79m to €85m during Q1, then to €91.2m during Q2) in preparation for increasing shipments in second-half 2020, operating cash flow was -€7.9m in Q2 and hence -€3.2m in first-half 2020 (compared with +€1.8m in first-half 2019). Capital expenditure (CapEx) was €3.4m in Q2 and hence €5.2m in first-half 2020 (cut from €6.6m in first-half 2019). Free cash flow has therefore worsened from -€4.8m in first-half 2019 to -€8.4m in first-half 2020 (with -€11.3m in Q2 outweighing +€3m in Q1).

Cash and cash equivalents including short-term financial investments (bank deposits with a maturity of at least three months) hence fell during Q2, from €300.8m to €288.6m.

Total orders (including spares & services) have risen further, from €68.8m in Q1 to €69.6m in Q2/2020 (up 56% on €44.7m in Q2/2019), taking first-half orders to €138.4m (up 41% on €98.3m a year ago) driven by continued strong demand from the power electronics, optical data communications and LED sectors.

Consequently, Aixtron enters second-half 2020 with a strong order backlog (equipment only) of €156.6m, up 7% on €146.3m at the end of Q1/2020 and up 42% on €110.1m at the end of first-half 2019.

Based on (1) the solid order backlog, (2) the currently estimated low impact of the COVID-19 pandemic and (3) the budget exchange rate of $1.20/€, Aixtron expects order intake for full-year 2020 to grow to €260-300m (up from €231.9m in 2019).

Based on the equipment order backlog (convertible into 2020 revenue) of €130m at the end of first-half 2020, plus €11-51m of expected order intake shippable during 2020 and an estimated €22m of spares & services revenue (on top of the €97m of revenue already booked in the first half), for full-year 2020 Aixtron still expects revenue of €260-300m, with gross margin of about 40% and EBIT margin of 10-15% of revenue.
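As a rough arithmetic cross-check (a sketch only, combining the components cited above with the €97m of first-half revenue reported earlier in this article), the guidance range can be reproduced as follows:

```python
# Rough cross-check of Aixtron's full-year 2020 revenue guidance (figures in EUR millions),
# combining the components cited above with the first-half revenue reported earlier.
h1_revenue = 97                    # revenue already booked in first-half 2020
equipment_backlog = 130            # equipment backlog convertible into 2020 revenue
spares_services = 22               # estimated spares & services revenue
orders_low, orders_high = 11, 51   # expected order intake shippable during 2020

low = h1_revenue + equipment_backlog + spares_services + orders_low      # 260
high = h1_revenue + equipment_backlog + spares_services + orders_high    # 300
print(f"Implied full-year revenue range: EUR {low}m to EUR {high}m")
```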

“In the second half of the year, our business should grow much more dynamically again,” comments president Dr Bernd Schulte. “We expect revenues to grow strongly in the third quarter and then again in the final quarter,” he adds.

“The renewal of our product portfolio is making good progress,” believes president Dr Felix Grawert. “With our new products we will be able to better support our customers in their growth in future markets such as 5G mobile network expansion and e-mobility”.

See related items:

Aixtron changes composition of Executive Board

Aixtron’s Q1 revenue falls 40% year-on-year to €41m

Aixtron meets 2019 guidance for order intake, sales, gross margin and EBIT margin, aided by strong Q4

Aixtron year-to-date revenue grows despite export license delays hitting Q3

Aixtron returns to positive free cash flow in Q2/2019 after 19.5% year-on-year equipment revenue growth

Tags: Aixtron MOCVD

Visit: www.aixtron.com

Source: http://www.semiconductor-today.com/news_items/2020/jul/aixtron-290720.shtml


The Next Big Leap: Energy Optimization


The relationship between power and energy is technically simple, but its implications for the EDA flow are enormous. No tools or flows today allow you to analyze, implement, and optimize a design for energy consumption, and getting to that point will require a paradigm shift within the semiconductor industry.

The industry talks a lot about power, and power may have become a more important design metric than performance in some markets. Power is important because knowledge about it can be used to correctly size the power distribution network. It also can help predict thermal issues and provide guidance for many types of optimizations.

A lot of times we talk about power because we know how to measure, analyze, and optimize it. But the reality is that what many people really care about is energy, and that presents a lot more challenges.

“Multiple design houses have told us they want to do analysis for energy, not just power,” says Qazi Ahmed, principal product manager for the Calypto group of Mentor, a Siemens Business. “Power has become a first-class metric. In fact, it has just toppled performance as the primary metric for a design goal. But the real goal is to develop IPs that are energy-efficient. In design, energy efficiency may or may not always be equal to low power.”

Power tells you how much energy is being consumed per unit of time. When doing power optimization, attempts are made to remove unnecessary activity, and this is good. But it cannot tell you if the energy spent was useful or if the same task could have been performed using less energy.

“We talk a lot about power, often as a proxy for energy, and occasionally forget the difference,” says James Myers, distinguished engineer at Arm. “The difference, of course, is integrating power over time — but how much time spent doing what?”
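That integral is simple to make concrete. The sketch below (hypothetical sample data, not any vendor's tool output) computes a task's energy by integrating a sampled power trace over the task's duration:

```python
# Minimal sketch: energy is the integral of power over the time spent on a task.
# The timestamps and power samples below are hypothetical.
timestamps_s = [0.000, 0.001, 0.002, 0.003, 0.004]   # seconds
power_w      = [0.20, 0.35, 0.40, 0.30, 0.05]        # watts at each sample

# Trapezoidal integration of power over time gives energy in joules.
energy_j = sum(
    0.5 * (power_w[i] + power_w[i + 1]) * (timestamps_s[i + 1] - timestamps_s[i])
    for i in range(len(power_w) - 1)
)
avg_power_w = energy_j / (timestamps_s[-1] - timestamps_s[0])

print(f"Energy for the task: {energy_j * 1e3:.3f} mJ")        # ~1.175 mJ
print(f"Average power over the task: {avg_power_w:.3f} W")    # ~0.294 W
```

The same average power over a longer runtime yields more energy, which is why power alone cannot say whether a task was performed efficiently.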

Power, energy, and performance are intertwined, often in complex ways. “While power is a key measure of how efficiently a design uses the available energy, overall energy consumption determines whether a design can operate with the desired performance within the thermal constraints,” says Arti Dwivedi, senior manager, product management at Ansys. “Maximizing design performance requires maximizing energy efficiency.”

What is missing in the definition of power is what constitutes a useful task. Once that is defined, it becomes possible to analyze how much energy was consumed performing that task. Now it becomes possible to tell if one architecture or implementation produces the same result more efficiently. How much energy is your system wasting on housekeeping functions? Do you actually reduce total energy by using a smaller, slower processor rather than running the same task on a faster processor? Does that processor extension allow your software to become more energy efficient?

The focus on power permeates through the development process. When you run place-and-route, you are primarily optimizing for performance. But how different would the layout be if you were optimizing for power? And how would it change again if you were optimizing for energy? The difference between optimizing for power and energy means that all tools would need to become task-driven. That requires understanding which tasks are most important to the device, and then using that information to ensure those tasks consume the minimum amount of energy.

This approach requires a deep collaboration with the ecosystem. “This is not trivial,” says Rob Knoth, product management director at Cadence. “The easiest thing that many of us have been doing is attacking the problem indirectly. Rather than identifying units of work, what we’re doing is more pervasively trying to optimize power, because we have those tools today. We do not waste work by optimizing power. At the end of the day, when we do identify those units of work, we’re going to need all these same tools — tools that we built into the flow that we are using to pervasively optimize power.”

This can get very complicated just on the power side. “There are several scaling vectors of interest in assessing and projecting power during the architecture phase,” says Dan Cermak, vice president of architecture and product planning for Ambiq Micro. “There is architectural scaling to account for new architectures and design features such as frequency changes, new hardware functions such as accelerators, power domain partitioning, and potentially voltage changes. There is process scaling to account for new or updated process parameters to determine Ceff (effective capacitance), wire loading effects, VT, voltage shifts, etc. Then there are design-related optimizations to take into account. All of these scaling vectors need to be assessed in the context of representative workloads.”

What is missing is an industry standard way to define the tasks, scenarios, and workloads that are important to a system being designed. The Portable Stimulus Standard (PSS) is an attempt to define that capability. It is a high-level testbench language based on control and data flow through a design. But it is unclear at this point whether the standard is deficient in some way, making it too difficult to perform this role, or if it is just taking time to become accepted within the industry. The goal of PSS was to have a single way to define testbench scenarios that could be used throughout the development flow, because the input description was agnostic about the execution engine the design was to be run on.

Energy vs. power
Energy encompasses both active and leakage power. “Mobile and IoT devices are typically heavily duty cycled, so standby power is important as this will integrate over long standby times,” says Arm’s Myers. “But even in IoT, the active power and compute throughput can be as important. For example, executing TinyML neural networks for voice or image classification. Increased power here will be an energy win if the time to result is reduced by a larger amount, and this is why we are seeing continually increased processing capability in these devices.”

There are other ways to get to extremely low power device operation. “We can design at near-threshold voltages to take advantage of square law power reduction,” adds Myers. “But it’s possible to lower voltage and frequency to such a point that while power is decreased, active energy ends up increasing due to lower leakage over much longer time.” (See figure 1.)

Fig. 1: Power versus energy considerations. Source: Arm
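The tradeoff Myers describes can be approximated with a first-order model. The sketch below uses invented coefficients (it does not model any real process or Arm data): dynamic power scales roughly as C·V²·f, leakage scales with voltage, and runtime for a fixed task scales as 1/f, so pushing voltage and frequency too low lets leakage energy dominate.

```python
# First-order sketch of the power-versus-energy tradeoff described above.
# Coefficients are illustrative only; they do not model any real process or core.
def task_power_energy(v, f_hz, c_eff=1e-9, i_leak=2e-3, cycles=1e6):
    """Return (average power in W, energy in J) for a fixed task at voltage v, frequency f."""
    p_dyn = c_eff * v**2 * f_hz       # dynamic power ~ C_eff * V^2 * f
    p_leak = i_leak * v               # crude leakage model ~ I_leak * V
    runtime_s = cycles / f_hz         # fixed amount of work, so runtime scales as 1/f
    power = p_dyn + p_leak
    return power, power * runtime_s

# Lowering voltage and frequency keeps reducing power, but below some point total
# energy rises again because leakage integrates over a much longer runtime.
for v, f in [(1.0, 1.0e9), (0.8, 500e6), (0.6, 50e6), (0.5, 2e6)]:
    p, e = task_power_energy(v, f)
    print(f"V={v:.1f} V, f={f/1e6:7.1f} MHz -> power {p*1e3:8.3f} mW, energy {e*1e6:7.1f} uJ")
```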

Tradeoffs between energy and power can be non-intuitive even when concentrating on active power. “If you have an SoC with two cores — a high-performance core and a low-performance core — the high-performance core does more work and consumes more power,” says Mentor’s Ahmed. “The low-performance core may have 50% of the throughput compared to the high-performance core, and may consume 30% to 40% less power. In this case, the low-performance core is not as energy-efficient as the high-performance core, and running a task on that core will result in lower power but more total energy.”
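Ahmed's example can be checked with simple arithmetic. The sketch below uses the rough ratios from the quote (50% of the throughput, roughly 35% less power), not measured silicon data:

```python
# Sketch of the big-core / little-core energy comparison quoted above.
# Numbers are the rough ratios from the quote, not measurements.
task_work = 1.0                  # arbitrary unit of work

big_power = 1.0                  # normalized power of the high-performance core
big_time = task_work / 1.0       # normalized throughput = 1.0

little_power = 0.65              # "30% to 40% less power": take 35% less
little_time = task_work / 0.5    # "50% of the throughput": twice the runtime

big_energy = big_power * big_time            # 1.00
little_energy = little_power * little_time   # 1.30

print(f"High-performance core energy: {big_energy:.2f}")
print(f"Low-performance core energy:  {little_energy:.2f}  (lower power, more total energy)")
```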

The challenge is translating this into a design. “You need a tremendous amount of high-quality data about the system to analyze and drive exploration and implementation,” says Cadence’s Knoth. “If you don’t have that data, you’re going to make very short-sighted decisions, which are potentially erroneous. This is because you may be dealing with a local minima as opposed to a global minima.”

Knowing the relationship between power and energy can help with improvements around a minima. “Power regressions for different workloads with varying utilizations are being adopted in power methodologies to identify power bugs, which lead to redundant energy consumption,” says Ansys’ Dwivedi. “Yadong Wong from Qualcomm shared their methodology of using differential energy analysis with the same test, but different workloads to measure change in energy consumption and identify design inefficiencies. An increase in energy consumption of the design with the same test, but lower utilization, indicates redundant switching of data and clocks when no useful work is being done.”
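A simplified version of that differential comparison can be expressed in a few lines. The sketch below uses hypothetical power numbers and a hypothetical flagging threshold; it is not the Qualcomm or Ansys flow:

```python
# Simplified sketch of differential energy analysis: same test, different workloads.
# Power numbers are hypothetical; in practice they come from power regressions.
runs = [
    {"name": "test_A @ 90% utilization", "avg_power_w": 1.80, "runtime_s": 1.0},
    {"name": "test_A @ 45% utilization", "avg_power_w": 1.75, "runtime_s": 1.0},
]

energies = [(r["name"], r["avg_power_w"] * r["runtime_s"]) for r in runs]
for name, e in energies:
    print(f"{name}: {e:.2f} J")

high_util_e = energies[0][1]
low_util_e = energies[1][1]
# If halving the useful work barely reduces energy, clocks or data are likely
# toggling while no useful work is done: a candidate power bug.
if low_util_e > 0.6 * high_util_e:
    print("Flag: energy does not scale with utilization; check for redundant switching.")
```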

Energy drivers
There are certain markets that will drive this. “They’re the ones who are going to invest in it,” says Knoth. “When we started originally talking about power, as opposed to just frequency, the cellphone chips were driving that and the people building data center servers didn’t care because they were plugged into a wall. They didn’t have that little battery to constrain them. But now, the data center is worried about the amount of cooling they need. And if they can optimize the power efficiency on one of the chips, when they multiply that by the thousands, it’s going to have a material impact on their operating costs.”

One common component across these markets is the processor core. “The focus on energy is primarily being driven by IP vendors,” says Ahmed. “There are CPUs and GPUs. There are people working on machine learning and AI accelerators, and network companies — anybody who has a large design operating with different types of modes and who wants to get low power, energy efficiency, or because they need to meet environmental requirements.”

A key driver is the ability to set metrics for a processor. “It could be looking at instructions and how much work is being done per watt,” Ahmed explains. “You could concentrate on different operations like arithmetic operations, and you can actually look at the utilization and the amount of power they consume. So people can plot something like energy linearity checks, which basically means how much energy is being consumed for a given performance or utilization. For 100% utilization, a certain amount of energy might be consumed. If you reduce the operations, CPU performance may be reduced to 50%. Is the energy still 50% or 60%? There could be different ways to do that.”
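An energy-linearity check of the kind Ahmed describes can be sketched as follows, with hypothetical utilization and energy measurements: compare measured energy at each utilization point against ideal linear scaling from the 100% point.

```python
# Sketch of an energy linearity check: does energy scale with delivered performance?
# Measurements below are hypothetical placeholders.
measurements = {   # utilization (fraction of peak) -> energy for the workload (mJ)
    1.00: 100.0,
    0.75: 82.0,
    0.50: 60.0,
    0.25: 41.0,
}

full_util_energy = measurements[1.00]
for util, energy in sorted(measurements.items(), reverse=True):
    ideal = full_util_energy * util            # perfectly linear scaling
    deviation_pct = 100.0 * (energy - ideal) / ideal
    print(f"util {util:4.0%}: measured {energy:6.1f} mJ, ideal {ideal:6.1f} mJ, "
          f"deviation {deviation_pct:+6.1f}%")
```

Large positive deviations at low utilization suggest the design burns energy even when it is doing little useful work.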

Defining tasks, scenarios and workloads
One of the difficulties is that modern SoCs rarely perform one task at a time. When multiple tasks are operating on a device, they interact with each other. The question then becomes how to define the energy being consumed by a specific task. How much additional energy is being consumed by its interactions with other tasks? Without this knowledge, it is difficult to know whether running them in parallel is the right choice or whether they should be run serially, assuming no other constraints.

“The same is true for scaling components of our systems,” says Myers. “Larger systems may create performance and energy bottlenecks in other components. Assumptions can be verified with existing power analysis tools toward the end of the design flow, but earlier insight would be very beneficial.”

Use cases matter, too. “It is likely that people would start measuring power consumed by each task under ideal conditions,” says Ahmed. “Then they may have different scenarios where somebody is playing a game while watching a video, and at the same time in the background some other app is running, as well. Or maybe the device is doing two or three different things, so the combined scenario needs to be there. There has to be a way to run a large number of workloads, and then make decisions for powers.”

The scenarios have to be long enough, such that any heat created by running the scenario can be taken into account. For example, while a game may start out consuming a certain amount of energy per minute of play time, it may increase as the device heats up, causing additional energy to be consumed.

Representative workloads are important. “Assuming the workloads are known — which is a huge assumption since this is typically one of the most difficult aspects of power analysis — the next challenge is how to effectively predict/model these scaling vectors to estimate power for a given workload,” says Ambiq’s Cermak. “Probably the easiest method, or at least the most accessible, is using a spreadsheet model or similar. These models tend to be extremely complicated and unwieldy. Yet, when properly managed, they can be very effective.”
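Such spreadsheet-style models usually reduce to a table of per-block parameters multiplied by per-workload activity factors. The sketch below shows only the structure; every block name and coefficient is invented:

```python
# Minimal sketch of a spreadsheet-style power model: per-block effective capacitance,
# voltage, frequency and leakage, scaled by per-workload activity. All values invented.
BLOCKS = {
    # block: (c_eff_F, voltage_V, freq_Hz, leakage_W)
    "cpu":    (2.0e-10, 0.80, 1.0e9, 5e-3),
    "accel":  (4.0e-10, 0.80, 8.0e8, 8e-3),
    "memory": (1.5e-10, 0.80, 1.2e9, 3e-3),
}

WORKLOADS = {
    # workload: per-block activity factor (fraction of cycles with switching)
    "idle":      {"cpu": 0.02, "accel": 0.00, "memory": 0.01},
    "inference": {"cpu": 0.15, "accel": 0.70, "memory": 0.40},
}

def workload_power(workload):
    total = 0.0
    for block, (c_eff, v, f, leak) in BLOCKS.items():
        activity = WORKLOADS[workload][block]
        total += activity * c_eff * v**2 * f + leak   # dynamic + leakage per block
    return total

for wl in WORKLOADS:
    print(f"{wl:9s}: {workload_power(wl)*1e3:.1f} mW")
```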

There are a lot of moving pieces to understand, though. “This is all complicated by the time and energy to transition between operating modes, whether standby to active and back, or between DVFS operating points,” says Myers. “Consider the path from a triggering event, through system control processor, to voltage regulator output changes, through power gate controls, following any macro-specific control sequencing, releasing clocks and resets, and then we’re ready to go. How long does this take, and how much energy is consumed? How often do we want to make such changes? This is not covered in standard benchmarks that focus on active power and avoid device-specific power management, though ULPMark Core Profile is a notable exception in the IoT domain.”
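The transition cost Myers describes has a simple consequence: entering a low-power state only saves energy if the device stays there longer than a break-even time. A minimal sketch of that calculation, with invented figures:

```python
# Sketch of the break-even calculation for entering a low-power state.
# All figures are invented; real values come from the power-management spec.
active_idle_power_w = 50e-3       # power if the device simply stays awake and idle
sleep_power_w = 1e-3              # power in the low-power state
transition_energy_j = 200e-6      # energy to enter and exit the state (regulator changes,
                                  # clock/reset sequencing, state save/restore)

# Sleeping for t seconds costs: transition_energy + sleep_power * t
# Staying awake for t seconds costs: active_idle_power * t
# Break-even when the two are equal.
break_even_s = transition_energy_j / (active_idle_power_w - sleep_power_w)
print(f"Break-even idle time: {break_even_s*1e3:.2f} ms")
# Idle periods shorter than this consume more energy if the sleep state is entered.
```

How often such transitions happen in a representative workload is exactly the kind of information missing from benchmarks that focus only on active power.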

It all comes back to defining representative workloads. “You’re looking at how to effectively use functional verification to drive implementation and optimization,” says Knoth. “If we’re talking about climbing the pyramid, where the top is energy, we’re getting pretty close. When we’re talking about units of work, we have to be talking about the functionality of the system. We have to be talking about what the widget is doing. And so there’s a broad recognition that there needs to be a pervasive use of functional verification in concert with the design realization.”

Tool requirements
While the work is still somewhat academic, tool vendors are attempting to address the issue of energy. “For each use case, they need an energy number, as well as the power numbers,” says Ahmed. “Then they can do an overlay and try to extract information through data analysis. What people want to see is detailed reporting with powerful visualizations so that what they see at the end is meaningful. There’s a need to have some standard intelligence built into the tools for that.” (See figure 2.)

Fig. 2: Building energy intelligence into tool flow. Source: Mentor, A Siemens Business

Cadence is approaching the problem with three steps, according to Knoth. “The first is understanding, the second is exploration, and the third is implementation. Understanding is critical before you start doing any work. It’s critical that the whole ecosystem takes a step back and says, ‘For this thing that I’m building, I need to understand its function. What are the workloads?’ Then we can start to explore with things like high-level synthesis, or early prototype RTL synthesis, RTL power estimation, etc. You spend a lot of time in the exploration stage, trying different architectures, trying different data flows, trying different components that go into the product. Then you get to implementation, where we continue using the same engines that were used in the exploration phase. We’re using the same stimulus that enabled us to understand the design. We use that stimulus to drive all of the synthesis, and place-and-route. We’re choosing the right architecture and micro-architectures, we’re optimizing the clock network, etc.”

The quantity of analysis involved is much higher than in the past. “You might have a design that has 1,000 different use scenarios, and some might be more important, some less,” says Ahmed. “We need to get the power numbers and the energy metrics for all of them, and somehow have the ability to generate an average for all of those scenarios. Then you need to feed that back, in a meaningful way, to the RTL designer to help them focus on optimizing for power that will result in attaining energy efficiency.”
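Rolling those scenarios up typically means weighting each one by how often it occurs or how much it matters. A minimal sketch of that aggregation, with placeholder scenarios, weights, and energies:

```python
# Sketch: roll up per-scenario energy into one weighted figure of merit.
# Scenario names, weights and energy numbers are placeholders.
scenarios = [
    # (name, relative importance/frequency weight, measured energy in mJ)
    ("video_playback", 0.50, 120.0),
    ("gaming",         0.20, 480.0),
    ("standby",        0.30,   6.0),
]

total_weight = sum(w for _, w, _ in scenarios)
weighted_energy = sum(w * e for _, w, e in scenarios) / total_weight

for name, w, e in scenarios:
    print(f"{name:15s} weight={w:.2f} energy={e:7.1f} mJ")
print(f"Weighted average energy: {weighted_energy:.1f} mJ")
```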

The back-end tools have to change, as well. “Most tools are currently built for performance optimization,” adds Ahmed. “Place-and-route has to be driven from an energy efficiency point of view rather than performance. None of the downstream physical tools have the capability to do any routing or placement from the perspective of power or energy. That still needs to be built in. It will require new kinds of technologies, new methods, and new kinds of integration with upstream tools.”

That integration with the upstream tools is important. “During the design phase, physical design specific detail is unknown,” says Cermak. “Clock trees do not exist, wire loading is unknown, and intrinsic effects of gate delays/propagation are unclear. However, there needs to be some way to effectively project power to feed back any issues that may require architectural changes and additional design optimizations. Generally speaking, these tools are wildly inaccurate in predicting physical design effects, and either end up radically pessimistic or optimistic, depending on the design’s complexity.”

Conclusion
While power optimization has been an important step forward for the industry, it is not the top of the pyramid. The industry has started to assess how it gets to being energy-aware, but that is not going to be an easy change to make. We have started to look at power from a task, scenario, and workload perspective, but the industry has to agree on the ways that this is going to be accomplished. If it is not going to use PSS, it needs to quickly work on an alternative. This is a gating function.

The industry then must make a concerted effort throughout the development flow, because without all stages of the flow being made energy-aware, accuracy will suffer. That means the industry will be slow to adopt it. Accuracy has held back power optimization for quite some time, and users in general still find large gaps between what was predicted and what turned out to be true in silicon. Maybe a focus on energy will lead to a greater understanding and more predictability.

Source: https://semiengineering.com/the-next-big-leap-energy-optimization/



The Next Phase Of Computing


Apple’s new M1 chip offers a glimpse of what’s ahead, and not just from Apple. Being able to get 18 to 20 hours of battery life from a laptop computer moves the ball much farther down the field in semiconductor design.

All of this is entirely dependent on the applications, of course. But what’s important here is how much battery life and performance can be gained by designing hardware specifically in conjunction with the software, rather than each being designed separately based upon some general-purpose connection scheme, such as a general-purpose chip running a general-purpose OS using general-purpose APIs.

The fact that the M1 chip is based on a 5nm process is good marketing, but that by itself does little for the overall device performance or energy efficiency. Just having more transistors packed on a die doesn’t mean much without incredibly fast interconnects between the ultra-dense processing elements and memories, or without an underlying power delivery network capable of getting enough power to all of those processing elements at the same time.

That Apple started out on the low end of its product line with the most expensive process technology is an indication it wants to fine-tune the system in the field for various applications before turning up the heat — literally and figuratively — on performance. In most cases, the most advanced technology goes into the highest-priced, highest-performing device, whether that’s a computer or a car, because the developer wants to recoup its investment as quickly as possible.

While the M1 chip includes a CPU, GPU and NPU, the interesting part will be what happens with customized acceleration for applications such as image and video processing. Apple develops all of this internally, so it has the ability to fine-tune just about everything.

But the company is hardly alone here. In the future, performance and power specs will become much harder to decipher because they will be tied increasingly to specific use cases. There are plenty of such use cases, and so far there are no clear leaders in the markets they will serve, in part because these markets are so new and in part because there has never been an option for this level of customization. The number of possibilities and options is growing exponentially.

Intel, AMD and Samsung all are heading in this direction. So is Huawei, based on chips from HiSilicon. Devices that do some level of computing — and that list is expanding, with rapidly blurring distinctions about what’s a computer and what isn’t due to the emphasis on smart everything — will need to fit into an acceptable power envelope. In the future, that also will include an energy envelope, slimming down processing to only what is required to run at a particular clock frequency, and doing that as efficiently as possible.

Future generations of devices will maximize throughput and access to memory, while optimizing compute cycles for the task at hand. In the future, much of this will be done dynamically as loads and algorithms shift, and as new IP is developed to take on some of these programming challenges.

Put in perspective, this represents a fundamental shift in design across a wide range of applications, which is why the entire tech industry is scrambling for more talent these days. The Apple M1 is a high profile example, but there is much more to come.

Ed Sperling is the editor in chief of Semiconductor Engineering.

Source: https://semiengineering.com/the-next-phase-of-computing/



Week In Review: Auto, Security, Pervasive Computing


Automotive
Cadence achieved ASIL B(D) certification (ASIL B in the context of ASIL D) for its Tensilica ConnX B10 and ConnX B20 DSPs, which are designed for automotive radar, lidar, and vehicle-to-everything (V2X) applications. SGS-TÜV Saar certified that the DSPs address both random hardware faults and systematic faults.

Synopsys is acquiring Moortec, whose process, voltage, and temperature (PVT) sensors are used in-chip to monitor the health of chips during design, manufacture, test, and in-system operation. Synopsys is adding the PVT sensors to its Silicon Lifecycle Management (SLM) platform to provide real-time environmental data on a chip’s health. This data will feed an analytics engine that can optimize operational activities to improve yield and test, as well as safety, security, and predictive-maintenance capabilities. Synopsys did not disclose any financial details of the acquisition.

Imagination launched a new neural network accelerator (NNA) for advanced driver-assistance systems (ADAS) and autonomous driving. Called IMG Series4, the AI accelerator is a multicore architecture that scales to 600 TOPS (tera operations per second), delivering 12.5 TOPS per core at less than one watt, says Imagination in a press release. Tensor Tiling splits input data tensors into multiple tiles as a way to process data efficiently. The IP’s safety features and design process conform to ISO 26262. Series4 will be available in December 2020.

Radsys used National Instruments’ Vehicle Radar Test System (VRTS) to help the Tsinghua University Suzhou Automobile Research Institute create China’s standard for testing vehicle millimeter wave (mmWave) automotive radar.

ON Semiconductor introduced a single-point direct time-of-flight (dToF) lidar that uses its Silicon Photomultiplier (SiPM) sensor. The sensor overcomes some of the issues lidar has with ambient solar light and slow response time, and is suitable for industrial proximity sensing.

Pervasive computing — Data centers, cloud, 5G, edge
Amazon will use its own Inferentia chip in its Alexa voice assistant, moving away from Nvidia chips, reports Reuters. Rekognition, Amazon’s face recognition service, will also start using Inferentia chips. Both services use the cloud — they access a data center to complete the transaction.

Synopsys says its Verification IP (VIP) for Compute Express Link (CXL) 2.0 is now available. CXL is an open standard interconnect technology for high-speed communications between CPUs and other chips that are used as accelerators. CXL is designed to improve data center performance. “The advancement of CXL as an open standard interconnect technology to accelerate next generation data center performance is our singular focus,” said Jim Pappas, chairman at CXL Consortium, in a press release. The IP is part of Synopsys’ cache coherency verification IP portfolio.

Graphcore designed its AI chip — the Colossus GC200 Intelligence Processing Unit (IPU), part of an AI platform — using verification and test tools and IP from Mentor, a Siemens business. The IPU has 59.4 billion transistors on a single 823mm² die and is manufactured on TSMC’s 7nm process. Mentor was involved in circuit verification, PCB design, protocol verification, thermal analysis, design-for-test (DFT), and bring-up of the AI processor, according to a press release.

Xilinx and Samsung Electronics announced the Samsung SmartSSD computational storage drive (CSD), which features a Xilinx Kintex UltraScale+ FPGA accelerator with one million system logic cells and almost 2,000 DSP (digital signal processing) slices for hardware acceleration, according to a press release. The companies say this is the first adaptable computational storage platform for data centers.

Intel debuted a discrete GPU for data centers called the Server GPU, based on its Xe-LP microarchitecture and aimed at cloud gaming and media experiences.

Company milestones and wins
Brewer Science, which usually deals with advanced chemistry for semiconductor industry manufacturing, stepped up to make hand sanitizer for the local community to keep people safe during the COVID-19 pandemic.


Susan Rambo is the managing editor of Semiconductor Engineering.

Source: https://semiengineering.com/week-in-review-auto-security-pervasive-computing-41/



Electronics For Quantum Communications


Moving from classic encryption algorithms with increasing key lengths to communication based on entangled quanta.


Our secure digital communications so far have functioned on the principle of key-based encryption. This involves generating a key of appropriate length, which is then used to encrypt the data. Because distributing keys is difficult, the keys are reused rather than regenerated regularly.

The regular use of the keys opens up the encryption process to attacks by mathematical methods. Protection against such attacks currently is afforded by appropriate key lengths, since the compute time required by the mathematical methods for key recovery increases exponentially with the key length. This means key lengths must already be adapted today to the growing capability of computing technology.

However, the greatest danger of the keys used in current encryption methods being recovered comes from the use of quantum computers. Because developments in this area are proceeding rapidly, quantum computers that are capable of recovering current and future key lengths in fractions of a second soon could be available. This is possible because with quantum computers, key recovery time scales linearly with the key length rather than exponentially. Classic encryption algorithms would then no longer be secure, because lengthening the key would not offer additional security.
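As a toy illustration of the scaling contrast described above (with the linear quantum scaling taken as the article's stated assumption rather than a model of any specific algorithm), in arbitrary time units:

```python
# Toy illustration of the scaling contrast described above (arbitrary time units).
# Classical brute-force effort is taken as 2**key_bits; the linear quantum scaling
# is the article's stated assumption, not a model of a specific quantum algorithm.
for key_bits in (128, 192, 256):
    classical = 2 ** key_bits          # grows exponentially with key length
    quantum = key_bits                 # grows linearly per the article's assumption
    print(f"{key_bits}-bit key: classical ~2^{key_bits} = {classical:.3e} units, "
          f"quantum ~{quantum} units")
```

Under that assumption, doubling the key length doubles the quantum attacker's effort but squares the classical attacker's, which is why longer keys would no longer buy security.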

In anticipation of this situation, research has been under way for a number of years in the area of quantum communications. The focus here is on secure communication by means of entangled quanta (in the form of photons). This requires generating entangled quanta and sending one to the recipient while the other remains with the sender. The entangled quanta have special properties that are identical for both quanta. If a quantum is intercepted on the way to the recipient and then fed back into the stream after manipulation, it loses the typical properties of the entangled pair. Upon arrival at the recipient, the manipulation can be discovered by comparison with the quantum held by the sender.
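The detection principle can be illustrated with a deliberately simplified simulation. The sketch below is not a full quantum key distribution protocol; it only shows how interference with one half of each correlated pair appears as an elevated error rate when sender and receiver compare a sample of their results:

```python
import random

# Toy simulation of the detection principle described above: sender and receiver
# compare a sample of their correlated measurement results. Interference by an
# eavesdropper disturbs the correlations and shows up as an elevated error rate.
# This is a deliberately simplified model, not a real QKD protocol.
def run_channel(n_pairs, intercepted, disturb_prob=0.25, seed=0):
    rng = random.Random(seed)
    errors = 0
    for _ in range(n_pairs):
        sender_bit = rng.randint(0, 1)
        receiver_bit = sender_bit
        if intercepted and rng.random() < disturb_prob:
            receiver_bit ^= 1          # manipulation destroys the expected correlation
        errors += sender_bit != receiver_bit
    return errors / n_pairs

print(f"Error rate, undisturbed channel: {run_channel(10_000, intercepted=False):.3f}")
print(f"Error rate, intercepted channel: {run_channel(10_000, intercepted=True):.3f}")
```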

The system designs for quantum communication are complex electrical-optical systems. A complex optical setup with (semi-)transparent mirrors is required to generate entangled photons. Various electronic components are also required to control the photon source, which must frequently operate on extremely short time scales.

The photons are often detected using single photon detectors. The achievable energy levels are very low, and electronic components are required for analyzing such low energy levels. Furthermore, the analysis electronics must operate with extreme speed – analysis rates in the GHz range are often required.

High-precision instruments are also required for measuring the arrival time of the voltage pulse from the single photon detector. Various mathematical methods are needed to recover the individual photon states in order to ensure that the received photon retains the same state as its counterpart held by the sender. Complex signal processors are used here, which are frequently designed as a combination of an FPGA and a DSP.

The required electronics are currently built from individual components. If quantum communication is to become standard, however, the electronic components must be implemented in just a few circuits. Work is currently beginning on the first subcomponents, such as fast analog-to-digital converters (ADCs), together with the digital analysis electronics consisting of an FPGA and a DSP.

Andy Heinig is general manager for system integration at the Fraunhofer Institute of Integrated Circuits, Division of Engineering and Adaptive Systems.

Source: https://semiengineering.com/electronics-for-quantum-communications/
