
Major funding announcement heralds new phase of UK quantum secure communications research and development

Published on 11 July 2019

The UK Quantum Communications Hub has been awarded further funding of approximately £24M to deliver its vision of integrated quantum secured communications. Funded through the UK National Quantum Technologies Programme as part of a national network of four quantum technology hub consortia, the Quantum Communications Hub is currently completing the first phase of its five-year programme due to finish in November. Achievements during this period include the establishment of the UK’s first Quantum Network, a ground-breaking chip-to-chip quantum secured communication system for data encryption and other applications, the delivery of handheld or “commercial” short-range, free-space systems, and major advances in next generation quantum communication technologies.

The theme of the new phase of the Hub is one of expansion: delivering quantum secured communication technologies at all distance scales and offering a range of applications and services with the potential for integration with existing infrastructure. Technologies and methods widely used today will be vulnerable to emerging quantum computing technologies, so conventionally encrypted information that is distributed now but requires long-term security will be at risk in the future. Practical, cost-effective “quantum-safe” methods must therefore be developed that are not vulnerable to any future quantum technology.

As part of the work plan for the next five years, the Quantum Communications Hub aims to deliver a wide range of technologies fundamentally based on the concept of Quantum Key Distribution (QKD) for the secure distribution of secret keys. During its second phase, the Hub will be engineering “many-to-one” flexible, handheld technologies to enhance practicality and real-world operation in short-distance communication scenarios. The UK Quantum Network (UKQN), a Hub-established national facility for communications at city, metropolitan or inter-city scale, will be enhanced with added capability and new QKD technologies – using quantum light analogous to that used in conventional communications, or using entanglement, working towards even longer-distance fibre communications. A new programme of work on ground-to-satellite QKD links will address the longest distances: intercontinental communications and links across oceans.

Commercial QKD technologies at all distance scales will require miniaturisation for size, weight and power savings, and to enable mass manufacture. The Hub will therefore address key engineering challenges for on-chip operation and integration, while also establishing national capability, both in quantum communication technologies and in their key components such as light sources and detectors. Finally, security – of devices, of systems and end-to-end – will form a cross-cutting theme across all technology approaches, incorporating work on metrology, calibration and industrial standards; cryptographic analysis of quantum and post-quantum technologies; and security analysis, vulnerability testing and the development of countermeasures – all from the perspective of providing practical and secure applications and services.
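To make the basic QKD idea concrete, here is a small Python sketch of the sifting step of an idealized BB84-style protocol. It assumes a noiseless channel with no eavesdropper, and it does not represent the Hub's actual systems; the parameters and function names are invented purely for illustration.

```python
# Toy BB84-style key sifting, assuming an ideal, noiseless, eavesdropper-free
# channel. This only illustrates the QKD concept; it is not the Hub's technology.
import secrets

def random_bits(n):
    return [secrets.randbits(1) for _ in range(n)]

def bb84_sift(n=1000):
    alice_bits  = random_bits(n)   # Alice's secret bit values
    alice_bases = random_bits(n)   # 0 = rectilinear basis, 1 = diagonal basis
    bob_bases   = random_bits(n)   # Bob chooses his measurement bases independently

    # With no noise or eavesdropping, Bob's outcome equals Alice's bit whenever
    # their bases agree; when they disagree, his outcome is random.
    bob_results = [
        bit if a == b else secrets.randbits(1)
        for bit, a, b in zip(alice_bits, alice_bases, bob_bases)
    ]

    # Public discussion ("sifting"): keep only positions where the bases matched.
    keep = [i for i in range(n) if alice_bases[i] == bob_bases[i]]
    alice_key = [alice_bits[i] for i in keep]
    bob_key   = [bob_results[i] for i in keep]
    assert alice_key == bob_key    # ideal channel: the sifted keys agree
    return alice_key

if __name__ == "__main__":
    key = bb84_sift()
    print(f"kept {len(key)} sifted bits out of 1000 (about half, as expected)")
```

In a deployed system the quantum states themselves, channel noise, error correction and privacy amplification do the heavy lifting; the point of the sketch is only that matching measurement bases leave the two parties with a shared secret key.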

The partnership is again led by the University of York, with Professor Tim Spiller as Director. As with the programme of work, the research consortium has been expanded significantly to include the Universities of Bristol, Cambridge, Glasgow, Heriot-Watt, Kent, Oxford, Queen’s University Belfast, Sheffield and Strathclyde, as well as the National Physical Laboratory and RAL Space. Industrial partners include BT, ADVA, ArQit, IDQ, Teledyne e2v, Fraunhofer UK, Toshiba, Cognizant, AAC Microtec Clydespace, Craft Prospect, and many more. The Hub project is funded by the Engineering and Physical Sciences Research Council (EPSRC), part of UKRI.

Source: https://www.quantumcommshub.net/news/major-funding-announcement-heralds-new-phase-of-uk-quantum-secure-communications-research-and-development/


Sleep Evolved Before Brains. Hydras Are Living Proof.


The hydra is a simple creature. Less than half an inch long, its tubular body has a foot at one end and a mouth at the other. The foot clings to a surface underwater — a plant or a rock, perhaps — and the mouth, ringed with tentacles, ensnares passing water fleas. It does not have a brain, or even much of a nervous system.

And yet, new research shows, it sleeps. Studies by a team in South Korea and Japan showed that the hydra periodically drops into a rest state that meets the essential criteria for sleep.

On the face of it, that might seem improbable. For more than a century, researchers who study sleep have looked for its purpose and structure in the brain. They have explored sleep’s connections to memory and learning. They have numbered the neural circuits that push us down into oblivious slumber and pull us back out of it. They have recorded the telltale changes in brain waves that mark our passage through different stages of sleep and tried to understand what drives them. Mountains of research and people’s daily experience attest to human sleep’s connection to the brain.

But a counterpoint to this brain-centric view of sleep has emerged. Researchers have noticed that molecules produced by muscles and some other tissues outside the nervous system can regulate sleep. Sleep affects metabolism pervasively in the body, suggesting that its influence is not exclusively neurological. And a body of work that’s been growing quietly but consistently for decades has shown that simple organisms with less and less brain spend significant time doing something that looks a lot like sleep. Sometimes their behavior has been pigeonholed as only “sleeplike,” but as more details are uncovered, it has become less and less clear why that distinction is necessary.

It appears that simple creatures — including, now, the brainless hydra — can sleep. And the intriguing implication of that finding is that sleep’s original role, buried billions of years back in life’s history, may have been very different from the standard human conception of it. If sleep does not require a brain, then it may be a profoundly broader phenomenon than we supposed.

Recognizing Sleep

Sleep is not the same as hibernation, or coma, or inebriation, or any other quiescent state, wrote the French sleep scientist Henri Piéron in 1913. Though all involved a superficially similar absence of movement, each had distinctive qualities, and that daily interruption of our conscious experience was particularly mysterious. Going without it made one foggy, confused, incapable of clear thought. For researchers who wanted to learn more about sleep, it seemed essential to understand what it did to the brain.

And so, in the mid-20th century, if you wanted to study sleep, you became an expert reader of electroencephalograms, or EEGs. Putting electrodes on humans, cats or rats allowed researchers to say with apparent precision whether a subject was sleeping and what stage of sleep they were in. That approach produced many insights, but it left a bias in the science: Almost everything we learned about sleep came from animals that could be fitted with electrodes, and the characteristics of sleep were increasingly defined in terms of the brain activity associated with them.

This frustrated Irene Tobler, a sleep physiologist working at the University of Zurich in the late 1970s, who had begun to study the behavior of cockroaches, curious whether invertebrates like insects sleep as mammals do. Having read Piéron and others, Tobler knew that sleep could be defined behaviorally too.

She distilled a set of behavioral criteria to identify sleep without the EEG. A sleeping animal does not move around. It is harder to rouse than one that’s simply resting. It may take on a different pose than when awake, or it may seek out a specific location for sleep. Once awakened it behaves normally rather than sluggishly. And Tobler added a criterion of her own, drawn from her work with rats: A sleeping animal that has been disturbed will later sleep longer or more deeply than usual, a phenomenon called sleep homeostasis.

Tobler soon laid out her case that cockroaches were either sleeping or doing something very like it. The response from her colleagues, most of whom studied higher-order mammals, was immediate. “It was heresy to even consider this,” Tobler said. “They really made fun of me in my early years. It wasn’t very pleasant. But I sort of felt time would tell.” She studied scorpions, giraffes, hamsters, cats — 22 species in all. She was convinced that science would eventually confirm that sleep was widespread, and in later studies of sleep, her behavioral criteria would prove critical.

Those criteria were on the minds of Amita Sehgal at the University of Pennsylvania School of Medicine, Paul Shaw (now at Washington University School of Medicine in St. Louis) and their colleagues in the late 1990s. They were part of two independent groups that had begun to look closely at the quiescence of fruit flies. Sleep was still largely the domain of psychologists, Sehgal said, rather than scientists who studied genetics or cell biology. With respect to mechanisms, from a molecular biologist’s perspective, “the sleep field was sleeping,” she said.

However, the neighboring field of circadian clock biology was exploding with activity, following the discovery of genes that regulate the body’s 24-hour clock. If molecular mechanisms behind sleep could be uncovered — if a well-understood model organism like the fruit fly could be used to study them — then there was the potential for a revolution in sleep science as well. Flies, like Tobler’s cockroaches and scorpions, could not be easily hooked up to an EEG machine. But they could be observed minutely, and their responses to deprivation could be recorded.

With Less and Less Brain

In January 2000, Sehgal and her colleagues published their paper asserting that flies were sleeping. That March, Shaw and colleagues published their parallel work confirming the claim. The field was still reluctant to admit that true sleep existed in invertebrates, and that human sleep could be usefully studied using flies, Shaw said. But the flies proved their worth. Today more than 50 labs use flies to study sleep, generating findings that suggest that sleep has a set of core features present across the animal kingdom. And biologists did not stop with flies. “Once we showed that flies slept,” Shaw said, “then it became possible to say that anything slept.”

The sleep that researchers studied in other species was not always similar to the standard human variety. Dolphins and migrating birds can send half their brain to sleep while appearing awake, researchers realized. Elephants spend almost every hour awake, while little brown bats spend almost every hour asleep.

In 2008, David Raizen and his colleagues even reported sleep in Caenorhabditis elegans, the roundworm widely used as a model organism in biology laboratories. They have only 959 body cells (apart from their gonads), with 302 neurons that are mostly gathered in several clusters in the head. Unlike many other creatures, C. elegans does not sleep for a portion of every day of its life. Instead, it sleeps for short bouts during its development. It also sleeps after periods of stress as an adult.

The evidence for sleep in creatures with minimal nervous systems seemed to reach a new high about five years ago with studies of jellyfish. The Cassiopea jellies, about four inches long, spend most of their time upside down, tentacles reaching toward the ocean surface, and pulsating to push seawater through their bodies. When Michael Abrams, now a fellow at the University of California, Berkeley, and two other graduate students at the California Institute of Technology asked if Cassiopea might sleep, they were continuing the line of inquiry that Tobler had followed when she studied cockroaches, investigating whether sleep exists in ever simpler organisms. If jellyfish sleep, that suggests sleep may have evolved more than 1 billion years ago and could be a fundamental function of almost all organisms in the animal kingdom, many of which do not have brains.

That’s because, among animals, jellyfish are evolutionarily about as far away as you can get from mammals. Their neighbors in the tree of life include the sponges, which spend their lives attached to rocks in the ocean, and placozoans, tiny clusters of cells first seen by scientists on the walls of seawater aquariums. Unlike other creatures observed sleeping, Cassiopea have no brain, no centralized nervous system. But they can move, and they have periods of rest. It should be possible, the Caltech students reasoned, to apply the criteria for behavioral sleep to them.

The first few boxes were relatively easy to check. Although the jellyfish pulsed night and day, Abrams and his collaborators showed that the rate of pulsing slowed in a characteristic way at night, and that animals could be roused from this state with some effort. (There were also indications that the jellyfish favored a particular posture on a platform in the tank during these quieter periods, but Abrams considers that evidence to still be anecdotal.) Testing whether the jellyfish had sleep homeostasis was much harder and required finding ways to gently disturb them without distressing them. In the end, Abrams and his collaborators settled on dropping the platform out from underneath them; when that happened, the Cassiopea would sink and rise again, pulsing at their daytime rate.

Later, the telltale signs of homeostatic regulation were there: The more the jellyfish were disturbed, the less the creatures moved the next day. “We weren’t sold on it until we saw the homeostatic regulation,” Abrams said. The team’s results were published in 2017, and Abrams has continued to probe the jellyfish’s genetics and neuroscience since then.

Sleeping in Context

The new revelations about sleep in hydras push the sleep discoveries to a new extreme. The hydra’s body and nervous system are even more rudimentary than Cassiopea’s. Yet as the researchers from Kyushu University in Japan and Ulsan National Institute of Science and Technology in South Korea demonstrated, once a hydra entered a rest state, a pulse of light would rouse it, and it too slept longer after repeated deprivation, among other findings.

Hydra sleep has its peculiarities: Dopamine, which usually makes animals sleep less, caused the hydra to go still. The hydra does not seem to sleep on a 24-hour cycle, instead spending part of every four hours asleep. Something about the hydra’s way of life may have made these traits advantageous, Tobler suggests.

But despite those differences, hydra sleep may overlap with other animals’ sleep at the genomic level. When the researchers looked for gene activity altered by sleep deprivation in hydras, they saw a few familiar ones. “At least some genes conserved in other animals are involved in sleep regulation in hydra,” wrote Taichi Itoh, an assistant professor at Kyushu University and a leader of the new study, in an email to Quanta. That finding suggests that the Cnidaria phylum of animals, which includes hydras and jellyfish, already had some genetic components of sleep regulation before it diverged from the ancestors of other groups of animals. As those animals gradually evolved centralized nervous systems, sleep may have taken on new functions for maintaining them.

What, then, does sleep do in the absence of a brain? Raizen suspects that at least for some animals, sleep has a primarily metabolic function, allowing certain biochemical reactions to take place that can’t happen during waking hours. It may divert the energy that would be used by alertness and movement into other processes, ones that are too costly to take place while the animal is awake. For example, C. elegans seems to use sleep to enable the growth of its body and support the repair of its tissues. In sleep-deprived hydras, the cell divisions that are part of everyday life are paused. Something similar has been seen in the brains of sleep-deprived rats and in fruit flies. Managing the flow of energy may be a central role for sleep.

All this research on very simple sleepers raises questions about the very first organism that slept. This first sleeper, whatever it was, probably vanished more than 1 billion years ago. If it was the common ancestor between hydras and humans, it likely had neurons and something like muscle that enabled it to move — and the absence of that movement was characteristic of its version of sleep, fulfilling its special needs.

“If that animal slept, sleep was for whatever that context was,” Abrams said. Sleep might have helped to maintain the first sleeper’s rudimentary nervous system, but it could just as easily have been for the benefits of its metabolism or digestion. “Before we had a brain, we had a gut,” he said.

Even deeper questions are now being asked. In a 2019 opinion paper, Raizen and his co-authors wondered: If sleep happens in neurons, then what is the minimum number of neurons that can sleep? Can the need for sleep be driven by other kinds of cells, as work implicating liver and muscle cells suggests?

“If you really want to push the envelope, do animals that do not have neurons at all sleep?” Raizen asked.

In fact, there are a few organisms whose behavior might someday reveal the answer. Placozoans, the microscopic multicellular creatures that seem to be among the simplest in the animal kingdom, move and react to their surroundings. They have no neurons and no muscles. Neither do sponges, which are anchored in place but still respond to their environment.

“I’m often asked, ‘Do sponges sleep?’” said Abrams. “That’s a whole new world. There might be ways to test that.”

Source: https://www.quantamagazine.org/sleep-evolved-before-brains-hydras-are-living-proof-20210518/


Can Machines Control Our Brains?


The raging bull locked its legs mid-charge. Digging its hooves into the ground, the beast came to a halt just before it would have gored the man. Not a matador, the man in the bullring standing eye-to-eye with the panting toro was the Spanish neuroscientist José Manuel Rodriguez Delgado, in a death-defying public demonstration in 1963 of how violent behavior could be squelched by a radio-controlled brain implant. Delgado had pressed a switch on a hand-held radio transmitter to energize electrodes implanted in the bull’s brain. Remote-controlled brain implants, Delgado argued, could suppress deviant behavior to achieve a “psychocivilized society.”

Unsurprisingly, the prospect of manipulating the human mind with brain implants and radio beams ignited public fears that curtailed this line of research for decades. But now there is a resurgence using even more advanced technology. Laser beams, ultrasound, electromagnetic pulses, mild alternating and direct current stimulation and other methods now allow access to, and manipulation of, electrical activity in the brain with far more sophistication than the needlelike electrodes Delgado stabbed into brains.

Billionaires Elon Musk of Tesla and Mark Zuckerberg of Facebook are leading the charge, pouring millions of dollars into developing brain-computer interface (BCI) technology. Musk says he wants to provide a “superintelligence layer” in the human brain to help protect us from artificial intelligence, and Zuckerberg reportedly wants users to upload their thoughts and emotions over the internet without the bother of typing. But fact and fiction are easily blurred in these deliberations. How does this technology actually work, and what is it capable of?

As early as 1964, Delgado’s technology could exert a surprising degree of control over human brains. Simply by energizing implanted electrodes, he could quell a raging brain storm mid-seizure, or suppress mental illnesses in an instant — but he could also command a person’s limbs to move, overwhelm a person with sexual ecstasy or plunge them into deep, suicidal despair. No wonder people got nervous about this technology.

Even recently, widely respected neuroscientists have sounded the alarm. A cautionary editorial, published in 2017 in Nature, opens with a scene that could have been found in an episode of Black Mirror, a show whose plots often center on mind-control technology. The neuroscientists describe a scenario in which a brain implant that enables a paralyzed man to control a prosthetic arm suddenly goes haywire because the man feels frustrated, and the arm attacks an assistant with its steely claws.

I find this Frankenstein scenario ridiculous. Electrodes placed in the motor cortex to activate prosthetic limb movement do not access emotion. Moreover, no matter what you may read in sensational articles, neuroscientists do not yet understand how thoughts, emotions and intentions are coded in the pattern of neural impulses zipping through neural circuits: The biological obstacles of mind hacking are far greater than the technological challenges.

Today’s BCI devices work by analyzing data, in much the same way that Amazon tries to predict what book you might want next. Computers monitoring streams of electrical activity, picked up by a brain implant or a removable electrode cap, learn to recognize how the traffic pattern changes when a person makes an intended limb movement.

For example, the ongoing oscillations in electrical activity surging through the cerebral cortex, known as brain waves, are suddenly suppressed when a person moves a limb — or even thinks about moving it. This phenomenon reflects an abrupt change in communication among thousands of neurons, like the sudden hush in a restaurant after a server drops a glass: You cannot understand conversations between individual diners, but the collective hush is a clear signal. Scientists can use the interruption in electrical traffic in the cerebral cortex to trigger a computer to activate a motor in a prosthetic arm, or to click a virtual mouse on a computer screen. But even when it is possible to tap into an individual neuron with microelectrodes, neuroscientists can’t decode its neuronal firing as if it were so much computer code; they have to use machine learning to recognize patterns in the neuron’s electrical activity that correlate with a behavioral response. Such BCIs operate by correlation, much the way we depress the clutch in a car by listening to the sound of the engine.
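As a concrete (and heavily simplified) illustration of that correlational approach, the sketch below watches the power of a synthetic 8-12 Hz “mu” rhythm and flags the moment it is suddenly suppressed. The sampling rate, thresholds and signal model are all invented for illustration; real BCIs add calibration, artifact rejection and machine-learned classifiers on top.

```python
# Sketch of movement-intent detection from a synthetic EEG-like signal: the
# ~8-12 Hz "mu" rhythm is present at rest and suppressed when movement is
# imagined. All numbers here are made up for illustration.
import numpy as np
from scipy.signal import welch

FS = 250                                   # assumed sampling rate, Hz
rng = np.random.default_rng(0)

def synthetic_eeg(seconds=10, move_at=5.0):
    """Noise plus a 10 Hz rhythm that vanishes after `move_at` seconds."""
    t = np.arange(0, seconds, 1 / FS)
    mu = 10e-6 * np.sin(2 * np.pi * 10 * t)        # ~10 microvolt mu rhythm
    mu[t >= move_at] = 0.0                         # event-related desynchronization
    return t, mu + 3e-6 * rng.standard_normal(t.size)

def mu_power(window):
    """Average spectral power in the 8-12 Hz band for one window of samples."""
    freqs, psd = welch(window, fs=FS, nperseg=window.size)
    return psd[(freqs >= 8) & (freqs <= 12)].mean()

t, eeg = synthetic_eeg()
win = FS                                           # one-second analysis windows
baseline = mu_power(eeg[: 2 * win])                # calibrate on the first 2 s
for start in range(0, eeg.size - win, win // 2):
    if mu_power(eeg[start : start + win]) < 0.3 * baseline:
        print(f"movement intent inferred near {start / FS:.1f} s (mu power dropped)")
        break
```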

And just as race car drivers shift gears with precision, this correlational approach of interfacing human and machine can be very effective. Prosthetic devices that match the brain’s electrical activity with sensorimotor function can prove life-changing, restoring some lost function and independence to people who are paralyzed or who suffer other neurological losses.

But there’s more than fancy technology at work in BCI devices — the brain itself plays a huge role. Through a prolonged trial-and-error process the brain is somehow rewarded by seeing the intended response occur, and over time it learns to generate the electrical signal it knows the computer will recognize. All of this takes place beneath the level of consciousness, and neuroscientists don’t really know how the brain accomplishes it. It’s a pretty far cry from the sensational fears and promises that accompany the specter of mind control.

For the sake of argument, however, let’s imagine that we do learn how information is encoded in neuronal firing patterns. Then, in true Black Mirror fashion, let’s say we want to insert a foreign thought via brain implant. We still have to overcome many obstacles, according to the neuroscientist Timothy Buschman, who is actively pursuing research using brain recording and stimulation. “I will know which brain region to target, but there is no way I will know which neuron,” he told me in his lab at Princeton University. “Even if I could target the same neuron in every individual, what that neuron does will be different in different individuals’ brains.”

No matter how much industrial power someone like Musk brings to the problem, Buschman explained mathematically that biology, not technology, is the real bottleneck. Even if we oversimplify neural coding by assigning a neuron to be either “on” or “off,” in a network of only 300 neurons we still have 2^300 possible states — more than all the atoms in the known universe. “It is an impossible number of states,” Buschman said.
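The arithmetic behind that claim is easy to check: 2^300 is on the order of 10^90, while the number of atoms in the observable universe is commonly estimated at around 10^80. A quick check in Python, using exact integer arithmetic:

```python
# Back-of-the-envelope check of Buschman's point: 300 binary neurons already
# yield more possible states than the ~10^80 atoms in the observable universe.
states = 2 ** 300
atoms_estimate = 10 ** 80                      # common order-of-magnitude estimate
print(f"2^300 is about {float(states):.2e}")   # roughly 2.04e+90
print(states > atoms_estimate)                 # True
```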

Ponder for a minute that the human brain has about 85 billion neurons.

But what about Zuckerberg’s plan for users to upload their thoughts and emotions? Reading information out of the brain is, after all, more feasible than writing information into it. Indeed, Marcel Just and his colleagues at Carnegie Mellon University are now using fMRI to reveal a person’s private thoughts, in an effort to understand how the brain processes, stores and recalls information. They can tell what number a person is thinking of, what emotion they may be feeling or whether they are having thoughts of suicide. This brain-machine mentalism works by asking people to have a specific thought or cognitive experience over and over while inside an fMRI machine. Since cognition and emotion activate specific sets of networks throughout the brain, machine learning can eventually identify which constellations of brain activity patterns correlate with specific thoughts or emotions. Remarkably, the brainwide activity patterns revealing private thoughts are universal, regardless of a person’s native language.
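A toy version of that pattern-matching idea might look like the sketch below, which trains a classifier on synthetic “voxel” patterns. The class names, noise levels and use of scikit-learn are assumptions made purely for illustration, not details of Just’s actual pipeline.

```python
# Sketch of concept decoding from activity patterns: each concept is assumed to
# evoke its own noisy spatial pattern, and a classifier learns to tell them apart.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_voxels, n_trials = 200, 40

pattern_apple = rng.normal(0, 1, n_voxels)       # idealized "apple" pattern
pattern_spaghetti = rng.normal(0, 1, n_voxels)   # idealized "spaghetti" pattern

def trials(pattern, n):
    """Repeated presentations of one concept, with trial-to-trial noise."""
    return pattern + rng.normal(0, 2.0, size=(n, n_voxels))

X = np.vstack([trials(pattern_apple, n_trials), trials(pattern_spaghetti, n_trials)])
y = np.array(["apple"] * n_trials + ["spaghetti"] * n_trials)

clf = LogisticRegression(max_iter=1000).fit(X, y)

# Decode one new, unseen trial of the participant thinking about an apple.
new_trial = (pattern_apple + rng.normal(0, 2.0, n_voxels)).reshape(1, -1)
print(clf.predict(new_trial))                    # expected: ['apple']
```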

A surprising finding from this research is that the brain does not store information the way we might think — as discrete items categorized logically in a database. Instead, information is encoded as integrated concepts that encapsulate all the sensations, emotions, relevant experiences and significance associated with an item. The words “spaghetti” and “apple” are logically similar in being food items, but each one has a different feel that activates a unique constellation of brain regions. This explains how Just can use the very slow method of fMRI, which takes many minutes to acquire brain images, to determine what sentence a person is reading. The brain does not decode and store written information word by word, the way Google Translate does: It encodes the meaning of the sentence in its entirety.

This technological mind reading might seem scary. “Nothing is more private than a thought,” Just said. But such fears are simply not grounded in fact. Similar to the BCI used to operate a prosthetic device, this mind reading requires intense cooperation and effort by the participant. People can easily defeat it, Just’s colleague Vladimir Cherkassky explained. “We need the person to think about an apple six times. So all they have to do is think about a red apple the first time, a green apple the next time, maybe a Macintosh computer, and we are done.”

Critics often cite ethical concerns with BCI: loss of privacy, identity, agency and consent. They worry about abuses to enhance performance or the destruction of free will, and they raise concerns over disparities within society that reduce access to the technology. And, yes, as with any technology it’s possible that bad actors can use it to cause deliberate harm. These are all good points, worth consideration as the technology improves. But it’s also worth remembering that we already face and accept such concerns from other biomedical advances, such as DNA sequencing, anesthesia and neurosurgery.

To me, the harm BCI might someday do is outweighed by the good it’s already doing. Current methods of treating neurological and psychological disorders with chemicals or surgery are woefully inadequate. Interfacing with the brain through the precise application of electricity and diagnosing disorders by monitoring the brain’s electrical activity shows great promise. When Nathan Copeland shook President Obama’s hand with a robotic arm controlled by electrodes implanted in his motor cortex, he also felt the grip of a handshake through sensors in the prosthetic fingers that stimulated electrodes in his sensory cortex. BCI can also restore vision and hearing, generate synthetic speech, and help treat disorders like obsessive-compulsive disorder, addiction and Parkinson’s disease.

It is natural to fear what we do not understand. For most of us, fear of mind control is an abstraction, but Copeland faced the reality of letting scientists open his skull and implant electrodes in his brain. When I met him in 2018, Copeland’s brain implants had been removed, because the electrodes have a limited lifetime. “Looking back at it,” he said, “I would do it as many times as they would let me.”

Source: https://www.quantamagazine.org/how-brain-computer-interface-technology-is-different-from-mind-control-20210517/


Contextual Subspace Variational Quantum Eigensolver


William M. Kirby¹, Andrew Tranter¹,², and Peter J. Love¹,³

¹Department of Physics and Astronomy, Tufts University, Medford, MA 02155
²Cambridge Quantum Computing, 9a Bridge Street, Cambridge CB2 1UB, United Kingdom
³Computational Science Initiative, Brookhaven National Laboratory, Upton, NY 11973


Abstract

We describe the $\textit{contextual subspace variational quantum eigensolver}$ (CS-VQE), a hybrid quantum-classical algorithm for approximating the ground state energy of a Hamiltonian. The approximation to the ground state energy is obtained as the sum of two contributions. The first contribution comes from a noncontextual approximation to the Hamiltonian, and is computed classically. The second contribution is obtained by using the variational quantum eigensolver (VQE) technique to compute a contextual correction on a quantum processor. In general the VQE computation of the contextual correction uses fewer qubits and measurements than the VQE computation of the original problem. Varying the number of qubits used for the contextual correction adjusts the quality of the approximation. We simulate CS-VQE on tapered Hamiltonians for small molecules, and find that the number of qubits required to reach chemical accuracy can be reduced by more than a factor of two. The number of terms required to compute the contextual correction can be reduced by more than a factor of ten, without the use of other measurement reduction schemes. This indicates that CS-VQE is a promising approach for eigenvalue computations on noisy intermediate-scale quantum devices.

The variational quantum eigensolver (VQE) is a quantum simulation algorithm that estimates the ground state energy of a system, given its Hamiltonian. The quantum computer is used to prepare a guess or “ansatz” for the ground state, and to evaluate its energy. A classical computer is then used to vary the ansatz, and this whole process is repeated, ideally until the energy approaches its global minimum, the ground state energy.
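A minimal numerical sketch of that loop, for a toy one-qubit Hamiltonian chosen arbitrarily here (with an exact statevector calculation standing in for the quantum processor), might look like this:

```python
# Toy VQE: minimize the energy <psi(theta)|H|psi(theta)> of an ansatz state
# Ry(theta)|0> for the arbitrary one-qubit Hamiltonian H = Z + 0.5 X.
import numpy as np
from scipy.optimize import minimize_scalar

X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
H = Z + 0.5 * X                                    # toy Hamiltonian

def ansatz(theta):
    """State prepared by a single Ry(theta) rotation acting on |0>."""
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

def energy(theta):
    """The 'measurement' step: expectation value of H in the ansatz state."""
    psi = ansatz(theta)
    return float(psi @ H @ psi)

# The 'classical optimizer' step: vary theta to push the energy down.
result = minimize_scalar(energy, bounds=(0.0, 2 * np.pi), method="bounded")
exact = np.linalg.eigvalsh(H)[0]                   # exact ground-state energy
print(f"VQE estimate: {result.fun:.6f}   exact: {exact:.6f}")
```

For this tiny example the ansatz can represent the exact ground state, so the two numbers agree; on real hardware the energy is instead estimated from repeated measurements on the qubits.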
Contextuality is a feature of quantum mechanics that does not appear in classical physics. A system is contextual when one cannot model its observables as having preexisting values before measurement. Applied to VQE, contextuality is a property that the set of measurements involved in evaluating energies may or may not possess. When the set of measurements is noncontextual, it can be described by a classical statistical model, but when it is contextual, such models are generally ruled out.
In this work, we showed how to take a VQE instance and partition it into a noncontextual part and a remaining part that in general is contextual. The noncontextual part can be simulated classically, and the contextual part, which we can think of as encoding the “intrinsically quantum part” of the original problem, is simulated using VQE. We call this algorithm contextual subspace VQE or CS-VQE, and it is an example of a genuinely hybrid quantum-classical algorithm where part of the solution is obtained using a classical computer and part is obtained using a quantum computer.
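In symbols (notation introduced here purely as a summary, not taken from the paper itself), the CS-VQE estimate of the ground state energy takes the form

$E_{\text{CS-VQE}} = E_{\text{noncontextual}} + \Delta E_{\text{contextual}},$

where the first term is obtained classically from the noncontextual part of the Hamiltonian and the second is the correction computed by VQE on the contextual remainder.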
Since the contextual part is only a subset of the original problem, the VQE algorithm it requires uses fewer qubits and measurements than the original problem, in general. We can vary the size of the contextual part to trade off use of more qubits and measurements for better accuracy in the overall approximation. We tested this for electronic structure Hamiltonians of various atoms and small molecules: in some cases we reached useful accuracy using fewer than half as many qubits as standard VQE, and in nearly all cases at least one qubit was saved. In summary, by using contextuality to isolate the “intrinsically quantum part” of a VQE instance, we can save quantum resources while still taking advantage of those that are available on our quantum computer.


Source: https://quantum-journal.org/papers/q-2021-05-14-456/


Scientists Catch Jumping Genes Rewiring Genomes


Roughly 500 million years ago, something that would forever change the course of eukaryotic development was brewing in the genome of some lucky organism: a gene called Pax6. The gene is thought to have orchestrated the formation of a primitive visual system, and in organisms today, it initiates a genetic cascade that recruits more than 2,000 genes to build different parts of the eye.

Pax6 is only one of thousands of genes encoding transcription factors that each have the powerful ability to amplify and silence thousands of other genes. While geneticists have made leaps in understanding how genes with relatively simple, direct functions could have evolved, explanations for transcription factors have largely eluded scientists. The problem is that the success of a transcription factor depends on how usefully it targets huge numbers of sites throughout the genome simultaneously; it’s hard to picture how natural selection enables that to happen. The answer may hold the key to understanding how complex evolutionary novelties such as eyes arise, said Cédric Feschotte, a molecular biologist at Cornell University.

For more than a decade, Feschotte has pointed to transposons as the ultimate innovators in eukaryotic genomes. Transposons are genetic elements that can copy themselves and insert those copies throughout the genome using an enzyme they encode, called a transposase. Feschotte may have finally found the smoking gun he has been looking for: As he and his colleagues recently reported in Science, these jumping genes have fused with other genes nearly 100 times in tetrapods over the past 300 million years, and many of the resulting genetic mashups are likely to encode transcription factors.

The study provides a plausible explanation for how so-called master regulators like Pax6 could have been born, said Rachel Cosby, the first author of the new study, who was a doctoral student in Feschotte’s lab and is now a postdoc at the National Institutes of Health. Although scientists had theorized that Pax6 arose from a transposon hundreds of millions of years ago, mutations since that time have obscured clues about how it formed. “We could see that it was probably derived from a transposon, but it happened so long ago that we missed the window to see how it evolved,” she said.

David Adelson, chair of bioinformatics and computational genetics at the University of Adelaide in Australia, who was not involved with the study, said, “This study provides a good mechanistic understanding of how these new genes can form, and it squarely implicates the transposon activity itself as the cause.”

Scientists have long known that transposons can fuse with established genes because they have seen the unique genetic signatures of transposons in a handful of them, but the precise mechanism behind these unlikely fusion events has largely been unknown. By analyzing genes with transposon signatures from nearly 600 tetrapods, the researchers found 106 distinct genes that may have fused with a transposon. The human genome carries 44 genes likely to have been born this way.

The structure of genes in eukaryotes is complicated, because their blueprints for making proteins are broken up by introns. These noncoding sequences are transcribed, but they get snipped out of the messenger RNA transcripts before translation into protein occurs. But according to Feschotte’s new study, a transposon can occasionally hop into an intron and change what gets translated. In some of these cases, the protein made by the fusion gene is a mashup of the original gene’s product and the transposon’s transposase.

Once the fusion protein is created, “it has a ready-made set of potential binding sites scattered all over the genome,” Adelson said, because its transposase part is still drawn to transposons. The more potential binding sites for the fusion protein, the higher the likelihood that it changes gene expression in the cell, potentially giving rise to new functions.

“These aren’t just new genes, but entire new architectures for proteins,” Feschotte said.

Cosby described the 106 fusion genes described in the study as the “tiniest tip of the iceberg.” Adelson agreed and explained why: Events that randomly create fusion genes for functional, non-harmful proteins rely on a series of coincidences and must be exceedingly rare; for the fusion genes to spread throughout a population and withstand the test of time, nature must also positively select for them in some way. For the researchers to have found the examples described in the study so readily, transposons must surely cause fusion events much more often, he said.

“All of these steps are very unlikely to happen, but this is how evolution works,” Feschotte said. “It’s very quirky, opportunistic and very unlikely in the end, yet you see it happen over and over again on the timescales of hundreds of millions of years.”

To test whether the fusion genes acted as transcription factors, Cosby and her colleagues homed in on one that evolved in bats 25 million to 45 million years ago — a blink of an eye in evolutionary time. When they used CRISPR to delete it from the bat genome, the changes were striking: The removal dysregulated hundreds of genes. As soon as they restored it, normal gene activity resumed.

To Adelson, this shows that Cosby and her co-authors practically “caught one of these fusion events in the act.” He added, “It’s especially surprising because you wouldn’t expect a new transcription factor to cause wholesale rewiring of transcriptional networks if it had been acquired relatively recently.”

Although the researchers didn’t determine the function of the other fusion proteins definitively, the genetic hallmarks of transcription factors are there: Around a third of the fusion proteins contain a part called KRAB that is associated with repressing DNA transcription in animals. Why transposases tended to fuse with KRAB-encoding genes is a mystery, Feschotte said.

Transposons comprise a hefty chunk of eukaryotic DNA, yet organisms take extreme measures to carefully regulate their activity and prevent the havoc caused by problems such as genomic instability and harmful mutations. These dangers made Adelson wonder if fusion genes sometimes endanger orderly gene regulation. “Not only are you perturbing one thing, but you’re perturbing this whole cascade of things,” he said. “How is it that you can change expression of all these things and not have a three-headed bat?” Cosby, however, thinks it’s unlikely that a fusion gene leading to harmful morphogenic changes would readily propagate through a population.

Damon Lisch, a plant geneticist at Purdue University who studies transposable elements and was not involved with the study, said he hopes this study pushes back against a widespread but misguided notion that transposons are “junk DNA.” Transposable elements generate tremendous amounts of diversity and have been implicated in the evolution of the placenta and the adaptive immune system, he explained. “These are not junk — they’re living little creatures in your genome that are under very active selection over long periods of time, and what that means is that they evolve new functions to stay in your genome,” he said.

Though this study highlights the mechanism underlying transposase fusion genes, the vast majority of new genetic material is thought to form through genetic duplication, in which genes are accidentally copied and the extras diverge through mutation. But a large quantity of genetic material does not mean that new protein functions will be significant, said Cosby, who is continuing to investigate the function of the fusion proteins.

“Evolution is the ultimate tinkerer and ultimate opportunist,” said David Schatz, a molecular geneticist at Yale University who was not involved with the study. “If you give evolution a tool, it may not use it right away, but sooner or later it will take advantage of it.”

Source: https://www.quantamagazine.org/scientists-catch-jumping-genes-rewiring-genomes-20210512/
