

Developing a secure, un-hackable quantum network




A method of securely communicating between multiple quantum devices has been developed by a team of scientists at UCL, Oxford and Edinburgh, bringing a large-scale, un-hackable quantum network closer to reality.

To date, communicating via quantum networks has only been possible between two devices of known provenance that have been built securely.

With the EU and UK committing €1 billion and £270 million* respectively into funding quantum technology research, a race is on to develop the first truly secure, large-scale network between cities that works for any quantum device.

“We’re in a technology arms race of sorts. When quantum computers are fully developed, they will break much of today’s encryption whose security is only based on mathematical assumptions. To pre-emptively solve this, we are working on new ways of communicating through large networks that don’t rely on assumptions, but instead use the quantum laws of physics to ensure security, which would need to be broken to hack the encryption,” explained lead author, Dr Ciarán Lee (UCL Physics & Astronomy).

Published in Physical Review Letters and funded by the Engineering and Physical Sciences Research Council, the study by UCL, the University of Oxford and the University of Edinburgh scientists details a new way of communicating securely between three or more quantum devices, irrespective of who built them.

“Our approach works for a general network where you don’t need to trust the manufacturer of the device or network for secrecy to be guaranteed. Our method works by using the network’s structure to limit what an eavesdropper can learn,” said Dr Matty Hoban (University of Oxford, previously University of Edinburgh).

The approach bridges the gap between the theoretical promise of perfect security guaranteed by the laws of quantum physics and the practical implementation of such security in large networks.

The method tests the security of the quantum devices before any communication takes place across the network. It does this by checking whether the correlations between devices in the network are intrinsically quantum and could not have been created by any other means.

These correlations are used to establish secret keys which can be used to encrypt any desired communication. Security is ensured by the unique property that quantum correlations can only be shared between the devices that created them, ensuring no hacker can ever come to learn the key.
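Once a shared secret key is in hand, the encryption step itself is classical. A minimal sketch of the idea, with a locally generated random key standing in for the quantum-distributed one:

```python
import secrets

# A one-time pad: XOR the message with a secret key of equal length.
# The key here is generated locally as a stand-in for the key the
# quantum network would distribute; a truly random key, kept secret
# and used once, makes the ciphertext information-theoretically secure.
message = b"meet at dawn"
key = secrets.token_bytes(len(message))

ciphertext = bytes(m ^ k for m, k in zip(message, key))
recovered = bytes(c ^ k for c, k in zip(ciphertext, key))

print(recovered == message)  # True
```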

The team used two methods – machine learning and causal inference – to develop the test for the un-hackable communications system. This approach distributes secret keys in a way that cannot be effectively intercepted, because quantum mechanics allows their secrecy to be tested and guaranteed.
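The check for "intrinsically quantum" correlations is in the spirit of a Bell-type test. A toy sketch (not the authors' network test) of the standard CHSH quantity, evaluated on the ideal quantum correlators for a shared entangled pair:

```python
import math

def chsh_value(E):
    """CHSH combination of the four correlators E[(a, b)]."""
    return abs(E[(0, 0)] + E[(0, 1)] + E[(1, 0)] - E[(1, 1)])

# Ideal quantum correlators for optimally chosen measurement angles
# on a shared singlet state: E(a, b) = -cos(angle_a - angle_b).
angles_a = [0.0, math.pi / 2]
angles_b = [math.pi / 4, -math.pi / 4]
E = {(i, j): -math.cos(angles_a[i] - angles_b[j])
     for i in range(2) for j in range(2)}

S = chsh_value(E)
# Any classical (local hidden variable) model obeys S <= 2; reaching
# the quantum value 2*sqrt(2) certifies genuinely quantum correlations.
print(S)  # ≈ 2.828
```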

“Our work can be thought of as creating the software that will run on hardware currently being built to realise the potential of quantum communications. In future work, we’d like to work with partners in the UK national quantum technologies programme to develop this further. We hope to trial our quantum network approach over the next few years,” concluded Dr Lee.

The team acknowledge that an un-hackable network could be abused in the same way that current networks are, but highlight that there is also a clear benefit to ensuring privacy.

This story originally appeared on the UCL website.





Can Machines Control Our Brains?




The raging bull locked its legs mid-charge. Digging its hooves into the ground, the beast came to a halt just before it would have gored the man. Not a matador, the man in the bullring standing eye-to-eye with the panting toro was the Spanish neuroscientist José Manuel Rodríguez Delgado, in a death-defying public demonstration in 1963 of how violent behavior could be squelched by a radio-controlled brain implant. Delgado had pressed a switch on a hand-held radio transmitter to energize electrodes implanted in the bull’s brain. Remote-controlled brain implants, Delgado argued, could suppress deviant behavior to achieve a “psychocivilized society.”

Unsurprisingly, the prospect of manipulating the human mind with brain implants and radio beams ignited public fears that curtailed this line of research for decades. But now there is a resurgence using even more advanced technology. Laser beams, ultrasound, electromagnetic pulses, mild alternating and direct current stimulation and other methods now allow access to, and manipulation of, electrical activity in the brain with far more sophistication than the needlelike electrodes Delgado stabbed into brains.

Billionaires Elon Musk of Tesla and Mark Zuckerberg of Facebook are leading the charge, pouring millions of dollars into developing brain-computer interface (BCI) technology. Musk says he wants to provide a “superintelligence layer” in the human brain to help protect us from artificial intelligence, and Zuckerberg reportedly wants users to upload their thoughts and emotions over the internet without the bother of typing. But fact and fiction are easily blurred in these deliberations. How does this technology actually work, and what is it capable of?

Already in 1964, Delgado’s technology could induce a surprising amount of control in human brains. Simply by energizing implanted electrodes, he could quell a raging brain storm mid-seizure, or suppress mental illnesses in an instant — but he could also command a person’s limbs to move, overwhelm a person with sexual ecstasy or plunge them into deep, suicidal despair. No wonder people got nervous about this technology.

Even recently, widely respected neuroscientists have sounded the alarm. A cautionary editorial, published in 2017 in Nature, opens with a scene that could have been found in an episode of Black Mirror, a show whose plots often center on mind control technology. The neuroscientists describe a scenario in which a brain implant that enables a paralyzed man to control a prosthetic arm suddenly goes haywire because the man feels frustrated, and it attacks an assistant with its steely claws.

I find this Frankenstein scenario ridiculous. Electrodes placed in the motor cortex to activate prosthetic limb movement do not access emotion. Moreover, no matter what you may read in sensational articles, neuroscientists do not yet understand how thoughts, emotions and intentions are coded in the pattern of neural impulses zipping through neural circuits: The biological obstacles of mind hacking are far greater than the technological challenges.

Today’s BCI devices work by analyzing data, in much the same way that Amazon tries to predict what book you might want next. Computers monitoring streams of electrical activity, picked up by a brain implant or a removable electrode cap, learn to recognize how the traffic pattern changes when a person makes an intended limb movement.

For example, the ongoing oscillations in electrical activity surging through the cerebral cortex, known as brain waves, are suddenly suppressed when a person moves a limb — or even thinks about moving it. This phenomenon reflects an abrupt change in communication among thousands of neurons, like the sudden hush in a restaurant after a server drops a glass: You cannot understand conversations between individual diners, but the collective hush is a clear signal. Scientists can use the interruption in electrical traffic in the cerebral cortex to trigger a computer to activate a motor in a prosthetic arm, or to click a virtual mouse on a computer screen. But even when it is possible to tap into an individual neuron with microelectrodes, neuroscientists can’t decode its neuronal firing as if it were so much computer code; they have to use machine learning to recognize patterns in the neuron’s electrical activity that correlate with a behavioral response. Such BCIs operate by correlation, much the way we depress the clutch in a car by listening to the sound of the engine.
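The "sudden hush" described above, in which ongoing oscillations are suppressed when a movement is intended, can be sketched numerically. A toy example (synthetic signals, not real EEG) compares power in the 8–12 Hz mu band between a baseline window and a "movement" window, and triggers when the band power collapses:

```python
import numpy as np

def band_power(signal, fs, lo, hi):
    """Mean power in the [lo, hi] Hz band via a simple FFT periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= lo) & (freqs <= hi)
    return power[mask].mean()

fs = 250  # sample rate in Hz (a typical EEG rate, assumed here)
t = np.arange(fs) / fs
rng = np.random.default_rng(0)

# Baseline window: a strong 10 Hz mu rhythm plus noise.
baseline = np.sin(2 * np.pi * 10 * t) + 0.2 * rng.standard_normal(fs)
# "Movement" window: the mu rhythm is suppressed (desynchronization).
movement = 0.2 * np.sin(2 * np.pi * 10 * t) + 0.2 * rng.standard_normal(fs)

p_base = band_power(baseline, fs, 8, 12)
p_move = band_power(movement, fs, 8, 12)

# Trigger the prosthetic when mu-band power drops well below baseline.
trigger = p_move < 0.5 * p_base
print(bool(trigger))  # True
```

Real BCIs learn this threshold (and far richer patterns) from training data rather than hard-coding it, which is the machine-learning step the paragraph describes.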

And just as race car drivers shift gears with precision, this correlational approach of interfacing human and machine can be very effective. Prosthetic devices that match the brain’s electrical activity with sensorimotor function can prove life-changing, restoring some lost function and independence to people who are paralyzed or who suffer other neurological losses.

But there’s more than fancy technology at work in BCI devices — the brain itself plays a huge role. Through a prolonged trial-and-error process the brain is somehow rewarded by seeing the intended response occur, and over time it learns to generate the electrical signal it knows the computer will recognize. All of this takes place beneath the level of consciousness, and neuroscientists don’t really know how the brain accomplishes it. It’s a pretty far cry from the sensational fears and promises that accompany the specter of mind control.

For the sake of argument, however, let’s imagine that we do learn how information is encoded in neuronal firing patterns. Then, in true Black Mirror fashion, let’s say we want to insert a foreign thought via brain implant. We still have to overcome many obstacles, according to the neuroscientist Timothy Buschman, who is actively pursuing research using brain recording and stimulation. “I will know which brain region to target, but there is no way I will know which neuron,” he told me in his lab at Princeton University. “Even if I could target the same neuron in every individual, what that neuron does will be different in different individuals’ brains.”

No matter how much industrial power someone like Musk brings to the problem, Buschman explained mathematically that biology, not technology, is the real bottleneck. Even if we oversimplify neural coding by assigning a neuron to be either “on” or “off,” in a network of only 300 neurons we still have 2^300 possible states — more than all the atoms in the known universe. “It is an impossible number of states,” Buschman said.

Ponder for a minute that the human brain has about 85 billion neurons.
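Buschman's bound is easy to verify with integer arithmetic; 10^80 is a commonly cited rough estimate for the number of atoms in the observable universe:

```python
# 300 binary neurons give 2**300 joint on/off states; compare with
# ~10**80 atoms in the observable universe (a rough standard estimate).
n_states = 2 ** 300
atoms = 10 ** 80

print(n_states > atoms)    # True
print(len(str(n_states)))  # 2**300 has 91 decimal digits
```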

But what about Zuckerberg’s plan to let users upload thoughts and emotions? Reading information out of the brain is more feasible than downloading information into it, after all. Indeed, Marcel Just and his colleagues at Carnegie Mellon University are now using fMRI to reveal a person’s private thoughts, in an effort to understand how the brain processes, stores and recalls information. They can tell what number a person is thinking of, what emotion they may be feeling or whether they are having thoughts of suicide. This brain-machine mentalism works by asking people to have a specific thought or cognitive experience over and over while inside an fMRI machine. Since cognition and emotion activate specific sets of networks throughout the brain, machine learning can eventually identify which constellations of brain activity patterns correlate with specific thoughts or emotions. Remarkably, the brainwide activity patterns revealing private thoughts are universal, regardless of a person’s native language.

A surprising finding from this research is that the brain does not store information the way we might think — as discrete items categorized logically in a database. Instead, information is encoded as integrated concepts that encapsulate all the sensations, emotions, relevant experiences and significance associated with an item. The words “spaghetti” and “apple” are logically similar in being food items, but each one has a different feel that activates a unique constellation of brain regions. This explains how Just can use the very slow method of fMRI, which takes many minutes to acquire brain images, to determine what sentence a person is reading. The brain does not decode and store written information word by word, the way Google Translate does: It encodes the meaning of the sentence in its entirety.

This technological mind reading might seem scary. “Nothing is more private than a thought,” Just said. But such fears are simply not grounded in fact. Similar to the BCI used to operate a prosthetic device, this mind reading requires intense cooperation and effort by the participant. People can easily defeat it, Just’s colleague Vladimir Cherkassky explained. “We need the person to think about an apple six times. So all they have to do is think about a red apple the first time, a green apple the next time, maybe a Macintosh computer, and we are done.”

Critics often cite ethical concerns with BCI: loss of privacy, identity, agency and consent. They worry about abuses to enhance performance or the destruction of free will, and they raise concerns over disparities within society that reduce access to the technology. And, yes, as with any technology it’s possible that bad actors can use it to cause deliberate harm. These are all good points, worth consideration as the technology improves. But it’s also worth remembering that we already face and accept such concerns from other biomedical advances, such as DNA sequencing, anesthesia and neurosurgery.

To me, the harm BCI might someday do is outweighed by the good it’s already doing. Current methods of treating neurological and psychological disorders with chemicals or surgery are woefully inadequate. Interfacing with the brain through the precise application of electricity and diagnosing disorders by monitoring the brain’s electrical activity shows great promise. When Nathan Copeland shook President Obama’s hand with a robotic arm controlled by electrodes implanted in his motor cortex, he also felt the grip of a handshake through sensors in the prosthetic fingers that stimulated electrodes in his sensory cortex. BCI can also restore vision and hearing, generate synthetic speech, and help treat disorders like obsessive-compulsive disorder, addiction and Parkinson’s disease.

It is natural to fear what we do not understand. For most of us, fear of mind control is an abstraction, but Copeland faced the reality of letting scientists open his skull and implant electrodes in his brain. When I met him in 2018, Copeland’s brain implants had been removed because the electrodes have a limited lifetime. “Looking back at it,” he said, “I would do it as many times as they would let me.”




Contextual Subspace Variational Quantum Eigensolver




William M. Kirby¹, Andrew Tranter¹﹐², and Peter J. Love¹﹐³

¹Department of Physics and Astronomy, Tufts University, Medford, MA 02155
²Cambridge Quantum Computing, 9a Bridge Street, Cambridge, CB2 1UB, United Kingdom
³Computational Science Initiative, Brookhaven National Laboratory, Upton, NY 11973



We describe the contextual subspace variational quantum eigensolver (CS-VQE), a hybrid quantum-classical algorithm for approximating the ground state energy of a Hamiltonian. The approximation to the ground state energy is obtained as the sum of two contributions. The first contribution comes from a noncontextual approximation to the Hamiltonian, and is computed classically. The second contribution is obtained by using the variational quantum eigensolver (VQE) technique to compute a contextual correction on a quantum processor. In general the VQE computation of the contextual correction uses fewer qubits and measurements than the VQE computation of the original problem. Varying the number of qubits used for the contextual correction adjusts the quality of the approximation. We simulate CS-VQE on tapered Hamiltonians for small molecules, and find that the number of qubits required to reach chemical accuracy can be reduced by more than a factor of two. The number of terms required to compute the contextual correction can be reduced by more than a factor of ten, without the use of other measurement reduction schemes. This indicates that CS-VQE is a promising approach for eigenvalue computations on noisy intermediate-scale quantum devices.

The variational quantum eigensolver (VQE) is a quantum simulation algorithm that estimates the ground state energy of a system, given its Hamiltonian. The quantum computer is used to prepare a guess or “ansatz” for the ground state, and to evaluate its energy. A classical computer is then used to vary the ansatz, and this whole process is repeated, ideally until the energy approaches its global minimum, the ground state energy.
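The loop just described can be sketched end to end, with a classical state-vector simulation standing in for the quantum processor. The Hamiltonian below is an illustrative single-qubit toy, not one of the paper's molecular Hamiltonians:

```python
import numpy as np
from scipy.optimize import minimize

# Toy single-qubit Hamiltonian H = Z + 0.5 * X (an illustrative choice).
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
H = Z + 0.5 * X

def ansatz(theta):
    """Parametrized guess state cos(t/2)|0> + sin(t/2)|1> (a Ry rotation)."""
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

def energy(theta):
    """Expectation value <psi|H|psi> -- the quantity a quantum
    processor would estimate by repeated measurement."""
    t = np.ravel(theta)[0]  # minimize passes a 1-element array
    psi = ansatz(t)
    return float(psi @ H @ psi)

# The classical outer loop varies the ansatz parameter until the
# energy stops improving.
result = minimize(energy, x0=np.array([0.1]), method="Nelder-Mead")
exact = np.linalg.eigvalsh(H).min()

print(result.fun, exact)  # both ≈ -1.118
```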
Contextuality is a feature of quantum mechanics that does not appear in classical physics. A system is contextual when one cannot model its observables as having preexisting values before measurement. Applied to VQE, contextuality is a property that the set of measurements involved in evaluating energies may or may not possess. When the set of measurements is noncontextual, it can be described by a classical statistical model, but when it is contextual, such models are generally ruled out.
In this work, we showed how to take a VQE instance and partition it into a noncontextual part and a remaining part that in general is contextual. The noncontextual part can be simulated classically, and the contextual part, which we can think of as encoding the “intrinsically quantum part” of the original problem, is simulated using VQE. We call this algorithm contextual subspace VQE or CS-VQE, and it is an example of a genuinely hybrid quantum-classical algorithm where part of the solution is obtained using a classical computer and part is obtained using a quantum computer.
Since the contextual part is only a subset of the original problem, the VQE algorithm it requires uses fewer qubits and measurements than the original problem, in general. We can vary the size of the contextual part to trade off use of more qubits and measurements for better accuracy in the overall approximation. We tested this for electronic structure Hamiltonians of various atoms and small molecules: in some cases we reached useful accuracy using fewer than half as many qubits as standard VQE, and in nearly all cases at least one qubit was saved. In summary, by using contextuality to isolate the “intrinsically quantum part” of a VQE instance, we can save quantum resources while still taking advantage of those that are available on our quantum computer.
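The additive structure, a classically computed piece plus a quantum-computed correction, can be illustrated on a toy two-qubit Hamiltonian. This is only a cartoon of the split (it uses a diagonal part, not the paper's noncontextual construction):

```python
import numpy as np

I = np.eye(2)
Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])

def kron(*ops):
    """Tensor product of a list of single-qubit operators."""
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

# Split an illustrative Hamiltonian into a classically easy diagonal
# part and a small off-diagonal remainder.
H_diag = -kron(Z, I) - kron(I, Z) + 0.3 * kron(Z, Z)
H_rest = 0.2 * kron(X, X)
H = H_diag + H_rest

# Classical contribution: the diagonal part is minimized by simply
# scanning the computational basis states.
e_classical = np.diag(H_diag).min()

# Exact ground energy of the full problem, for comparison; in CS-VQE
# the remaining gap is what the quantum correction closes.
e_exact = np.linalg.eigvalsh(H).min()
print(e_classical, e_exact)
```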





Scientists Catch Jumping Genes Rewiring Genomes




Roughly 500 million years ago, something that would forever change the course of eukaryotic development was brewing in the genome of some lucky organism: a gene called Pax6. The gene is thought to have orchestrated the formation of a primitive visual system, and in organisms today, it initiates a genetic cascade that recruits more than 2,000 genes to build different parts of the eye.

Pax6 is only one of thousands of genes encoding transcription factors that each have the powerful ability to amplify and silence thousands of other genes. While geneticists have made leaps in understanding how genes with relatively simple, direct functions could have evolved, explanations for transcription factors have largely eluded scientists. The problem is that the success of a transcription factor depends on how usefully it targets huge numbers of sites throughout the genome simultaneously; it’s hard to picture how natural selection enables that to happen. The answer may hold the key to understanding how complex evolutionary novelties such as eyes arise, said Cédric Feschotte, a molecular biologist at Cornell University.

For more than a decade, Feschotte has pointed to transposons as the ultimate innovators in eukaryotic genomes. Transposons are genetic elements that can copy themselves and insert those copies throughout the genome using an enzyme they encode, called a transposase. Feschotte may have finally found the smoking gun he has been looking for: As he and his colleagues recently reported in Science, these jumping genes have fused with other genes nearly 100 times in tetrapods over the past 300 million years, and many of the resulting genetic mashups are likely to encode transcription factors.

The study provides a plausible explanation for how so-called master regulators like Pax6 could have been born, said Rachel Cosby, the first author of the new study, who was a doctoral student in Feschotte’s lab and is now a postdoc at the National Institutes of Health. Although scientists had theorized that Pax6 arose from a transposon hundreds of millions of years ago, mutations since that time have obscured clues about how it formed. “We could see that it was probably derived from a transposon, but it happened so long ago that we missed the window to see how it evolved,” she said.

David Adelson, chair of bioinformatics and computational genetics at the University of Adelaide in Australia, who was not involved with the study, said, “This study provides a good mechanistic understanding of how these new genes can form, and it squarely implicates the transposon activity itself as the cause.”

Scientists have long known that transposons can fuse with established genes because they have seen the unique genetic signatures of transposons in a handful of them, but the precise mechanism behind these unlikely fusion events has largely been unknown. By analyzing genes with transposon signatures from nearly 600 tetrapods, the researchers found 106 distinct genes that may have fused with a transposon. The human genome carries 44 genes likely to have been born this way.

The structure of genes in eukaryotes is complicated, because their blueprints for making proteins are broken up by introns. These noncoding sequences are transcribed, but they get snipped out of the messenger RNA transcripts before translation into protein occurs. But according to Feschotte’s new study, a transposon can occasionally hop into an intron and change what gets translated. In some of these cases, the protein made by the fusion gene is a mashup of the original product and the transposon’s transposase.

Once the fusion protein is created, “it has a ready-made set of potential binding sites scattered all over the genome,” Adelson said, because its transposase part is still drawn to transposons. The more potential binding sites for the fusion protein, the higher the likelihood that it changes gene expression in the cell, potentially giving rise to new functions.

“These aren’t just new genes, but entire new architectures for proteins,” Feschotte said.

Cosby described the 106 fusion genes described in the study as the “tiniest tip of the iceberg.” Adelson agreed and explained why: Events that randomly create fusion genes for functional, non-harmful proteins rely on a series of coincidences and must be exceedingly rare; for the fusion genes to spread throughout a population and withstand the test of time, nature must also positively select for them in some way. For the researchers to have found the examples described in the study so readily, transposons must surely cause fusion events much more often, he said.

“All of these steps are very unlikely to happen, but this is how evolution works,” Feschotte said. “It’s very quirky, opportunistic and very unlikely in the end, yet you see it happen over and over again on the timescales of hundreds of millions of years.”

To test whether the fusion genes acted as transcription factors, Cosby and her colleagues homed in on one that evolved in bats 25 million to 45 million years ago — a blink of an eye in evolutionary time. When they used CRISPR to delete it from the bat genome, the changes were striking: The removal dysregulated hundreds of genes. As soon as they restored it, normal gene activity resumed.

To Adelson, this shows that Cosby and her co-authors practically “caught one of these fusion events in the act.” He added, “It’s especially surprising because you wouldn’t expect a new transcription factor to cause wholesale rewiring of transcriptional networks if it had been acquired relatively recently.”

Although the researchers didn’t determine the function of the other fusion proteins definitively, the genetic hallmarks of transcription factors are there: Around a third of the fusion proteins contain a part called KRAB that is associated with repressing DNA transcription in animals. Why transposases tended to fuse with KRAB-encoding genes is a mystery, Feschotte said.

Transposons comprise a hefty chunk of eukaryotic DNA, yet organisms take extreme measures to carefully regulate their activity and prevent the havoc caused by problems such as genomic instability and harmful mutations. These dangers made Adelson wonder if fusion genes sometimes endanger orderly gene regulation. “Not only are you perturbing one thing, but you’re perturbing this whole cascade of things,” he said. “How is it that you can change expression of all these things and not have a three-headed bat?” Cosby, however, thinks it’s unlikely that a fusion gene leading to harmful morphogenic changes would readily propagate through a population.

Damon Lisch, a plant geneticist at Purdue University who studies transposable elements and was not involved with the study, said he hopes this study pushes back against a widespread but misguided notion that transposons are “junk DNA.” Transposable elements generate tremendous amounts of diversity and have been implicated in the evolution of the placenta and the adaptive immune system, he explained. “These are not junk — they’re living little creatures in your genome that are under very active selection over long periods of time, and what that means is that they evolve new functions to stay in your genome,” he said.

Though this study highlights the mechanism underlying transposase fusion genes, the vast majority of new genetic material is thought to form through genetic duplication, in which genes are accidentally copied and the extras diverge through mutation. But a large quantity of genetic material does not mean that new protein functions will be significant, said Cosby, who is continuing to investigate the function of the fusion proteins.

“Evolution is the ultimate tinkerer and ultimate opportunist,” said David Schatz, a molecular geneticist at Yale University who was not involved with the study. “If you give evolution a tool, it may not use it right away, but sooner or later it will take advantage of it.”



New Black Hole Math Closes Cosmic Blind Spot




Last year, just for the heck of it, Scott Field and Gaurav Khanna tried something that wasn’t supposed to work. The fact that it actually worked quite well is already starting to make some ripples.

Field and Khanna are researchers who try to figure out what black hole collisions should look like. These violent events don’t produce flashes of light, but rather the faint vibrations of gravitational waves, the quivers of space-time itself. But observing them is not as simple as sitting back and waiting for space to ring like a bell. To pick out such signals, researchers must constantly compare the data from gravitational wave detectors to the output of various mathematical models — calculations that reveal the potential signatures of a black hole collision. Without reliable models, astronomers wouldn’t have a clue what to look for.

The trouble is, the most trustworthy models come from Einstein’s general theory of relativity, which is described by 10 interlinked equations that are notoriously difficult to solve. To chronicle the complex interactions between colliding black holes, you can’t just use a pen and paper. The first so-called numerical relativity solutions to the Einstein equations for the case of a black hole merger were calculated only in 2005 — after decades of attempts. They required a supercomputer running on and off for two months.

A gravitational wave observatory like LIGO needs to have a large number of solutions to draw upon. In a perfect world, physicists could just run their model for every possible merger permutation — a black hole with a certain mass and spin encountering another with a different mass and spin — and compare those results with what the detector sees. But the calculations take a long time. “If you give me a big enough computer and enough time, you can model almost anything,” said Scott Hughes, a physicist at the Massachusetts Institute of Technology. “But there’s a practical issue. The amount of computer time is really exorbitant” — weeks or months on a supercomputer. And if those black holes are unevenly sized? The calculations would take so long that researchers consider the task practically impossible. Because of that, physicists are effectively unable to spot collisions between black holes with mass ratios greater than 10-to-1.

Which is one reason why Field and Khanna’s new work is so exciting. Field, a mathematician at the University of Massachusetts, Dartmouth, and Khanna, a physicist at the University of Rhode Island, have made an assumption that simplifies matters greatly: They treat the smaller black hole as a “point particle” — a speck of dust, an object with mass but zero radius and no event horizon.

“It’s like two ships passing in the ocean — one a rowboat, the other a cruise liner,” Field explained. “You wouldn’t expect the rowboat to affect the cruise liner’s trajectory in any way. We’re saying the small ship, the rowboat, can be completely ignored in this transaction.”

They expected it to work when the smaller black hole’s mass really was like a rowboat’s compared to a cruise ship’s. “If the mass ratio is on the order of 10,000-to-1, we feel very confident in making that approximation,” Khanna said.

But in research published last year, he and Field, along with graduate student Nur Rifat and Cornell physicist Vijay Varma, decided to test their model at mass ratios all the way down to 3-to-1 — a ratio so low it had never been tried, mainly because no one considered it worth trying. They found that even at this low extreme, their model agreed, to within about 1%, with results obtained by solving the full set of Einstein’s equations — an astounding level of accuracy.

“That’s when I really started to pay attention,” said Hughes. Their results at mass ratio 3, he added, were “pretty incredible.”

“It’s an important result,” said Niels Warburton, a physicist at University College Dublin who was not involved with the research.

The success of Field and Khanna’s model down to ratios of 3-to-1 gives researchers that much more confidence in using it at ratios of 10-to-1 and above. The hope is that this model, or one like it, could operate in regimes where numerical relativity cannot, allowing researchers to scrutinize a part of the universe that has been largely impenetrable.

How to Find a Black Hole

After black holes spiral toward each other and collide, the massive bodies create space-time-contorting disturbances — gravitational waves — that propagate through the universe. Eventually, some of these gravitational waves might reach Earth, where the LIGO and Virgo observatories wait. These enormous L-shaped detectors can sense the truly tiny stretching or squishing of space-time that these waves create — a shift 10,000 times smaller than the width of a proton.
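The scale of that sensitivity can be sanity-checked with back-of-the-envelope arithmetic. A rough sketch (the strain value and proton width below are illustrative assumptions, not figures from the article):

```python
# Order-of-magnitude check of the "smaller than a proton" claim.
arm_length_m = 4_000.0     # LIGO arm length: 4 km
strain = 2e-23             # illustrative strain near design sensitivity (assumption)
proton_width_m = 1.7e-15   # approximate proton diameter (assumption)

# A passing gravitational wave stretches or squeezes the arm by strain * length.
delta_L = strain * arm_length_m          # arm-length change in meters
ratio = proton_width_m / delta_L         # how many times smaller than a proton
print(f"arm-length change: {delta_L:.1e} m, roughly {ratio:,.0f}x smaller than a proton")
```

With these assumed numbers the arm-length change comes out around 10^-19 meters, on the order of ten thousand times smaller than a proton, consistent with the figure quoted above.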

The designers of these observatories have made herculean efforts to muffle stray noise, but when your signal is so weak, noise is a constant companion.

The first task in any gravitational wave detection is to try to extract a weak signal from that noise. Field compares the process to “driving in a car with a loud muffler and a lot of static on the radio, while thinking there might be a song, a faint melody, somewhere in that noisy background.”

Astronomers take the incoming stream of data and first ask if any of it is consistent with a previously modeled gravitational wave form. They might run this preliminary comparison against tens of thousands of signals stored in their “template bank.” Researchers can’t determine the exact black hole characteristics from this procedure. They’re just trying to figure out if there’s a song on the radio.
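That preliminary comparison is, at heart, matched filtering: sliding each stored waveform over the noisy data stream and asking where the correlation peaks. A minimal NumPy sketch of the idea (the toy chirp, noise level, and offset here are assumptions for illustration, not LIGO's actual pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 4096)

# Toy "chirp" template: a sinusoid whose frequency sweeps upward, 30 -> 110 Hz.
template = np.sin(2 * np.pi * (30 * t + 40 * t**2))

# Bury the template in noise at a known offset.
offset = 1500
data = rng.normal(0.0, 2.0, size=t.size + 2048)
data[offset:offset + template.size] += template

# Slide the template across the data, recording the correlation at each lag;
# the lag where the correlation peaks is the best-matching arrival time.
corr = np.correlate(data, template, mode="valid")
best = int(np.argmax(corr))
print(f"best-matching offset: {best} (true offset: {offset})")
```

Even though the signal is invisible to the eye at this noise level, the correlation peak recovers its location — the same principle, at vastly greater scale, behind the template-bank search.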

The next step is analogous to identifying the song and determining who sang it and what instruments are playing. Researchers run tens of millions of simulations to compare the observed signal, or wave form, with those produced by black holes of differing masses and spins. This is where researchers can really nail down the details. The frequency of the gravitational wave tells you the total mass of the system. How that frequency changes over time reveals the mass ratio, and thus the masses of the individual black holes. The rate of change in the frequency also provides information about a black hole’s spin. Finally, the amplitude (or height) of the detected wave can reveal how far the system is from our telescopes on Earth.
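The claim that the frequency and its rate of change encode the masses can be made concrete. At leading order, the frequency f and its time derivative determine the "chirp mass," a particular combination of the two masses; the sketch below inverts that standard relation (the input numbers are illustrative, not taken from any real event):

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # solar mass, kg

def chirp_mass(f, f_dot):
    """Leading-order (Newtonian) chirp mass from GW frequency and its derivative.

    Inverts f_dot = (96/5) * pi**(8/3) * (G*Mc/c**3)**(5/3) * f**(11/3).
    """
    return (c**3 / G) * (5.0 / 96.0 * math.pi**(-8.0 / 3.0)
                         * f**(-11.0 / 3.0) * f_dot) ** (3.0 / 5.0)

# Illustrative numbers: a 35 Hz signal whose frequency is climbing at 1 Hz/s.
mc = chirp_mass(35.0, 1.0)
print(f"chirp mass: {mc / M_sun:.1f} solar masses")
```

A faster frequency sweep at the same frequency implies a heavier system (the chirp mass scales as f_dot to the 3/5 power), which is exactly how "how that frequency changes over time" reveals the masses.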

If you have to do tens of millions of simulations, they’d better be quick. “To complete that in a day, you need to do each in about a millisecond,” said Rory Smith, an astronomer at Monash University and a member of the LIGO collaboration. Yet the time needed to run a single numerical relativity simulation — one that faithfully grinds its way through the Einstein equations — is measured in days, weeks or even months.

To speed up this process, researchers typically start with the results of full supercomputer simulations — of which several thousand have been carried out so far. They then use machine learning strategies to interpolate their data, Smith said, “filling in the gaps and mapping out the full space of possible simulations.”
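The surrogate idea can be illustrated in miniature: run the expensive model at a sparse set of parameter values, fit a cheap interpolant to the outputs, then evaluate it anywhere in between in microseconds. A toy sketch with a smooth stand-in for the expensive simulation (the function, parameter range, and polynomial degree are assumptions chosen for illustration only):

```python
import numpy as np

def expensive_simulation(q):
    """Stand-in for a full numerical-relativity run at mass ratio q."""
    return np.sin(q) / q  # smooth toy dependence on the parameter

# "Training" runs at a sparse set of mass ratios.
q_train = np.linspace(1.0, 10.0, 12)
y_train = expensive_simulation(q_train)

# Cheap surrogate: a polynomial fit to the training outputs.
surrogate = np.polynomial.Polynomial.fit(q_train, y_train, deg=8)

# Evaluate the surrogate at a held-out point and compare against the truth.
q_new = 4.3
print(f"surrogate: {surrogate(q_new):.5f}, truth: {expensive_simulation(q_new):.5f}")
```

As long as the underlying dependence on the parameters is smooth and the query stays inside the trained range, the surrogate reproduces the expensive model closely — which is why straying far from the baseline simulations, as the next paragraph notes, is where the approach breaks down.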

This “surrogate modeling” approach works well so long as the interpolated data doesn’t stray too far from the baseline simulations. But simulations for collisions with a high mass ratio are incredibly difficult. “The bigger the mass ratio, the more slowly the system of two inspiraling black holes evolves,” Warburton explained. For a typical low-mass-ratio computation, you need to look at 20 to 40 orbits before the black holes plunge together, he said. “For a mass ratio of 1,000, you need to look at 1,000 orbits, and that would just take too long” — on the order of years. This makes the task virtually “impossible, even if you have a supercomputer at your disposal,” Field said. “And without a revolutionary breakthrough, this won’t be possible in the near future either.”

Because of this, many of the full simulations used in surrogate modeling are between the mass ratios of 1 and 4; almost all are less than 10. When LIGO and Virgo detected a merger with a mass ratio of 9 in 2019, it was right at the limit of their sensitivity. More events like this haven’t been found, Khanna explained, because “we don’t have reliable models from supercomputers for mass ratios above 10. We haven’t been looking because we don’t have the templates.”

That’s where the model that Field and Khanna have developed comes in. They started with their own point-particle approximation model, which is specially designed to operate in the mass ratio range above 10, and then trained a surrogate model on it. The work opens up opportunities to detect the mergers of unevenly sized black holes.

What kinds of situations might create such mergers? Researchers aren’t sure, since this is a newly opening frontier of the universe. But there are a few possibilities.

First, astronomers can imagine an intermediate-mass black hole of perhaps 80 or 100 solar masses colliding with a smaller, stellar-size black hole of about 5 solar masses.

Another possibility would involve a collision between a garden-variety stellar black hole and a relatively puny black hole left over from the Big Bang — a “primordial” black hole. These could have as little as 1% of a solar mass, whereas the vast majority of black holes detected by LIGO so far weigh more than 10 solar masses.

Earlier this year, researchers at the Max Planck Institute for Gravitational Physics used Field and Khanna’s surrogate model to look through LIGO data for signs of gravitational waves emanating from mergers involving primordial black holes. And while they didn’t find any, they were able to place more precise limits on the possible abundance of this hypothetical class of black holes.

Furthermore, LISA, a planned space-based gravitational wave observatory, might one day be able to witness mergers between ordinary black holes and the supermassive varieties at the centers of galaxies — some with the mass of a billion or more suns. LISA’s future is uncertain; its earliest launch date is 2035, and its funding situation is still unclear. But if and when it does launch, we may see mergers at mass ratios above 1 million.

The Breaking Point

Some in the field, including Hughes, have described the new model’s success as “the unreasonable effectiveness of point particle approximations,” underscoring the fact that the model’s effectiveness at low mass ratios poses a genuine mystery. Why should researchers be able to ignore the critical details of the smaller black hole and still arrive at the right answer?

“It’s telling us something about the underlying physics,” Khanna said, though exactly what that is remains a source of curiosity. “We don’t have to concern ourselves with two objects surrounded by event horizons that can get distorted and interact with each other in strange ways.” But no one knows why.

In the absence of answers, Field and Khanna are trying to extend their model to more realistic situations. In a paper scheduled to be posted early this summer on a preprint server, the researchers give the larger black hole some spin, which is expected in an astrophysically realistic situation. Again, their model closely matches the findings of numerical relativity simulations at mass ratios down to 3.

They next plan to consider black holes that approach each other on elliptical rather than perfectly circular orbits. They’re also planning, in concert with Hughes, to introduce the notion of “misaligned orbits” — cases in which the black holes are askew relative to each other, orbiting in different geometric planes.

Finally, they’re hoping to learn from their model by trying to make it break. Could it work at a mass ratio of 2 or lower? Field and Khanna want to find out. “One gains confidence in an approximation method when one sees it fail,” said Richard Price, a physicist at MIT. “When you do an approximation that gets surprisingly good results, you wonder if you are somehow cheating, unconsciously using a result that you shouldn’t have access to.” If Field and Khanna push their model to the breaking point, he added, “then you’d really know that what you are doing is not cheating — that you just have an approximation that works better than you’d expect.”
