
Cambridge Quantum Computing working in partnership with NQIT in crucial UK Quantum Readiness Programme

NQIT, in partnership with Cambridge Quantum Computing (CQC), is pleased to announce the launch of the UK Quantum Readiness Programme (QRP). The National QRP has been designed to provide UK corporations and organisations with the knowledge required to understand the opportunities presented by quantum computing and associated quantum technologies. The principal strategic aim of the programme is to build on UK expertise and to sustain the UK’s global leadership in the quantum domain.

NQIT believes that the QRP will promote further activity in the quantum sector by educating future end-users and providing them with early insight into the commercial opportunities arising from quantum computing and quantum technologies.

The initial phase of the QRP comprises a series of seminars and detailed ‘tech-talks’ delivered by technical specialists from NQIT and CQC. It is aimed at large UK corporations and organisations, and at international corporations and organisations with a significant base in the UK. Crucially, it will be delivered free of charge to participating companies and organisations, at a venue of their choice, to minimise logistical costs. Further details of the programme will be provided in due course. The overall objective of the QRP is to ensure that as many UK-based businesses and large governmental organisations as possible are given the means to start answering the critical question: “How will quantum computing and quantum technologies affect our strategic and business plans, and how do we prepare adequately to benefit from the new paradigm in computing represented by quantum computers?”

To download our guide and find out how to contact the QRP team, please see our Quantum Readiness Programme page.

[Image: Quantum Readiness Programme brochure cover]

Notes

Cambridge Quantum Computing (CQC) is an independent company combining expertise in Quantum Information Processing, Artificial Intelligence, Optimisation and Pattern Recognition. Visit CQC’s website to learn more about their work in these areas.

Source: https://www.nqit.ox.ac.uk/news/cambridge-quantum-computing-working-partnership-nqit-crucial-uk-quantum-readiness-programme

Contextual Subspace Variational Quantum Eigensolver

William M. Kirby1, Andrew Tranter1,2, and Peter J. Love1,3

1Department of Physics and Astronomy, Tufts University, Medford, MA 02155
2Cambridge Quantum Computing, 9a Bridge Street Cambridge, CB2 1UB United Kingdom
3Computational Science Initiative, Brookhaven National Laboratory, Upton, NY 11973

Abstract

We describe the $\textit{contextual subspace variational quantum eigensolver}$ (CS-VQE), a hybrid quantum-classical algorithm for approximating the ground state energy of a Hamiltonian. The approximation to the ground state energy is obtained as the sum of two contributions. The first contribution comes from a noncontextual approximation to the Hamiltonian, and is computed classically. The second contribution is obtained by using the variational quantum eigensolver (VQE) technique to compute a contextual correction on a quantum processor. In general the VQE computation of the contextual correction uses fewer qubits and measurements than the VQE computation of the original problem. Varying the number of qubits used for the contextual correction adjusts the quality of the approximation. We simulate CS-VQE on tapered Hamiltonians for small molecules, and find that the number of qubits required to reach chemical accuracy can be reduced by more than a factor of two. The number of terms required to compute the contextual correction can be reduced by more than a factor of ten, without the use of other measurement reduction schemes. This indicates that CS-VQE is a promising approach for eigenvalue computations on noisy intermediate-scale quantum devices.

The variational quantum eigensolver (VQE) is a quantum simulation algorithm that estimates the ground state energy of a system, given its Hamiltonian. The quantum computer is used to prepare a guess or “ansatz” for the ground state, and to evaluate its energy. A classical computer is then used to vary the ansatz, and this whole process is repeated, ideally until the energy approaches its global minimum, the ground state energy.
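To make that loop concrete, here is a minimal sketch of VQE for a toy one-qubit Hamiltonian, with the energy evaluated in numpy rather than estimated on quantum hardware; the Hamiltonian coefficients, the single-parameter ansatz and the choice of classical optimizer are assumptions made only for this illustration.

```python
# Minimal VQE sketch for a toy one-qubit Hamiltonian. Illustrative only:
# a real VQE estimates the energy from measurements on a quantum processor.
import numpy as np
from scipy.optimize import minimize

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Toy Hamiltonian with made-up coefficients: H = 0.5 Z + 0.3 X
H = 0.5 * Z + 0.3 * X

def ansatz(theta):
    """One-parameter guess state |psi(theta)> = cos(theta/2)|0> + sin(theta/2)|1>."""
    return np.array([np.cos(theta / 2), np.sin(theta / 2)], dtype=complex)

def energy(params):
    """Expectation value <psi|H|psi> for the current ansatz parameter."""
    psi = ansatz(float(params[0]))
    return float(np.real(psi.conj() @ H @ psi))

# Classical outer loop: vary the ansatz parameter until the energy stops improving.
result = minimize(energy, x0=[0.1], method="Nelder-Mead")
print("VQE estimate:       ", result.fun)
print("exact ground energy:", np.linalg.eigvalsh(H)[0])
```
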
Contextuality is a feature of quantum mechanics that does not appear in classical physics. A system is contextual when one cannot model its observables as having preexisting values before measurement. Applied to VQE, contextuality is a property that the set of measurements involved in evaluating energies may or may not possess. When the set of measurements is noncontextual, it can be described by a classical statistical model, but when it is contextual, such models are generally ruled out.
In this work, we showed how to take a VQE instance and partition it into a noncontextual part and a remaining part that in general is contextual. The noncontextual part can be simulated classically, and the contextual part, which we can think of as encoding the “intrinsically quantum part” of the original problem, is simulated using VQE. We call this algorithm contextual subspace VQE or CS-VQE, and it is an example of a genuinely hybrid quantum-classical algorithm where part of the solution is obtained using a classical computer and part is obtained using a quantum computer.
Since the contextual part is only a subset of the original problem, the VQE algorithm it requires uses fewer qubits and measurements than the original problem, in general. We can vary the size of the contextual part to trade off use of more qubits and measurements for better accuracy in the overall approximation. We tested this for electronic structure Hamiltonians of various atoms and small molecules: in some cases we reached useful accuracy using fewer than half as many qubits as standard VQE, and in nearly all cases at least one qubit was saved. In summary, by using contextuality to isolate the “intrinsically quantum part” of a VQE instance, we can save quantum resources while still taking advantage of those that are available on our quantum computer.
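As a rough sketch of the partitioning step alone (not of the classical model used for the noncontextual part, nor of the quantum correction), the snippet below greedily grows a noncontextual subset of Pauli terms, using the commutation-transitivity test for noncontextual Pauli sets from the authors' earlier work; the greedy ordering, the helper names and the example terms are assumptions made for illustration.

```python
# Toy sketch of the classical/quantum split in CS-VQE: divide a Hamiltonian,
# given as Pauli strings, into a noncontextual subset and a contextual
# remainder. A set of Pauli terms is treated as noncontextual when commutation
# is transitive on the terms that do not commute with everything else.
# The greedy selection and the example terms are illustrative only.

def paulis_commute(p, q):
    """Two Pauli strings commute iff they differ (with neither factor being
    the identity) on an even number of qubit positions."""
    mismatches = sum(1 for a, b in zip(p, q) if a != "I" and b != "I" and a != b)
    return mismatches % 2 == 0

def is_noncontextual(terms):
    """Check the transitive-commutation criterion for a set of Pauli strings."""
    # Terms that commute with every other term are set aside.
    universally_commuting = [
        p for p in terms if all(paulis_commute(p, q) for q in terms if q != p)
    ]
    rest = [p for p in terms if p not in universally_commuting]
    # Noncontextual iff commutation restricted to `rest` is transitive.
    for a in rest:
        for b in rest:
            for c in rest:
                if paulis_commute(a, b) and paulis_commute(b, c) and not paulis_commute(a, c):
                    return False
    return True

def greedy_split(hamiltonian_terms):
    """Greedily grow a noncontextual subset; whatever is left over is the
    contextual remainder that would be handled by VQE."""
    noncontextual = []
    for term in hamiltonian_terms:
        if is_noncontextual(noncontextual + [term]):
            noncontextual.append(term)
    contextual = [t for t in hamiltonian_terms if t not in noncontextual]
    return noncontextual, contextual

# Example: a handful of two-qubit Pauli terms (coefficients omitted).
terms = ["ZI", "IZ", "ZZ", "XX", "YY", "XI"]
noncontextual, contextual = greedy_split(terms)
print("noncontextual part:  ", noncontextual)
print("contextual remainder:", contextual)
```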

Source: https://quantum-journal.org/papers/q-2021-05-14-456/

Scientists Catch Jumping Genes Rewiring Genomes

Roughly 500 million years ago, something that would forever change the course of eukaryotic development was brewing in the genome of some lucky organism: a gene called Pax6. The gene is thought to have orchestrated the formation of a primitive visual system, and in organisms today, it initiates a genetic cascade that recruits more than 2,000 genes to build different parts of the eye.

Pax6 is only one of thousands of genes encoding transcription factors that each have the powerful ability to amplify and silence thousands of other genes. While geneticists have made leaps in understanding how genes with relatively simple, direct functions could have evolved, explanations for transcription factors have largely eluded scientists. The problem is that the success of a transcription factor depends on how usefully it targets huge numbers of sites throughout the genome simultaneously; it’s hard to picture how natural selection enables that to happen. The answer may hold the key to understanding how complex evolutionary novelties such as eyes arise, said Cédric Feschotte, a molecular biologist at Cornell University.

For more than a decade, Feschotte has pointed to transposons as the ultimate innovators in eukaryotic genomes. Transposons are genetic elements that can copy themselves and insert those copies throughout the genome using an enzyme they encode, called a transposase. Feschotte may have finally found the smoking gun he has been looking for: As he and his colleagues recently reported in Science, these jumping genes have fused with other genes nearly 100 times in tetrapods over the past 300 million years, and many of the resulting genetic mashups are likely to encode transcription factors.

The study provides a plausible explanation for how so-called master regulators like Pax6 could have been born, said Rachel Cosby, the first author of the new study, who was a doctoral student in Feschotte’s lab and is now a postdoc at the National Institutes of Health. Although scientists had theorized that Pax6 arose from a transposon hundreds of millions of years ago, mutations since that time have obscured clues about how it formed. “We could see that it was probably derived from a transposon, but it happened so long ago that we missed the window to see how it evolved,” she said.

David Adelson, chair of bioinformatics and computational genetics at the University of Adelaide in Australia, who was not involved with the study, said, “This study provides a good mechanistic understanding of how these new genes can form, and it squarely implicates the transposon activity itself as the cause.”

Scientists have long known that transposons can fuse with established genes because they have seen the unique genetic signatures of transposons in a handful of them, but the precise mechanism behind these unlikely fusion events has largely been unknown. By analyzing genes with transposon signatures from nearly 600 tetrapods, the researchers found 106 distinct genes that may have fused with a transposon. The human genome carries 44 genes likely to have been born this way.

The structure of genes in eukaryotes is complicated, because their blueprints for making proteins are broken up by introns. These noncoding sequences are transcribed, but they get snipped out of the messenger RNA transcripts before translation into protein occurs. But according to Feschotte’s new study, a transposon can occasionally hop into an intron and change what gets translated. In some of these cases, the protein made by the fusion gene is a mashup of the original gene’s product and the transposon’s transposase.

Once the fusion protein is created, “it has a ready-made set of potential binding sites scattered all over the genome,” Adelson said, because its transposase part is still drawn to transposons. The more potential binding sites for the fusion protein, the higher the likelihood that it changes gene expression in the cell, potentially giving rise to new functions.

“These aren’t just new genes, but entire new architectures for proteins,” Feschotte said.

Cosby described the 106 fusion genes described in the study as the “tiniest tip of the iceberg.” Adelson agreed and explained why: Events that randomly create fusion genes for functional, non-harmful proteins rely on a series of coincidences and must be exceedingly rare; for the fusion genes to spread throughout a population and withstand the test of time, nature must also positively select for them in some way. For the researchers to have found the examples described in the study so readily, transposons must surely cause fusion events much more often, he said.

“All of these steps are very unlikely to happen, but this is how evolution works,” Feschotte said. “It’s very quirky, opportunistic and very unlikely in the end, yet you see it happen over and over again on the timescales of hundreds of millions of years.”

To test whether the fusion genes acted as transcription factors, Cosby and her colleagues homed in on one that evolved in bats 25 million to 45 million years ago — a blink of an eye in evolutionary time. When they used CRISPR to delete it from the bat genome, the changes were striking: The removal dysregulated hundreds of genes. As soon as they restored it, normal gene activity resumed.

To Adelson, this shows that Cosby and her co-authors practically “caught one of these fusion events in the act.” He added, “It’s especially surprising because you wouldn’t expect a new transcription factor to cause wholesale rewiring of transcriptional networks if it had been acquired relatively recently.”

Although the researchers didn’t determine the function of the other fusion proteins definitively, the genetic hallmarks of transcription factors are there: Around a third of the fusion proteins contain a part called KRAB that is associated with repressing DNA transcription in animals. Why transposases tended to fuse with KRAB-encoding genes is a mystery, Feschotte said.

Transposons comprise a hefty chunk of eukaryotic DNA, yet organisms take extreme measures to carefully regulate their activity and prevent the havoc caused by problems such as genomic instability and harmful mutations. These dangers made Adelson wonder if fusion genes sometimes endanger orderly gene regulation. “Not only are you perturbing one thing, but you’re perturbing this whole cascade of things,” he said. “How is it that you can change expression of all these things and not have a three-headed bat?” Cosby, however, thinks it’s unlikely that a fusion gene leading to harmful morphogenic changes would readily propagate through a population.

Damon Lisch, a plant geneticist at Purdue University who studies transposable elements and was not involved with the study, said he hopes this study pushes back against a widespread but misguided notion that transposons are “junk DNA.” Transposable elements generate tremendous amounts of diversity and have been implicated in the evolution of the placenta and the adaptive immune system, he explained. “These are not junk — they’re living little creatures in your genome that are under very active selection over long periods of time, and what that means is that they evolve new functions to stay in your genome,” he said.

Though this study highlights the mechanism underlying transposase fusion genes, the vast majority of new genetic material is thought to form through genetic duplication, in which genes are accidentally copied and the extras diverge through mutation. But a large quantity of genetic material does not mean that new protein functions will be significant, said Cosby, who is continuing to investigate the function of the fusion proteins.

“Evolution is the ultimate tinkerer and ultimate opportunist,” said David Schatz, a molecular geneticist at Yale University who was not involved with the study. “If you give evolution a tool, it may not use it right away, but sooner or later it will take advantage of it.”

Source: https://www.quantamagazine.org/scientists-catch-jumping-genes-rewiring-genomes-20210512/

New Black Hole Math Closes Cosmic Blind Spot

Last year, just for the heck of it, Scott Field and Gaurav Khanna tried something that wasn’t supposed to work. The fact that it actually worked quite well is already starting to make some ripples.

Field and Khanna are researchers who try to figure out what black hole collisions should look like. These violent events don’t produce flashes of light, but rather the faint vibrations of gravitational waves, the quivers of space-time itself. But observing them is not as simple as sitting back and waiting for space to ring like a bell. To pick out such signals, researchers must constantly compare the data from gravitational wave detectors to the output of various mathematical models — calculations that reveal the potential signatures of a black hole collision. Without reliable models, astronomers wouldn’t have a clue what to look for.

The trouble is, the most trustworthy models come from Einstein’s general theory of relativity, which is described by 10 interlinked equations that are notoriously difficult to solve. To chronicle the complex interactions between colliding black holes, you can’t just use a pen and paper. The first so-called numerical relativity solutions to the Einstein equations for the case of a black hole merger were calculated only in 2005 — after decades of attempts. They required a supercomputer running on and off for two months.

A gravitational wave observatory like LIGO needs to have a large number of solutions to draw upon. In a perfect world, physicists could just run their model for every possible merger permutation — a black hole with a certain mass and spin encountering another with a different mass and spin — and compare those results with what the detector sees. But the calculations take a long time. “If you give me a big enough computer and enough time, you can model almost anything,” said Scott Hughes, a physicist at the Massachusetts Institute of Technology. “But there’s a practical issue. The amount of computer time is really exorbitant” — weeks or months on a supercomputer. And if those black holes are unevenly sized? The calculations would take so long that researchers consider the task practically impossible. Because of that, physicists are effectively unable to spot collisions between black holes with mass ratios greater than 10-to-1.

Which is one reason why Field and Khanna’s new work is so exciting. Field, a mathematician at the University of Massachusetts, Dartmouth, and Khanna, a physicist at the University of Rhode Island, have made an assumption that simplifies matters greatly: They treat the smaller black hole as a “point particle” — a speck of dust, an object with mass but zero radius and no event horizon.

“It’s like two ships passing in the ocean — one a rowboat, the other a cruise liner,” Field explained. “You wouldn’t expect the rowboat to affect the cruise liner’s trajectory in any way. We’re saying the small ship, the rowboat, can be completely ignored in this transaction.”

They expected it to work when the smaller black hole’s mass really was like a rowboat’s compared to a cruise ship’s. “If the mass ratio is on the order of 10,000-to-1, we feel very confident in making that approximation,” Khanna said.

But in research published last year, he and Field, along with graduate student Nur Rifat and Cornell physicist Vijay Varma, decided to test their model at mass ratios all the way down to 3-to-1 — a ratio so low it had never been tried, mainly because no one considered it worth trying. They found that even at this low extreme, their model agreed, to within about 1%, with results obtained by solving the full set of Einstein’s equations — an astounding level of accuracy.

“That’s when I really started to pay attention,” said Hughes. Their results at mass ratio 3, he added, were “pretty incredible.”

“It’s an important result,” said Niels Warburton, a physicist at University College Dublin who was not involved with the research.

The success of Field and Khanna’s model down to ratios of 3-to-1 gives researchers that much more confidence in using it at ratios of 10-to-1 and above. The hope is that this model, or one like it, could operate in regimes where numerical relativity cannot, allowing researchers to scrutinize a part of the universe that has been largely impenetrable.

How to Find a Black Hole

After black holes spiral toward each other and collide, the massive bodies create space-time-contorting disturbances — gravitational waves — that propagate through the universe. Eventually, some of these gravitational waves might reach Earth, where the LIGO and Virgo observatories wait. These enormous L-shaped detectors can sense the truly tiny stretching or squishing of space-time that these waves create — a shift 10,000 times smaller than the width of a proton.

The designers of these observatories have made herculean efforts to muffle stray noise, but when your signal is so weak, noise is a constant companion.

The first task in any gravitational wave detection is to try to extract a weak signal from that noise. Field compares the process to “driving in a car with a loud muffler and a lot of static on the radio, while thinking there might be a song, a faint melody, somewhere in that noisy background.”

Astronomers take the incoming stream of data and first ask if any of it is consistent with a previously modeled gravitational wave form. They might run this preliminary comparison against tens of thousands of signals stored in their “template bank.” Researchers can’t determine the exact black hole characteristics from this procedure. They’re just trying to figure out if there’s a song on the radio.
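As a toy picture of that comparison step, the sketch below slides a single made-up template along simulated noisy data and looks for a correlation peak. Real searches use noise-weighted matched filtering in the frequency domain over large template banks, so the template, the noise level and the injection here are all illustrative stand-ins.

```python
# Toy template comparison: correlate one stored waveform template against
# simulated detector noise containing a hidden copy of that template.
import numpy as np

rng = np.random.default_rng(0)

# Made-up "chirp"-like template: amplitude and frequency rise over time.
tt = np.linspace(0.0, 0.5, 2048)
template = tt * np.sin(2 * np.pi * (30 + 200 * tt) * tt)
template /= np.linalg.norm(template)

# Simulated detector output: Gaussian noise with the template injected at a known offset.
data = rng.normal(scale=0.5, size=16384)
injected_at = 6000
data[injected_at:injected_at + template.size] += 4.0 * template

# Slide the template across the data; a peak in the correlation marks a candidate signal.
correlation = np.correlate(data, template, mode="valid")
best = int(np.argmax(np.abs(correlation)))
print("strongest match at sample", best, "(injected at", injected_at, ")")
```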

The next step is analogous to identifying the song and determining who sang it and what instruments are playing. Researchers run tens of millions of simulations to compare the observed signal, or wave form, with those produced by black holes of differing masses and spins. This is where researchers can really nail down the details. The frequency of the gravitational wave tells you the total mass of the system. How that frequency changes over time reveals the mass ratio, and thus the masses of the individual black holes. The rate of change in the frequency also provides information about a black hole’s spin. Finally, the amplitude (or height) of the detected wave can reveal how far the system is from our telescopes on Earth.

If you have to do tens of millions of simulations, they’d better be quick. “To complete that in a day, you need to do each in about a millisecond,” said Rory Smith, an astronomer at Monash University and a member of the LIGO collaboration. Yet the time needed to run a single numerical relativity simulation — one that faithfully grinds its way through the Einstein equations — is measured in days, weeks or even months.

To speed up this process, researchers typically start with the results of full supercomputer simulations — of which several thousand have been carried out so far. They then use machine learning strategies to interpolate their data, Smith said, “filling in the gaps and mapping out the full space of possible simulations.”
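The interpolation idea can be caricatured in a few lines, assuming a made-up one-parameter "simulation" standing in for a supercomputer run; actual waveform surrogates interpolate far richer data over several parameters, so the function, the parameter grid and the spline choice below are assumptions for illustration only.

```python
# Toy surrogate model: pretend expensive_simulation() is a weeks-long
# supercomputer run, fit a cheap interpolant to a handful of runs, then
# query it at parameter values nobody simulated.
import numpy as np
from scipy.interpolate import CubicSpline

def expensive_simulation(mass_ratio):
    # Stand-in for a full numerical-relativity run (made-up smooth dependence).
    return 1.0 / (1.0 + mass_ratio) + 0.02 * np.sin(mass_ratio)

# A few "anchor" simulations spread across the parameter space.
anchor_q = np.array([1.0, 2.0, 4.0, 6.0, 8.0, 10.0])
anchor_values = np.array([expensive_simulation(q) for q in anchor_q])

# The surrogate interpolates between anchors and is essentially free to evaluate.
surrogate = CubicSpline(anchor_q, anchor_values)

q = 7.3  # a mass ratio with no anchor simulation
print("surrogate prediction:", float(surrogate(q)))
print("'true' simulation:   ", expensive_simulation(q))
```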

This “surrogate modeling” approach works well so long as the interpolated data doesn’t stray too far from the baseline simulations. But simulations for collisions with a high mass ratio are incredibly difficult. “The bigger the mass ratio, the more slowly the system of two inspiraling black holes evolves,” Warburton explained. For a typical low-mass-ratio computation, you need to look at 20 to 40 orbits before the black holes plunge together, he said. “For a mass ratio of 1,000, you need to look at 1,000 orbits, and that would just take too long” — on the order of years. This makes the task virtually “impossible, even if you have a supercomputer at your disposal,” Field said. “And without a revolutionary breakthrough, this won’t be possible in the near future either.”

Because of this, many of the full simulations used in surrogate modeling are between the mass ratios of 1 and 4; almost all are less than 10.  When LIGO and Virgo detected a merger with a mass ratio of 9 in 2019, it was right at the limit of their sensitivity. More events like this haven’t been found, Khanna explained, because “we don’t have reliable models from supercomputers for mass ratios above 10. We haven’t been looking because we don’t have the templates.”

That’s where the model that Field and Khanna have developed comes in. They started with their own point particle approximation model, which is specially designed to operate in the mass ratio range above 10. They then trained a surrogate model on it. The work opens up opportunities to detect the mergers of unevenly sized black holes.

What kinds of situations might create such mergers? Researchers aren’t sure, since this is a newly opening frontier of the universe. But there are a few possibilities.

First, astronomers can imagine an intermediate-mass black hole of perhaps 80 or 100 solar masses colliding with a smaller, stellar-size black hole of about 5 solar masses.

Another possibility would involve a collision between a garden-variety stellar black hole and a relatively puny black hole left over from the Big Bang — a “primordial” black hole. These could have as little as 1% of a solar mass, whereas the vast majority of black holes detected by LIGO so far weigh more than 10 solar masses.

Earlier this year, researchers at the Max Planck Institute for Gravitational Physics used Field and Khanna’s surrogate model to look through LIGO data for signs of gravitational waves emanating from mergers involving primordial black holes. And while they didn’t find any, they were able to place more precise limits on the possible abundance of this hypothetical class of black holes.

Furthermore, LISA, a planned space-based gravitational wave observatory, might one day be able to witness mergers between ordinary black holes and the supermassive varieties at the centers of galaxies — some with the mass of a billion or more suns. LISA’s future is uncertain; its earliest launch date is 2035, and its funding situation is still unclear. But if and when it does launch, we may see mergers at mass ratios above 1 million.

The Breaking Point

Some in the field, including Hughes, have described the new model’s success as “the unreasonable effectiveness of point particle approximations,” underscoring the fact that the model’s effectiveness at low mass ratios poses a genuine mystery. Why should researchers be able to ignore the critical details of the smaller black hole and still arrive at the right answer?

“It’s telling us something about the underlying physics,” Khanna said, though exactly what that is remains a source of curiosity. “We don’t have to concern ourselves with two objects surrounded by event horizons that can get distorted and interact with each other in strange ways.” But no one knows why.

In the absence of answers, Field and Khanna are trying to extend their model to more realistic situations. In a paper scheduled to be posted early this summer on the preprint server arxiv.org, the researchers give the larger black hole some spin, which is expected in an astrophysically realistic situation. Again, their model closely matches the findings of numerical relativity simulations at mass ratios down to 3.

They next plan to consider black holes that approach each other on elliptical rather than perfectly circular orbits. They’re also planning, in concert with Hughes, to introduce the notion of “misaligned orbits” — cases in which the black holes are askew relative to each other, orbiting in different geometric planes.

Finally, they’re hoping to learn from their model by trying to make it break. Could it work at a mass ratio of 2 or lower? Field and Khanna want to find out. “One gains confidence in an approximation method when one sees it fail,” said Richard Price, a physicist at MIT. “When you do an approximation that gets surprisingly good results, you wonder if you are somehow cheating, unconsciously using a result that you shouldn’t have access to.” If Field and Khanna push their model to the breaking point, he added, “then you’d really know that what you are doing is not cheating — that you just have an approximation that works better than you’d expect.”

Source: https://www.quantamagazine.org/new-black-hole-math-closes-cosmic-blind-spot-20210512/

How Mathematicians Use Homology to Make Sense of Topology

At first, topology can seem like an unusually imprecise branch of mathematics. It’s the study of squishy play-dough shapes capable of bending, stretching and compressing without limit. But topologists do have some restrictions: They cannot create or destroy holes within shapes. (It’s an old joke that topologists can’t tell the difference between a coffee mug and a doughnut, since they both have one hole.) While this might seem like a far cry from the rigors of algebra, a powerful idea called homology helps mathematicians connect these two worlds.

The word “hole” has many meanings in everyday speech — bubbles, rubber bands and bowls all have different kinds of holes. Mathematicians are interested in detecting a specific type of hole, which can be described as a closed and hollow space. A one-dimensional hole looks like a rubber band. The squiggly line that forms a rubber band is closed (unlike a loose piece of string) and hollow (unlike the perimeter of a penny).

Extending this logic, a two-dimensional hole looks like a hollow ball. The kinds of holes mathematicians are looking for — closed and hollow — are found in basketballs, but not bowls or bowling balls.

But mathematics traffics in rigor, and while thinking about holes this way may help point our intuition toward rubber bands and basketballs, it isn’t precise enough to qualify as a mathematical definition. It doesn’t clearly describe holes in higher dimensions, for instance, and you couldn’t program a computer to distinguish closed and hollow spaces.

“There’s not a good definition of a hole,” said Jose Perea of Michigan State University.

So instead, homology infers an object’s holes from its boundaries, a more precise mathematical concept. To study the holes in an object, mathematicians only need information about its boundaries.

The boundary of a shape is the collection of the points on its periphery, and a shape’s boundary is always one dimension lower than the shape itself. For example, the boundary of a one-dimensional line segment consists of the two points on either end. (Points are considered zero-dimensional.) The boundary of a solid triangle is the hollow triangle, which consists of one-dimensional edges. Similarly, the solid pyramid is bounded by a hollow pyramid.

If you stick two line segments together, the boundary points where they meet disappear. The boundary points are like the edge of a cliff — they are close to falling off the line. But when you connect the lines, the points that were on the edges are now securely in the center. Separately, the two lines had four total boundary points, but when they are stuck together, the resulting shape only has two boundary points.

If you can attach a third edge and close off the structure, creating a hollow triangle, then the boundary points disappear entirely. Each boundary point of the component edges cancels with another, and the hollow triangle is left with no boundary. So whenever a collection of lines forms a loop, the boundaries cancel.

Loops circle back on themselves, enclosing a central region. But the loop only forms a hole if the central region is hollow, as with a rubber band. A circle drawn on a sheet of paper forms a loop, but it is not a hole because the center is filled in. Loops that enclose a solid region — the non-hole kind — are the boundary of that two-dimensional region.

Therefore, holes have two important rigorous features. First, a hole has no boundary, because it forms a closed shape. And second, a hole is not the boundary of something else, because the hole itself must be hollow.

This definition can extend to higher dimensions. A two-dimensional solid triangle is bounded by three edges. If you attach several triangles together, some boundary edges disappear. When four triangles are arranged into a pyramid, each of the edges cancels with another one. So the walls of a pyramid have no boundary. If that pyramid is hollow — that is, it is not the boundary of a three-dimensional solid block — then it forms a two-dimensional hole.

To find all the types of holes within a particular topological shape, mathematicians build something called a chain complex, which forms the scaffolding of homology.

Many topological shapes can be built by gluing together pieces of different dimensions. The chain complex is a diagram that gives the assembly instructions for a shape. Individual pieces of the shape are grouped by dimension and then arranged hierarchically: The first level contains all the points, the next level contains all the lines, and so on. (There’s also an empty zeroth level, which simply serves as a foundation.) Each level is connected to the one below it by arrows, which indicate how they are glued together. For example, a solid triangle is linked to the three edges that form its boundary.

Mathematicians extract a shape’s homology from its chain complex, which provides structured data about the shape’s component parts and their boundaries — exactly what you need to describe holes in every dimension. When you use the chain complex, the processes for finding a 10-dimensional hole and a one-dimensional hole are nearly identical (except that one is much harder to visualize than the other).
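As a small illustration of how boundaries turn into a computation, the sketch below counts the holes of a hollow triangle from a hand-written boundary matrix; the edge orientations and the use of matrix ranks over the reals are choices made for this toy example.

```python
# Count the holes of a hollow triangle (3 vertices, 3 edges, no filled face)
# directly from its boundary map, using matrix ranks.
import numpy as np

# Boundary map from edges to vertices. Columns are the edges
# e0: v0->v1, e1: v1->v2, e2: v2->v0; rows are the vertices v0, v1, v2.
boundary_1 = np.array([
    [-1,  0,  1],
    [ 1, -1,  0],
    [ 0,  1, -1],
])

# The triangle is hollow, so there are no 2-dimensional faces and the
# boundary map from faces to edges is empty (rank 0).
rank_1 = np.linalg.matrix_rank(boundary_1)
rank_2 = 0

num_vertices, num_edges = 3, 3
betti_0 = num_vertices - rank_1          # number of connected components
betti_1 = num_edges - rank_1 - rank_2    # number of one-dimensional holes

print("connected components:", betti_0)  # 1
print("loops (1-dim holes): ", betti_1)  # 1: the hollow triangle is one loop
```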

The definition of homology is rigid enough that a computer can use it to find and count holes, which helps establish the rigor typically required in mathematics. It also allows researchers to use homology for an increasingly popular pursuit: analyzing data.

That’s because data can be visualized as points floating in space. These data points can represent the locations of physical objects, such as sensors, or positions in an abstract space, such as a description of food preferences, with nearby points indicating people who have a similar palate.

To form shapes from data, mathematicians draw lines between neighboring points. When three points are close together, they are filled in to form a solid triangle. When larger numbers of points are clustered together, they form more complicated and higher-dimensional shapes. Filling in the data points gives them texture and volume — it creates an image from the dots.
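Here is a toy version of that construction, assuming one fixed distance threshold; persistent homology, the tool usually applied to real data, instead sweeps the threshold and tracks which holes persist.

```python
# Toy version of turning a point cloud into a shape: connect any two points
# closer than a chosen threshold, and fill in a triangle whenever all three
# of its edges are present.
from itertools import combinations
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: noisy samples around a circle, so we expect one loop.
angles = rng.uniform(0.0, 2.0 * np.pi, size=30)
points = np.column_stack([np.cos(angles), np.sin(angles)])
points += rng.normal(scale=0.05, size=points.shape)

threshold = 0.5
edges = {
    (i, j)
    for i, j in combinations(range(len(points)), 2)
    if np.linalg.norm(points[i] - points[j]) < threshold
}
triangles = [
    (i, j, k)
    for i, j, k in combinations(range(len(points)), 3)
    if {(i, j), (j, k), (i, k)} <= edges
]
print(len(points), "points,", len(edges), "edges,", len(triangles), "filled triangles")
```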

Homology translates this world of vague shapes into the rigorous world of algebra, a branch of mathematics that studies particular numerical structures and symmetries. Mathematicians study the properties of these algebraic structures in a field known as homological algebra. From the algebra they indirectly learn information about the original topological shape of the data. Homology comes in many varieties, all of which connect with algebra.

“Homology is a familiar construction. We have a lot of algebraic things we know about it,” said Maggie Miller of the Massachusetts Institute of Technology.

The information provided by homology even accounts for the imprecision of data: If the data shifts just slightly, the numbers of holes should stay the same. And when large amounts of data are processed, the holes can reveal important features. For example, loops in time-varying data can indicate periodicity. Holes in other dimensions can show clusters and voids in the data.

“There’s a real impetus to have methods that are robust and that are pulling out qualitative features,” said Robert Ghrist of the University of Pennsylvania. “That’s what homology gives you.”

Source: https://www.quantamagazine.org/how-mathematicians-use-homology-to-make-sense-of-topology-20210511/
