
Technical Roadmap for Quantum Computing




This report sets out the technical steps needed to build a fully functional quantum computer. We give an overview of the subject and review the leading technologies for realising such a machine, including an estimate of the resources needed for real-world problems, and we address the most common concerns. We also discuss the applications that would become available on the way to a fully universal quantum computer, i.e. what can be achieved with a “small” quantum computer, in fields such as physics and chemistry simulation, encryption, and optimisation.

We hope that this technical report will be helpful to those who want to understand, engage with, develop, manufacture or invest in this technology.

Background to the Technical Roadmap

Momentum in quantum computing research has increased substantially in recent years, driven by the promise of performing previously impossible computing tasks and leading to a race to realise the world’s first universal quantum computing machine.

This progress has opened up many commercial opportunities, creating substantial interest from industry. Through interactions with the Networked Quantum Information Technologies (NQIT) Hub and the UK National Quantum Technologies Programme industrial networks, many of our industry partners have told us that quantum computing is a complex subject and that a detailed roadmap is needed to clarify its technical development and potential applications. Such a roadmap would help them to understand the status of the technology and make relevant business decisions. The Technical Roadmap for Fault-Tolerant Quantum Computing is our response to this request.

NQIT Technical Roadmap

Download the Technical Roadmap for Fault-Tolerant Quantum Computing.




Scientists Catch Jumping Genes Rewiring Genomes




Roughly 500 million years ago, something that would forever change the course of eukaryotic development was brewing in the genome of some lucky organism: a gene called Pax6. The gene is thought to have orchestrated the formation of a primitive visual system, and in organisms today, it initiates a genetic cascade that recruits more than 2,000 genes to build different parts of the eye.

Pax6 is only one of thousands of genes encoding transcription factors that each have the powerful ability to amplify and silence thousands of other genes. While geneticists have made leaps in understanding how genes with relatively simple, direct functions could have evolved, explanations for transcription factors have largely eluded scientists. The problem is that the success of a transcription factor depends on how usefully it targets huge numbers of sites throughout the genome simultaneously; it’s hard to picture how natural selection enables that to happen. The answer may hold the key to understanding how complex evolutionary novelties such as eyes arise, said Cédric Feschotte, a molecular biologist at Cornell University.

For more than a decade, Feschotte has pointed to transposons as the ultimate innovators in eukaryotic genomes. Transposons are genetic elements that can copy themselves and insert those copies throughout the genome using a splicing enzyme they make. Feschotte may have finally found the smoking gun he has been looking for: As he and his colleagues recently reported in Science, these jumping genes have fused with other genes nearly 100 times in tetrapods over the past 300 million years, and many of the resulting genetic mashups are likely to encode transcription factors.

The study provides a plausible explanation for how so-called master regulators like Pax6 could have been born, said Rachel Cosby, the first author of the new study, who was a doctoral student in Feschotte’s lab and is now a postdoc at the National Institutes of Health. Although scientists had theorized that Pax6 arose from a transposon hundreds of millions of years ago, mutations since that time have obscured clues about how it formed. “We could see that it was probably derived from a transposon, but it happened so long ago that we missed the window to see how it evolved,” she said.

David Adelson, chair of bioinformatics and computational genetics at the University of Adelaide in Australia, who was not involved with the study, said, “This study provides a good mechanistic understanding of how these new genes can form, and it squarely implicates the transposon activity itself as the cause.”

Scientists have long known that transposons can fuse with established genes because they have seen the unique genetic signatures of transposons in a handful of them, but the precise mechanism behind these unlikely fusion events has largely been unknown. By analyzing genes with transposon signatures from nearly 600 tetrapods, the researchers found 106 distinct genes that may have fused with a transposon. The human genome carries 44 genes likely to have been born this way.

The structure of genes in eukaryotes is complicated, because their blueprints for making proteins are broken up by introns. These noncoding sequences are transcribed, but they get snipped out of the messenger RNA transcripts before translation into protein occurs. But according to Feschotte’s new study, a transposon can occasionally hop into an intron and change what gets translated. In some of these cases, the protein made by the fusion gene is a mashup of the original product and the transposon’s splicing enzyme (transposase).

Once the fusion protein is created, “it has a ready-made set of potential binding sites scattered all over the genome,” Adelson said, because its transposase part is still drawn to transposons. The more potential binding sites for the fusion protein, the higher the likelihood that it changes gene expression in the cell, potentially giving rise to new functions.

“These aren’t just new genes, but entire new architectures for proteins,” Feschotte said.

Cosby described the 106 fusion genes described in the study as the “tiniest tip of the iceberg.” Adelson agreed and explained why: Events that randomly create fusion genes for functional, non-harmful proteins rely on a series of coincidences and must be exceedingly rare; for the fusion genes to spread throughout a population and withstand the test of time, nature must also positively select for them in some way. For the researchers to have found the examples described in the study so readily, transposons must surely cause fusion events much more often, he said.

“All of these steps are very unlikely to happen, but this is how evolution works,” Feschotte said. “It’s very quirky, opportunistic and very unlikely in the end, yet you see it happen over and over again on the timescales of hundreds of millions of years.”

To test whether the fusion genes acted as transcription factors, Cosby and her colleagues homed in on one that evolved in bats 25 million to 45 million years ago — a blink of an eye in evolutionary time. When they used CRISPR to delete it from the bat genome, the changes were striking: The removal dysregulated hundreds of genes. As soon as they restored it, normal gene activity resumed.

To Adelson, this shows that Cosby and her co-authors practically “caught one of these fusion events in the act.” He added, “It’s especially surprising because you wouldn’t expect a new transcription factor to cause wholesale rewiring of transcriptional networks if it had been acquired relatively recently.”

Although the researchers didn’t determine the function of the other fusion proteins definitively, the genetic hallmarks of transcription factors are there: Around a third of the fusion proteins contain a part called KRAB that is associated with repressing DNA transcription in animals. Why transposases tended to fuse with KRAB-encoding genes is a mystery, Feschotte said.

Transposons comprise a hefty chunk of eukaryotic DNA, yet organisms take extreme measures to carefully regulate their activity and prevent the havoc caused by problems such as genomic instability and harmful mutations. These dangers made Adelson wonder if fusion genes sometimes endanger orderly gene regulation. “Not only are you perturbing one thing, but you’re perturbing this whole cascade of things,” he said. “How is it that you can change expression of all these things and not have a three-headed bat?” Cosby, however, thinks it’s unlikely that a fusion gene leading to harmful morphogenic changes would readily propagate through a population.

Damon Lisch, a plant geneticist at Purdue University who studies transposable elements and was not involved with the study, said he hopes this study pushes back against a widespread but misguided notion that transposons are “junk DNA.” Transposable elements generate tremendous amounts of diversity and have been implicated in the evolution of the placenta and the adaptive immune system, he explained. “These are not junk — they’re living little creatures in your genome that are under very active selection over long periods of time, and what that means is that they evolve new functions to stay in your genome,” he said.

Though this study highlights the mechanism underlying transposase fusion genes, the vast majority of new genetic material is thought to form through genetic duplication, in which genes are accidentally copied and the extras diverge through mutation. But a large quantity of genetic material does not mean that new protein functions will be significant, said Cosby, who is continuing to investigate the function of the fusion proteins.

“Evolution is the ultimate tinkerer and ultimate opportunist,” said David Schatz, a molecular geneticist at Yale University who was not involved with the study. “If you give evolution a tool, it may not use it right away, but sooner or later it will take advantage of it.”




New Black Hole Math Closes Cosmic Blind Spot




Last year, just for the heck of it, Scott Field and Gaurav Khanna tried something that wasn’t supposed to work. The fact that it actually worked quite well is already starting to make some ripples.

Field and Khanna are researchers who try to figure out what black hole collisions should look like. These violent events don’t produce flashes of light, but rather the faint vibrations of gravitational waves, the quivers of space-time itself. But observing them is not as simple as sitting back and waiting for space to ring like a bell. To pick out such signals, researchers must constantly compare the data from gravitational wave detectors to the output of various mathematical models — calculations that reveal the potential signatures of a black hole collision. Without reliable models, astronomers wouldn’t have a clue what to look for.

The trouble is, the most trustworthy models come from Einstein’s general theory of relativity, which is described by 10 interlinked equations that are notoriously difficult to solve. To chronicle the complex interactions between colliding black holes, you can’t just use a pen and paper. The first so-called numerical relativity solutions to the Einstein equations for the case of a black hole merger were calculated only in 2005 — after decades of attempts. They required a supercomputer running on and off for two months.

A gravitational wave observatory like LIGO needs to have a large number of solutions to draw upon. In a perfect world, physicists could just run their model for every possible merger permutation — a black hole with a certain mass and spin encountering another with a different mass and spin — and compare those results with what the detector sees. But the calculations take a long time. “If you give me a big enough computer and enough time, you can model almost anything,” said Scott Hughes, a physicist at the Massachusetts Institute of Technology. “But there’s a practical issue. The amount of computer time is really exorbitant” — weeks or months on a supercomputer. And if those black holes are unevenly sized? The calculations would take so long that researchers consider the task practically impossible. Because of that, physicists are effectively unable to spot collisions between black holes with mass ratios greater than 10-to-1.

Which is one reason why Field and Khanna’s new work is so exciting. Field, a mathematician at the University of Massachusetts, Dartmouth, and Khanna, a physicist at the University of Rhode Island, have made an assumption that simplifies matters greatly: They treat the smaller black hole as a “point particle” — a speck of dust, an object with mass but zero radius and no event horizon.

“It’s like two ships passing in the ocean — one a rowboat, the other a cruise liner,” Field explained. “You wouldn’t expect the rowboat to affect the cruise liner’s trajectory in any way. We’re saying the small ship, the rowboat, can be completely ignored in this transaction.”

They expected it to work when the smaller black hole’s mass really was like a rowboat’s compared to a cruise ship’s. “If the mass ratio is on the order of 10,000-to-1, we feel very confident in making that approximation,” Khanna said.

But in research published last year, he and Field, along with graduate student Nur Rifat and Cornell physicist Vijay Varma, decided to test their model at mass ratios all the way down to 3-to-1 — a ratio so low it had never been tried, mainly because no one considered it worth trying. They found that even at this low extreme, their model agreed, to within about 1%, with results obtained by solving the full set of Einstein’s equations — an astounding level of accuracy.

“That’s when I really started to pay attention,” said Hughes. Their results at mass ratio 3, he added, were “pretty incredible.”

“It’s an important result,” said Niels Warburton, a physicist at University College Dublin who was not involved with the research.

The success of Field and Khanna’s model down to ratios of 3-to-1 gives researchers that much more confidence in using it at ratios of 10-to-1 and above. The hope is that this model, or one like it, could operate in regimes where numerical relativity cannot, allowing researchers to scrutinize a part of the universe that has been largely impenetrable.

How to Find a Black Hole

After black holes spiral toward each other and collide, the massive bodies create space-time-contorting disturbances — gravitational waves — that propagate through the universe. Eventually, some of these gravitational waves might reach Earth, where the LIGO and Virgo observatories wait. These enormous L-shaped detectors can sense the truly tiny stretching or squishing of space-time that these waves create — a shift 10,000 times smaller than the width of a proton.

The designers of these observatories have made herculean efforts to muffle stray noise, but when your signal is so weak, noise is a constant companion.

The first task in any gravitational wave detection is to try to extract a weak signal from that noise. Field compares the process to “driving in a car with a loud muffler and a lot of static on the radio, while thinking there might be a song, a faint melody, somewhere in that noisy background.”

Astronomers take the incoming stream of data and first ask if any of it is consistent with a previously modeled gravitational wave form. They might run this preliminary comparison against tens of thousands of signals stored in their “template bank.” Researchers can’t determine the exact black hole characteristics from this procedure. They’re just trying to figure out if there’s a song on the radio.
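The template-bank comparison can be sketched as a simple matched filter: score the noisy data against each stored waveform and keep the loudest match. This is only an illustrative toy, not LIGO's actual pipeline; the chirp waveform, the chirp rates in the bank, and the noise level are all invented for the example.

```python
import numpy as np

def chirp(t, f0, k):
    """Toy chirp waveform: frequency sweeps upward from f0 at rate k (Hz/s)."""
    return np.sin(2 * np.pi * (f0 * t + 0.5 * k * t**2))

fs = 1024.0                      # sample rate (Hz)
t = np.arange(0, 4, 1 / fs)      # 4 seconds of data

# A tiny "template bank": same start frequency, different chirp rates.
chirp_rates = [0.0, 5.0, 10.0, 20.0]
templates = {k: chirp(t, 30.0, k) for k in chirp_rates}

# Simulated detector output: one chirp buried in Gaussian noise.
rng = np.random.default_rng(0)
true_k = 10.0
data = chirp(t, 30.0, true_k) + rng.normal(0.0, 0.5, t.size)

# Matched filter: score each template by its (power-normalised) inner
# product with the data, then keep the best-matching one.
scores = {k: abs(np.dot(data, h)) / np.linalg.norm(h)
          for k, h in templates.items()}
best_k = max(scores, key=scores.get)
print(best_k)   # the template with the true chirp rate should win
```

Because mismatched chirps drift out of phase with the data over the observation window, their correlations average toward zero while the correct template accumulates coherently, which is why this works even when the signal is invisible by eye.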

The next step is analogous to identifying the song and determining who sang it and what instruments are playing. Researchers run tens of millions of simulations to compare the observed signal, or wave form, with those produced by black holes of differing masses and spins. This is where researchers can really nail down the details. The frequency of the gravitational wave tells you the total mass of the system. How that frequency changes over time reveals the mass ratio, and thus the masses of the individual black holes. The rate of change in the frequency also provides information about a black hole’s spin. Finally, the amplitude (or height) of the detected wave can reveal how far the system is from our telescopes on Earth.
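The claim that the frequency and its rate of change encode the masses can be made concrete with the leading-order "chirp mass" relation from general relativity, df/dt ∝ (G·Mc/c³)^(5/3) · f^(11/3). The sketch below just demonstrates the round trip (mass → frequency growth rate → mass); the 35 Hz frequency and 30-solar-mass chirp mass are made-up illustrative numbers.

```python
import math

G = 6.674e-11        # gravitational constant (m^3 kg^-1 s^-2)
c = 2.998e8          # speed of light (m/s)
M_SUN = 1.989e30     # solar mass (kg)

def fdot(f, chirp_mass_kg):
    """Leading-order growth rate df/dt of the gravitational-wave frequency."""
    return (96 / 5) * math.pi ** (8 / 3) \
        * (G * chirp_mass_kg / c**3) ** (5 / 3) * f ** (11 / 3)

def chirp_mass(f, fdot_val):
    """Invert the relation: recover the chirp mass from f and df/dt."""
    return (c**3 / G) \
        * ((5 / 96) * math.pi ** (-8 / 3) * f ** (-11 / 3) * fdot_val) ** (3 / 5)

# Round trip: a 30-solar-mass chirp mass observed at f = 35 Hz.
mc = 30 * M_SUN
rate = fdot(35.0, mc)
recovered = chirp_mass(35.0, rate) / M_SUN
print(recovered)   # ~30 solar masses
```

The chirp mass is a particular combination of the two individual masses, which is why additional information (higher-order effects in the waveform) is needed to pin down the mass ratio itself.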

If you have to do tens of millions of simulations, they’d better be quick. “To complete that in a day, you need to do each in about a millisecond,” said Rory Smith, an astronomer at Monash University and a member of the LIGO collaboration. Yet the time needed to run a single numerical relativity simulation — one that faithfully grinds its way through the Einstein equations — is measured in days, weeks or even months.

To speed up this process, researchers typically start with the results of full supercomputer simulations — of which several thousand have been carried out so far. They then use machine learning strategies to interpolate their data, Smith said, “filling in the gaps and mapping out the full space of possible simulations.”

This “surrogate modeling” approach works well so long as the interpolated data doesn’t stray too far from the baseline simulations. But simulations for collisions with a high mass ratio are incredibly difficult. “The bigger the mass ratio, the more slowly the system of two inspiraling black holes evolves,” Warburton explained. For a typical low-mass-ratio computation, you need to look at 20 to 40 orbits before the black holes plunge together, he said. “For a mass ratio of 1,000, you need to look at 1,000 orbits, and that would just take too long” — on the order of years. This makes the task virtually “impossible, even if you have a supercomputer at your disposal,” Field said. “And without a revolutionary breakthrough, this won’t be possible in the near future either.”

Because of this, many of the full simulations used in surrogate modeling are between the mass ratios of 1 and 4; almost all are less than 10.  When LIGO and Virgo detected a merger with a mass ratio of 9 in 2019, it was right at the limit of their sensitivity. More events like this haven’t been found, Khanna explained, because “we don’t have reliable models from supercomputers for mass ratios above 10. We haven’t been looking because we don’t have the templates.”

That’s where the model that Field and Khanna have developed comes in. They started with their own point particle approximation model, which is specially designed to operate in the mass ratio range above 10. They then trained a surrogate model on it. The work opens up opportunities to detect the mergers of unevenly sized black holes.

What kinds of situations might create such mergers? Researchers aren’t sure, since this is a newly opening frontier of the universe. But there are a few possibilities.

First, astronomers can imagine an intermediate-mass black hole of perhaps 80 or 100 solar masses colliding with a smaller, stellar-size black hole of about 5 solar masses.

Another possibility would involve a collision between a garden-variety stellar black hole and a relatively puny black hole left over from the Big Bang — a “primordial” black hole. These could have as little as 1% of a solar mass, whereas the vast majority of black holes detected by LIGO so far weigh more than 10 solar masses.

Earlier this year, researchers at the Max Planck Institute for Gravitational Physics used Field and Khanna’s surrogate model to look through LIGO data for signs of gravitational waves emanating from mergers involving primordial black holes. And while they didn’t find any, they were able to place more precise limits on the possible abundance of this hypothetical class of black holes.

Furthermore, LISA, a planned space-based gravitational wave observatory, might one day be able to witness mergers between ordinary black holes and the supermassive varieties at the centers of galaxies — some with the mass of a billion or more suns. LISA’s future is uncertain; its earliest launch date is 2035, and its funding situation is still unclear. But if and when it does launch, we may see mergers at mass ratios above 1 million.

The Breaking Point

Some in the field, including Hughes, have described the new model’s success as “the unreasonable effectiveness of point particle approximations,” underscoring the fact that the model’s effectiveness at low mass ratios poses a genuine mystery. Why should researchers be able to ignore the critical details of the smaller black hole and still arrive at the right answer?

“It’s telling us something about the underlying physics,” Khanna said, though exactly what that is remains a source of curiosity. “We don’t have to concern ourselves with two objects surrounded by event horizons that can get distorted and interact with each other in strange ways.” But no one knows why.

In the absence of answers, Field and Khanna are trying to extend their model to more realistic situations. In a paper scheduled to be posted early this summer on a preprint server, the researchers give the larger black hole some spin, which is expected in an astrophysically realistic situation. Again, their model closely matches the findings of numerical relativity simulations at mass ratios down to 3.

They next plan to consider black holes that approach each other on elliptical rather than perfectly circular orbits. They’re also planning, in concert with Hughes, to introduce the notion of “misaligned orbits” — cases in which the black holes are askew relative to each other, orbiting in different geometric planes.

Finally, they’re hoping to learn from their model by trying to make it break. Could it work at a mass ratio of 2 or lower? Field and Khanna want to find out. “One gains confidence in an approximation method when one sees it fail,” said Richard Price, a physicist at MIT. “When you do an approximation that gets surprisingly good results, you wonder if you are somehow cheating, unconsciously using a result that you shouldn’t have access to.” If Field and Khanna push their model to the breaking point, he added, “then you’d really know that what you are doing is not cheating — that you just have an approximation that works better than you’d expect.”




How Mathematicians Use Homology to Make Sense of Topology




At first, topology can seem like an unusually imprecise branch of mathematics. It’s the study of squishy play-dough shapes capable of bending, stretching and compressing without limit. But topologists do have some restrictions: They cannot create or destroy holes within shapes. (It’s an old joke that topologists can’t tell the difference between a coffee mug and a doughnut, since they both have one hole.) While this might seem like a far cry from the rigors of algebra, a powerful idea called homology helps mathematicians connect these two worlds.

The word “hole” has many meanings in everyday speech — bubbles, rubber bands and bowls all have different kinds of holes. Mathematicians are interested in detecting a specific type of hole, which can be described as a closed and hollow space. A one-dimensional hole looks like a rubber band. The squiggly line that forms a rubber band is closed (unlike a loose piece of string) and hollow (unlike the perimeter of a penny).

Extending this logic, a two-dimensional hole looks like a hollow ball. The kinds of holes mathematicians are looking for — closed and hollow — are found in basketballs, but not bowls or bowling balls.

But mathematics traffics in rigor, and while thinking about holes this way may help point our intuition toward rubber bands and basketballs, it isn’t precise enough to qualify as a mathematical definition. It doesn’t clearly describe holes in higher dimensions, for instance, and you couldn’t program a computer to distinguish closed and hollow spaces.

“There’s not a good definition of a hole,” said Jose Perea of Michigan State University.

So instead, homology infers an object’s holes from its boundaries, a more precise mathematical concept. To study the holes in an object, mathematicians only need information about its boundaries.

The boundary of a shape is the collection of the points on its periphery, and a shape’s boundary is always one dimension lower than the shape itself. For example, the boundary of a one-dimensional line segment consists of the two points on either end. (Points are considered zero-dimensional.) The boundary of a solid triangle is the hollow triangle, which consists of one-dimensional edges. Similarly, the solid pyramid is bounded by a hollow pyramid.

If you stick two line segments together, the boundary points where they meet disappear. The boundary points are like the edge of a cliff — they are close to falling off the line. But when you connect the lines, the points that were on the edges are now securely in the center. Separately, the two lines had four total boundary points, but when they are stuck together, the resulting shape only has two boundary points.

If you can attach a third edge and close off the structure, creating a hollow triangle, then the boundary points disappear entirely. Each boundary point of the component edges cancels with another, and the hollow triangle is left with no boundary. So whenever a collection of lines forms a loop, the boundaries cancel.

Loops circle back on themselves, enclosing a central region. But the loop only forms a hole if the central region is hollow, as with a rubber band. A circle drawn on a paper forms a loop, but it is not a hole because the center is filled in. Loops that enclose a solid region — the non-hole kind — are the boundary of that two-dimensional region.

Therefore, holes have two important rigorous features. First, a hole has no boundary, because it forms a closed shape. And second, a hole is not the boundary of something else, because the hole itself must be hollow.

This definition can extend to higher dimensions. A two-dimensional solid triangle is bounded by three edges. If you attach several triangles together, some boundary edges disappear. When four triangles are arranged into a pyramid, each of the edges cancels with another one. So the walls of a pyramid have no boundary. If that pyramid is hollow — that is, it is not the boundary of a three-dimensional solid block — then it forms a two-dimensional hole.

To find all the types of holes within a particular topological shape, mathematicians build something called a chain complex, which forms the scaffolding of homology.

Many topological shapes can be built by gluing together pieces of different dimensions. The chain complex is a diagram that gives the assembly instructions for a shape. Individual pieces of the shape are grouped by dimension and then arranged hierarchically: The first level contains all the points, the next level contains all the lines, and so on. (There’s also an empty zeroth level, which simply serves as a foundation.) Each level is connected to the one below it by arrows, which indicate how they are glued together. For example, a solid triangle is linked to the three edges that form its boundary.

Mathematicians extract a shape’s homology from its chain complex, which provides structured data about the shape’s component parts and their boundaries — exactly what you need to describe holes in every dimension. When you use the chain complex, the processes for finding a 10-dimensional hole and a one-dimensional hole are nearly identical (except that one is much harder to visualize than the other).
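The "no boundary, and not a boundary" definition turns into linear algebra once the chain complex is written as boundary matrices: the number of n-dimensional holes (the Betti number bₙ) is the dimension of the kernel of the n-th boundary map minus the rank of the (n+1)-th. A minimal sketch, contrasting the hollow triangle with the filled one:

```python
import numpy as np

def rank(m):
    return np.linalg.matrix_rank(m) if m.size else 0

def betti(d_n, d_np1):
    """b_n = dim ker(boundary_n) - rank(boundary_{n+1})."""
    ker_dim = d_n.shape[1] - rank(d_n)
    return ker_dim - rank(d_np1)

# Hollow triangle: vertices v0, v1, v2 and oriented edges
# e0 = v0->v1, e1 = v1->v2, e2 = v0->v2.  Column j of d1 records
# the boundary of edge j: end vertex minus start vertex.
d1 = np.array([[-1,  0, -1],
               [ 1, -1,  0],
               [ 0,  1,  1]])
d0 = np.zeros((0, 3))                   # vertices have empty boundary
d2_hollow = np.zeros((3, 0))            # no two-dimensional face
d2_filled = np.array([[1], [1], [-1]])  # the face's boundary: e0 + e1 - e2

print(betti(d0, d1))         # b0 = 1: one connected component
print(betti(d1, d2_hollow))  # b1 = 1: the hollow triangle has one hole
print(betti(d1, d2_filled))  # b1 = 0: filling the triangle kills the hole
```

The kernel of ∂₁ contains the loops (chains with no boundary), and subtracting the rank of ∂₂ discards the loops that are themselves boundaries of filled regions — a direct transcription of the two conditions above.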

The definition of homology is rigid enough that a computer can use it to find and count holes, which helps establish the rigor typically required in mathematics. It also allows researchers to use homology for an increasingly popular pursuit: analyzing data.

That’s because data can be visualized as points floating in space. These data points can represent the locations of physical objects, such as sensors, or positions in an abstract space, such as a description of food preferences, with nearby points indicating people who have a similar palate.

To form shapes from data, mathematicians draw lines between neighboring points. When three points are close together, they are filled in to form a solid triangle. When larger numbers of points are clustered together, they form more complicated and higher-dimensional shapes. Filling in the data points gives them texture and volume — it creates an image from the dots.
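The rule just described — connect nearby points, fill in triangles among mutually close triples — is essentially the Vietoris–Rips construction. A small sketch with made-up data: the four corners of a unit square, at a scale that links the sides but not the diagonals, yield a loop of edges with no filled triangles.

```python
import itertools
import math

def rips_complex(points, eps):
    """Connect pairs within eps; fill a triangle when all three pairs are close."""
    n = len(points)
    edges = [(i, j) for i, j in itertools.combinations(range(n), 2)
             if math.dist(points[i], points[j]) <= eps]
    edge_set = set(edges)
    triangles = [(i, j, k) for i, j, k in itertools.combinations(range(n), 3)
                 if {(i, j), (i, k), (j, k)} <= edge_set]
    return edges, triangles

# Four points at the corners of a unit square (side 1, diagonal ~1.414).
square = [(0, 0), (1, 0), (1, 1), (0, 1)]

edges, triangles = rips_complex(square, eps=1.1)
print(len(edges), len(triangles))    # 4 edges forming a loop, 0 triangles

# At a larger scale the diagonals appear and the loop gets filled in:
edges2, triangles2 = rips_complex(square, eps=1.5)
print(len(edges2), len(triangles2))  # 6 edges, 4 triangles
```

Varying the scale parameter eps and tracking which holes persist is the idea behind persistent homology, the workhorse of topological data analysis.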

Homology translates this world of vague shapes into the rigorous world of algebra, a branch of mathematics that studies particular numerical structures and symmetries. Mathematicians study the properties of these algebraic structures in a field known as homological algebra. From the algebra they indirectly learn information about the original topological shape of the data. Homology comes in many varieties, all of which connect with algebra.

“Homology is a familiar construction. We have a lot of algebraic things we know about it,” said Maggie Miller of the Massachusetts Institute of Technology.

The information provided by homology even accounts for the imprecision of data: If the data shifts just slightly, the numbers of holes should stay the same. And when large amounts of data are processed, the holes can reveal important features. For example, loops in time-varying data can indicate periodicity. Holes in other dimensions can show clusters and voids in the data.

“There’s a real impetus to have methods that are robust and that are pulling out qualitative features,” said Robert Ghrist of the University of Pennsylvania. “That’s what homology gives you.”




DNA’s Histone Spools Hint at How Complex Cells Evolved




Molecular biology has something in common with kite-flying competitions. At the latter, all eyes are on the colorful, elaborate, wildly kinetic constructions darting through the sky. Nobody looks at the humble reels or spools on which the kite strings are wound, even though the aerial performances depend on how skillfully those reels are handled. In the biology of complex cells, or eukaryotes, the ballet of molecules that transcribe and translate genomic DNA into proteins holds center stage, but that dance would be impossible without the underappreciated work of histone proteins gathering up the DNA into neat bundles and unpacking just enough of it when needed.

Histones, as linchpins of the apparatus for gene regulation, play a role in almost every function of eukaryotic cells. “In order to get complex, you have to have genome complexity, and evolve new gene families, and you have to have a cell cycle,” explained William Martin, an evolutionary biologist and biochemist at Heinrich Heine University in Germany. “And what’s in the middle of all this? Managing your DNA.”

New work on the structure and function of histones in ancient, simple cells has now made the longstanding, central importance of these proteins to gene regulation even clearer. Billions of years ago, the cells called archaea were already using histones much like our own to manage their DNA — but they did so with looser rules and much more variety. From those similarities and differences, researchers are gleaning new insights, not only into how the histones helped to shape the origins of complex life, but also into how variants of histones affect our own health today. At the same time, though, new studies of histones in an unusual group of viruses are complicating the answers about where our histones really came from.

Dealing With Too Much DNA

Eukaryotes arose about 2 billion years ago, when a bacterium that could metabolize oxygen for energy took up residence inside an archaeal cell. That symbiotic partnership was revolutionary because energy production from that proto-mitochondrion suddenly made expressing genes much more metabolically affordable, Martin argues. The new eukaryotes suddenly had free rein to expand the size and diversity of their genomes and to conduct myriad evolutionary experiments, laying the foundation for the countless eukaryotic innovations seen in life today. “Eukaryotes are an archaeal genetic apparatus that survives with the help of bacterial energy metabolism,” Martin said.

But the early eukaryotes went through serious growing pains as their genomes expanded: The larger genome brought new problems stemming from the need to manage an increasingly unwieldy string of DNA. That DNA had to be accessible to the cell’s machinery for transcribing and replicating it without getting tangled up in a hopeless spaghetti ball.

The DNA also sometimes needed to be compact, both to help regulate transcription and to separate the identical copies of DNA during cell division. And one danger of careless compaction is that DNA strands can irreversibly bind together if the backbone of one interacts with the groove of another, rendering the DNA useless.

Bacteria have a solution for this that involves a variety of proteins jointly “supercoiling” the cells’ relatively limited libraries of DNA. But eukaryotes’ DNA management solution is to use histone proteins, which have a unique ability to wrap DNA around themselves rather than just sticking to it. The four primary histones of eukaryotes — H2A, H2B, H3 and H4 — assemble into octamers with two copies of each. These octamers, called nucleosomes, are the basic units of eukaryotic DNA packaging.

By curving the DNA around the nucleosome, the histones prevent it from clumping together and keep it functional. It’s an ingenious solution — but eukaryotes didn’t invent it entirely on their own.

Back in the 1980s, when the cellular and molecular biologist Kathleen Sandman was a postdoc at Ohio State University, she and her adviser, John Reeve, identified and sequenced the first known histones in archaea. They showed how the four principal eukaryotic histones were related to each other and to the archaeal histones. Their work provided the early evidence that in the original endosymbiotic event that led to eukaryotes, the host was likely to have been an archaeal cell.

But it would be a teleological mistake to think that archaeal histones were just waiting for the arrival of eukaryotes and the chance to enlarge their genomes. “A lot of these early hypotheses looked at histones in terms of their ability to allow the cell to expand its genome. But that doesn’t really tell you why they were there in the first place,” said Siavash Kurdistani, a biochemist at the University of California, Los Angeles.

As a first step toward those answers, Sandman joined forces several years ago with the structural biologist Karolin Luger, who solved the structure of the eukaryotic nucleosome in 1997. Together, they worked out the crystallized structure of the archaeal nucleosome, which they published with colleagues in 2017. They found that the archaeal nucleosomes are “uncannily similar” in structure to eukaryotic nucleosomes, Luger said — despite the marked differences in their peptide sequences.

Archaeal nucleosomes had already “figured out how to bind and bend DNA in this beautiful arc,” said Luger, now a Howard Hughes Medical Institute investigator at the University of Colorado, Boulder. But unlike eukaryotic nucleosomes, the archaeal nucleosomes in the crystal structure seemed to form looser, Slinky-like assemblies of varying sizes.

In a paper in eLife published in March, Luger, her postdoc Samuel Bowerman, and Jeff Wereszczynski of the Illinois Institute of Technology followed up on the 2017 paper. They used cryo-electron microscopy to solve the structure of the archaeal nucleosome in a state more representative of a live cell. Their observations confirmed that the structures of archaeal nucleosomes are less fixed. Eukaryotic nucleosomes are always stably wrapped by about 147 base pairs of DNA, and always consist of just eight histones. (For eukaryotic nucleosomes, “the buck stops at eight,” Luger said.) Their equivalents in archaea wind up anywhere from 60 to 600 base pairs of DNA. These “archaeasomes” sometimes hold as few as three histone dimers, but the largest ones consist of as many as 15 dimers.

They also found that unlike the tight eukaryotic nucleosomes, the Slinky-like archaeasomes flop open stochastically, like clamshells. The researchers suggested that this arrangement simplifies gene expression for the archaea, because unlike eukaryotes, they don’t need any energetically expensive supplemental proteins to help unwind DNA from the histones to make them available for transcription.
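The contrast described above can be captured in a toy data model: eukaryotic nucleosomes are fixed at an octamer (four histone dimers) wrapping about 147 base pairs, while archaeasomes vary from 3 to 15 dimers winding roughly 60 to 600 base pairs. This is an illustrative sketch only; the linear scaling of DNA length with dimer count between those endpoints is an assumption for the example, not a measured relationship.

```python
from dataclasses import dataclass

@dataclass
class Nucleosome:
    histone_dimers: int  # number of histone dimers in the core particle
    dna_bp: int          # base pairs of DNA wrapped around the core

def eukaryotic_nucleosome() -> Nucleosome:
    """Eukaryotic nucleosomes are fixed: an octamer (4 dimers) wrapping ~147 bp."""
    return Nucleosome(histone_dimers=4, dna_bp=147)

def archaeasome(dimers: int) -> Nucleosome:
    """Archaeal 'archaeasomes' hold 3 to 15 dimers, winding ~60 to ~600 bp.

    The linear interpolation between those observed endpoints is an
    assumption made for illustration.
    """
    if not 3 <= dimers <= 15:
        raise ValueError("archaeasomes were observed with 3 to 15 histone dimers")
    # Interpolate linearly: 3 dimers -> 60 bp, 15 dimers -> 600 bp.
    bp = 60 + (dimers - 3) * (600 - 60) // (15 - 3)
    return Nucleosome(histone_dimers=dimers, dna_bp=bp)
```

The fixed return value in one function versus the parameterized range in the other mirrors the article's point: eukaryotic packaging is rigidly uniform, archaeal packaging is variable.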

That’s why Tobias Warnecke, who studies archaeal histones at Imperial College London, thinks that “there’s something special that must have happened at the dawn of eukaryotes, where we transition from just having simple histones … to having octameric nucleosomes. And they seem to be doing something qualitatively different.”

What that is, however, is still a mystery. In archaeal species, there are “quite a few that have histones, and there are other species that don’t have histones. And even those that do have histones vary quite a lot,” Warnecke said. Last December, he published a paper showing that there are diverse variants of histone proteins with different functions. The histone-DNA complexes vary in their stability and affinity for DNA. But they are not as stably or regularly organized as eukaryotic nucleosomes.

As puzzling as the diversity of archaeal histones is, it provides an opportunity to understand the different possible ways of building systems of gene expression. That’s something we cannot glean from the relative “boringness” of eukaryotes, Warnecke says: Through understanding the combinatorics of archaeal systems, “we can also figure out what’s special about eukaryotic systems.” The variety of different histone types and configurations in archaea may also help us deduce what they might have been doing before their role in gene regulation solidified.

A Protective Role for Histones

Because archaea are relatively simple prokaryotes with small genomes, “I don’t think that the original role of histones was to control gene expression, or at least not in a manner that we are used to from eukaryotes,” Warnecke said. Instead, he hypothesizes that histones might have protected the genome from damage.

Archaea often live in extreme environments, like hot springs and volcanic vents on the seafloor, characterized by high temperatures, high pressures, high salinity, high acidity or other threats. Stabilizing their DNA with histones may make it harder for the DNA strands to melt in those extreme conditions. Histones also might protect archaea against invaders, such as phages or transposable elements, which would find it harder to integrate into the genome when it’s wrapped around the proteins.

Kurdistani agrees. “If you were studying archaea 2 billion years ago, genome compaction and gene regulation are not the first things that would come to mind when you are thinking about histones,” he said. In fact, he has tentatively speculated about a different kind of chemical protection that histones might have offered the archaea.

Last July, Kurdistani’s team reported that in yeast nucleosomes, there is a catalytic site at the interface of two histone H3 proteins that can bind and electrochemically reduce copper. To unpack the evolutionary significance of this, Kurdistani goes back to the massive increase in oxygen on Earth, the Great Oxidation Event, that occurred around the time that eukaryotes first evolved more than 2 billion years ago. Higher oxygen levels must have caused a global oxidation of metals like copper and iron, which are critical for biochemistry (although toxic in excess). Once oxidized, the metals would have become less available to cells, so any cells that kept the metals in reduced form would have had an advantage.

During the Great Oxidation Event, the ability to reduce copper would have been “an extremely valuable commodity,” Kurdistani said. It might have been particularly attractive to the bacteria that were forerunners of mitochondria, since cytochrome c oxidase, the last enzyme in the chain of reactions that mitochondria use to produce energy, requires copper to function.

Because archaea live in extreme environments, they might have found ways to generate and handle reduced copper without being killed by it long before the Great Oxidation Event. If so, proto-mitochondria might have invaded archaeal hosts to steal their reduced copper, Kurdistani suggests.

The hypothesis is intriguing because it could explain why the eukaryotes appeared when oxygen levels went up in the atmosphere. “There was 1.5 billion years of life before that, and no sign of eukaryotes,” Kurdistani said. “So the idea that oxygen drove the formation of the first eukaryotic cell, to me, should be central to any hypotheses that try to come up with why these features developed.”

Kurdistani’s conjecture also suggests an alternative hypothesis for why eukaryotic genomes got so big. The histones’ copper-reducing activity only occurs at the interface of the two H3 histones inside an assembled nucleosome wrapped with DNA. “I think there’s a distinct possibility that the cell wanted more histones. And the only way to do that was to expand this DNA repertoire,” Kurdistani said. With more DNA, cells could wrap more nucleosomes and enable the histones to reduce more copper, which would support more mitochondrial activity. “It wasn’t just that histones allowed for more DNA, but more DNA allowed for more histones,” he said.

“One of the neat things about this is that copper is very dangerous because it will break DNA,” said Steven Henikoff, a chromatin biologist and HHMI investigator at the Fred Hutchinson Cancer Research Center in Seattle. “Here’s a place where you have the active form of copper being made, and it’s right next to the DNA, but it doesn’t break the DNA because, presumably, it’s in a tightly packaged form,” he said. By wrapping the DNA, the nucleosomes keep the DNA safely out of the way.

The hypothesis potentially explains aspects of how the architecture of the eukaryotic genome evolved, but it has met with some skepticism. The key outstanding question is whether archaeal histones have the same copper-reducing ability that some eukaryotic ones do. Kurdistani is investigating this now.

The bottom line is that we still don’t definitively know what functions histones served in the archaea. But even so, “the fact that you see them conserved over long distances strongly suggests that they are doing something distinct and important,” Warnecke said. “We just need to find out what it is.”

Histones Are Still Evolving

Although the complex eukaryotic histone apparatus has not changed much since its origin about a billion years ago, it hasn’t been totally frozen. In 2018, a team at the Fred Hutchinson Cancer Research Center reported that a set of short histone variants called H2A.B is evolving rapidly. The pace of the changes is a sure sign of an “arms race” between genes vying for control over regulatory resources. It wasn’t initially clear to the researchers what the genetic conflict was about, but through a series of elegant crossbreeding experiments in mice, they eventually showed that the H2A.B variants dictated the survival and growth rate of embryos, as reported in December in PLOS Biology.

The findings suggested that paternal and maternal versions of the histone variants are mediating a conflict over how to allocate resources to the offspring during pregnancy. They are rare examples of parental-effect genes — ones that don’t directly affect the individual carrying them, but instead strongly affect the individual’s offspring.

The H2A.B variants arose with the first mammals, when the evolution of in utero development rewrote the “contract” for parental investment. Mothers had always invested a lot of resources in their eggs, but mammalian mothers also suddenly became responsible for the early development of their progeny. That set up a conflict: Paternal genes in the embryo had nothing to lose by demanding resources aggressively, while the maternal genes benefited from moderating the burden to spare the mother and let her live to breed another day.

“That negotiation is still ongoing,” said Harmit Malik, an HHMI investigator at the Fred Hutchinson Cancer Research Center who studies genetic conflicts. Exactly how the histones affect the growth and viability of offspring is still not completely understood, but Antoine Molaro, the postdoctoral fellow who led the work and who now leads his own research group at the University of Clermont Auvergne in France, is investigating it.

Some histone variants may cause health problems, too. In January, Molaro, Malik, Henikoff and their colleagues reported that short H2A histone variants are implicated in some cancers: More than half of diffuse large B cell lymphomas carry mutations in them. Other histone variants are associated with neurodegenerative diseases.

But little is yet understood about how a single copy of a histone variant can produce such dramatic disease effects. The obvious hypothesis is that the variants affect the stability of nucleosomes and disrupt their signaling functions, changing gene expression in a way that alters cell physiology. But if histones can act as enzymes, then Kurdistani suggests another possibility: The variants may alter enzymatic activity inside cells.

An Alternative Viral Origin?

Despite the decades-old evidence from Sandman and others that eukaryotic histones evolved from archaeal histones, some intriguing recent work has unexpectedly opened the door to an alternative theory about their origins. According to a paper published on April 29 in Nature Structural & Molecular Biology, giant viruses of the Marseilleviridae family have viral histones that are recognizably related to the four main eukaryotic histones. The only difference is that in the viral versions, the histones that routinely pair up within the octamer (H2A with H2B, and H3 with H4) in eukaryotes are already fused into doublets. The fused viral histones form structures that are “virtually identical to canonical eukaryotic nucleosomes,” according to the paper’s authors.

Luger’s team posted a preprint about viral histones the same day, showing that in the cytoplasm of infected cells, viral histones stay near the “factories” that produce new viral particles.

“Here’s the thing that is really compelling,” said Henikoff, who was among the authors on the new Nature Structural & Molecular Biology paper. “All of the histone variants turn out to be derived from a common ancestor that was shared between eukaryotes and giant viruses. By standard phylogenetic criteria, these are a sister group to eukaryotes.”

It makes a compelling case that this common ancestor is where the eukaryotic histones came from, he says. A “proto-eukaryote” that had histone doublets might have been ancestral to both the giant viruses and eukaryotes and could have passed the proteins along to both lines of organisms a very long time ago.

Warnecke, however, is skeptical about inferring phylogenetic relationships from viral sequences, which are notoriously mutable. As he explained in an email to Quanta, reasons other than shared ancestry might explain how the histones ended up in both lineages. In addition, the idea would require that the histone doublets later “unfused” into the H2A, H2B, H3 and H4 histones, because there are no doublets of those histones in extant eukaryotes. “How and why that would have happened is unclear,” he wrote.

Although Warnecke is not convinced that the viral histones tell us much about the origin of eukaryotic histones, he is fascinated by their possible functions. One possibility is that they help to compact the viral DNA; another idea is that they could be disguising the viral DNA from the host’s defenses.

Histones have had myriad roles since the dawn of time. But it was really in the eukaryotes that they became the linchpins for complex life and countless evolutionary innovations. That’s why Martin calls the histone “a basic building block that never could realize its full potential without the help of mitochondria.”
