
Physicists Chip Away at a Mystery: Why Does Glass Exist?

In 2008, Miguel Ramos read in the newspaper that 110-million-year-old amber bearing pristine Mesozoic insects had been discovered a few hours’ drive from Madrid, where he lived. A physicist who specializes in glass, Ramos had wanted for years to get his hands on ancient amber. He contacted the paleontologists working at the site, who invited him to visit.

Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.

“They provided me with the clear samples that are not good for them,” he said. “They have no interesting insects or whatever … but they are perfect for me.”

Ramos spent the next several years intermittently working on measurements of the ancient glass. He hoped that the fossilized tree resin, after aging for so long, might approach a hypothetical form of matter known as ideal glass.

For decades, physicists have dreamed of this perfect amorphous solid. They desire ideal glass not so much for its own sake (though it would have unique, useful properties) but because its existence would solve a deep mystery. It’s the mystery posed by every window and mirror, every piece of plastic and hard candy, and even the cytoplasm that fills every cell. All of these materials are technically glass, for glass is anything that’s solid and rigid but made of disordered molecules like those in a liquid. Glass is a liquid in suspended animation, a liquid whose molecules curiously cannot flow. Ideal glass, if it exists, would tell us why.

Samples of amber in Ramos’s lab. Photograph: James Rajotte/Quanta Magazine

Inconveniently, ideal glass would take so long to form that it may not have done so in all of cosmic history. Physicists can only seek indirect evidence that, given unlimited time, it would. Ramos, an experimental physicist at the Autonomous University of Madrid, hoped that after 110 million years of aging, the Spanish amber might have started to show glimmers of perfection. If so, he would know what the molecules in ordinary glass are really doing when they appear to do nothing.

Ramos’s amber measurements are part of a surge of interest in ideal glass. In the past few years, new methods of making glass and simulating it on computers have led to unexpected progress. Major clues have emerged about the nature of ideal glass and its connection to ordinary glass. “These studies provide renewed support for the hypothesis of the existence of an ideal-glass state,” said Ludovic Berthier, a physicist at the University of Montpellier who was centrally involved in the recent computer simulations.

But the emerging picture of ideal glass only makes sense if we set aside one piece of evidence.

“Indeed,” Berthier said, “the amber work stands out as difficult to rationalize.”

The Paradox of Glass

When you cool a liquid, it will either crystallize or harden into glass. Which of the two happens depends on the substance and on the subtleties of the process that glassblowers have learned through trial and error over thousands of years. “Avoiding crystallization is a dark art,” said Paddy Royall, a glass physicist at the University of Bristol in the United Kingdom.

The two options differ greatly.

Crystallization is a dramatic switch from the liquid phase, in which molecules are disordered and free flowing, to the crystal phase, in which molecules are locked in a regular, repeating pattern. Water freezes into ice at zero degrees Celsius, for instance, because the H2O molecules stop jiggling around just enough at that temperature to feel each other’s forces and fall into lockstep.

Other liquids, when cooled, more easily become glass. Silica, for example—window glass—starts as a molten liquid well above 1,000 degrees Celsius; as it cools, its disordered molecules contract slightly, crowding a bit closer together, which makes the liquid increasingly viscous. Eventually, the molecules stop moving altogether. In this gradual glass transition, the molecules don’t reorganize. They simply grind to a halt.

Illustration: Lucy Reading-Ikkanda/Quanta Magazine

Exactly why the cooling liquid hardens remains unknown. If the molecules in glass were simply too cold to flow, it should still be possible to squish them into new arrangements. But glass doesn’t squish; its jumbled molecules are truly rigid, despite looking the same as molecules in a liquid. “Liquid and glass have the same structure, but behave differently,” said Camille Scalliet, a glass theorist at the University of Cambridge. “Understanding that is the main question.”


A clue came in 1948, when a young chemist named Walter Kauzmann noticed what became known as the entropy crisis, a glassy paradox that later researchers realized ideal glass could resolve.

Kauzmann knew that the more slowly you cool a liquid, the more you can cool it before it transitions into glass. And slower-formed glass ends up denser and more stable, because its molecules had longer to shuffle around (while the liquid was still viscous) and find tighter, lower-energy arrangements. Measurements indicated a corresponding reduction in the entropy, or disorder, of the slower-formed glass—fewer ways its molecules could be arranged with the same low energy.

Extrapolating the trend, Kauzmann realized that if you could cool a liquid slowly enough, you could cool it all the way down to a temperature now known as the Kauzmann temperature before it fully hardened. At that temperature, the resulting glass would have an entropy as low as that of a crystal. But crystals are neat, orderly structures. How could glass, disordered by definition, possess equal order?
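
The extrapolation itself is simple enough to sketch. Below is a minimal Python illustration, using invented numbers purely for the shape of the argument (they are not measurements): fit the shrinking excess entropy of a hypothetical supercooled liquid and extend the line to the temperature where it would reach zero.

```python
# Schematic illustration of Kauzmann's extrapolation. The (temperature, excess
# entropy) pairs below are hypothetical, in arbitrary units; imagine they came
# from a series of ever-slower cooling experiments.
import numpy as np

T = np.array([300.0, 280.0, 260.0, 240.0])            # temperatures
excess_entropy = np.array([20.0, 16.5, 13.0, 9.5])    # liquid entropy minus crystal entropy

# Fit the trend and extrapolate to where the excess entropy would vanish.
# Below that "Kauzmann temperature" the disordered liquid would be more ordered
# than the crystal, which is the paradox an ideal glass would resolve.
slope, intercept = np.polyfit(T, excess_entropy, 1)
T_K = -intercept / slope
print(f"extrapolated Kauzmann temperature: {T_K:.1f} (arbitrary units)")
```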

No ordinary glass could, which implied that something special must happen at the Kauzmann temperature. Crisis would be avoided if a liquid, upon reaching that temperature, attained the ideal-glass state—the densest possible random packing of molecules. Such a state would exhibit “long-range amorphous order,” where each molecule feels and affects the position of every other, so that in order to move, they must move as one. The hidden long-range order of this putative state could rival the more obvious orderliness of a crystal. “That observation right there was at the heart of why people thought there should be an ideal glass,” said Mark Ediger, a chemical physicist at the University of Wisconsin, Madison.

According to this theory, first advanced by Julian Gibbs and Edmund DiMarzio in 1958, ideal glass is a true phase of matter, akin to the liquid and crystal phases. The transition to this phase just takes too long, requiring too slow a cooling process, for scientists to ever see. The ideal-glass transition is “masked,” said Daniel Stein, a condensed matter physicist at New York University, by the liquid becoming “so viscous that everything is arrested.”

“It’s sort of like looking through a glass darkly,” Stein said. “We can’t get to [ideal glass] or see it. But we can theoretically try to create accurate models of what’s going on there.”

A New Glass

Unexpected help has come from experiments. There was never any hope of forming ideal glass by cooling a liquid, the glassmaking method humans have used for millennia. You’d have to cool a liquid impossibly slowly—perhaps even infinitely slowly—to keep it from hardening before it hit the Kauzmann temperature. But in 2007, Ediger, the Wisconsin physicist, developed a new method of glassmaking. “We figured out there was another way to make glasses that are high density and close to the ideal-glass state by a completely different route,” he said.

Ediger and his team discovered that they could create “ultra-stable glasses” that exist in a state somewhere between ordinary and ideal. Using a method called vapor deposition, they dropped molecules one by one onto a surface as if they were playing Tetris, allowing each molecule to settle into its snuggest fit in the forming glass before the next molecule came down. The resulting glass was denser, more stable, and lower in entropy than all of the glasses throughout human history. “These materials have the properties that you would expect if you took a liquid and cooled it over the course of a million years,” Ediger said.

Another property of ultra-stable glass would eventually reveal the most promising road map to ideal glass.


Two groups, one of them led by Miguel Ramos in Madrid, identified that property in 2014, when they found that ultra-stable glass departs from a universal characteristic of all ordinary glass.

Vapor-deposited glass can have different properties depending on the temperature at which it is created. Researchers maintained a temperature gradient across this sample, which led to the rainbow effect; the ultra-stable glass lies toward the middle of the sample. Photograph: Diane Walters/University of Wisconsin-Madison

Physicists have known for decades that ultra-cold glass has a high heat capacity—the amount of heat needed to raise its temperature. Glass can take way more heat than a crystal can near absolute zero, with a heat capacity that’s directly proportional to the temperature.

Theorists including Phil Anderson, the revered Nobel Prize-winning condensed matter physicist, suggested an explanation in the early 1970s. They argued that glass contains many “two-level systems,” little clusters of atoms or molecules that can slip back and forth between two alternative, equally stable configurations. “You can imagine a whole bunch of atoms kind of shifting from one configuration to a very slightly different configuration,” said Frances Hellman of the University of California, Berkeley, “which just doesn’t exist in a crystalline material.”

Although the atoms or molecules are too boxed in by their neighbors to do much switching on their own, at room temperature, heat activates the two-level systems, providing the atoms with the energy they need to shuffle around. This activity diminishes as the glass’s temperature drops. But near absolute zero, quantum effects become important: Groups of atoms in the glass can quantum mechanically “tunnel” between the alternative configurations, passing right through any obstacles, and even occupy both levels of the two-level system at once. The tunneling absorbs a lot of heat, producing glass’s characteristic high heat capacity.
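
The link between two-level systems and glass’s linear-in-temperature heat capacity can be made concrete with a short calculation. The sketch below is a generic textbook-style estimate, not a result from the experiments described here: it averages the heat capacity of a single two-level system over a broad, flat distribution of energy splittings, and in reduced units (Boltzmann’s constant set to 1) the total comes out proportional to temperature.

```python
# Heat capacity of a collection of two-level systems with a flat distribution of
# energy splittings (unit density), in reduced units with k_B = 1. At low
# temperature the total heat capacity grows linearly with T, the behavior
# described above for ordinary glass.
import numpy as np

def schottky_c(E, T):
    """Heat capacity of one two-level system with energy splitting E at temperature T."""
    x = E / T
    return x**2 * np.exp(-x) / (1.0 + np.exp(-x))**2

def glass_heat_capacity(T, E_max=1.0, n=20000):
    """Sum the Schottky heat capacity over a flat distribution of splittings up to E_max."""
    E = np.linspace(1e-6, E_max, n)
    return schottky_c(E, T).sum() * (E[1] - E[0])

for T in (0.001, 0.002, 0.004, 0.008):
    c = glass_heat_capacity(T)
    print(f"T = {T:.3f}   C = {c:.5f}   C/T = {c/T:.2f}")   # C/T stays roughly constant
```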

Several years after Ediger figured out how to make ultra-stable glass, Hellman’s group at Berkeley and Ramos’ in Madrid independently set out to study whether it might depart from that universal heat capacity near absolute zero. In their respective experiments, they probed the low-temperature properties of ultra-stable silicon and ultra-stable indomethacin (a chemical that’s also used as an anti-inflammatory drug). Sure enough, they found that both glasses had far lower heat capacity than usual near absolute zero, in line with a crystal’s. This suggested that ultra-stable glass has fewer two-level systems to tunnel between. The molecules are in especially snug configurations with few competitors.

Ramos cools amber down to temperatures close to absolute zero to test how closely it approaches the state of ideal glass. Photograph: James Rajotte/Quanta Magazine

If ultra-stable glass’s exceptionally low heat capacity really does come from having fewer two-level systems, then ideal glass naturally corresponds to the state with no two-level systems at all. “It’s just perfectly, somehow, positioned where all the atoms are disordered—it doesn’t have a crystal structure—but there’s nothing moving at all,” said David Reichman, a theorist at Columbia University.

Furthermore, the drive toward this state of perfect long-range amorphous order, where each molecule affects the positions of all others, could be what causes liquids to harden into the glass we see (and see through) all around us.

In this emerging picture, when a liquid becomes a glass, it’s actually attempting to transition to the ideal-glass phase, drawn by a fundamental pull toward long-range order. The ideal glass is the endpoint, Royall said, but as the molecules try to crowd closer together, they get stuck; the increasing viscosity prevents the system from ever reaching the desired state.

Recently, groundbreaking computer simulations were used to test these ideas. Simulating ultra-stable glass on a computer used to be infeasible because of the extraordinary computing time required for the simulated molecules to crowd together. Two years ago, though, Berthier found a trick that allowed him to speed up the process by a factor of 1 trillion. His algorithm picks two particles at random and swaps their positions. These shake-ups help the simulated liquid stay unstuck, allowing molecules to settle into snugger fits—just as the ability to swap two ill-fitting shapes would help in Tetris.
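
Here is a minimal Python sketch of one such swap move, to make the idea concrete. It is not Berthier’s production algorithm: the soft-sphere pair energy, the particle sizes and every parameter below are illustrative assumptions, and exchanging the diameters of two particles is equivalent to swapping their positions.

```python
# A single swap Monte Carlo move for a toy 2D mixture of soft disks of different
# sizes: propose exchanging two randomly chosen particles and accept or reject
# the exchange with the Metropolis rule.
import numpy as np

rng = np.random.default_rng(0)
N, L, T = 64, 8.0, 0.5                        # particles, box side, temperature (toy values)
pos = rng.uniform(0.0, L, size=(N, 2))        # random positions in a periodic box
dia = rng.uniform(0.8, 1.2, size=N)           # polydisperse particle diameters

def particle_energy(i, positions, diameters):
    """Soft repulsive energy of particle i with all other particles (toy potential)."""
    d = positions - positions[i]
    d -= L * np.round(d / L)                  # nearest image under periodic boundaries
    r = np.sqrt((d**2).sum(axis=1))
    r[i] = np.inf                             # exclude self-interaction
    sigma = 0.5 * (diameters + diameters[i])  # additive particle sizes
    return np.sum((sigma / r) ** 12)

def swap_move(positions, diameters):
    """Propose exchanging two particles; accept with the Metropolis criterion."""
    i, j = rng.choice(N, size=2, replace=False)
    e_old = particle_energy(i, positions, diameters) + particle_energy(j, positions, diameters)
    diameters[i], diameters[j] = diameters[j], diameters[i]   # swapping sizes = swapping particles
    e_new = particle_energy(i, positions, diameters) + particle_energy(j, positions, diameters)
    delta = e_new - e_old
    if delta > 0 and rng.random() >= np.exp(-delta / T):      # rejected: undo the swap
        diameters[i], diameters[j] = diameters[j], diameters[i]
        return False
    return True

accepted = sum(swap_move(pos, dia) for _ in range(1000))
print(f"accepted {accepted} of 1000 proposed swaps")
```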

In a paper that’s under review for publication in Physical Review Letters, Berthier, Scalliet, Reichman and two co-authors reported that the more stable the simulated glass, the fewer two-level systems it has. As with Hellman’s and Ramos’ heat capacity measurements, the computer simulations suggest that two-level systems—competing configurations of groups of molecules—are the source of glass’s entropy. The fewer of these alternative states there are, the more stability and long-range order an amorphous solid has, and the closer it is to ideal.

The theorists Vassiliy Lubchenko of the University of Houston and Peter Wolynes of Rice University suggested back in 2007 that ideal glass should have no two-level systems. “I’m quite happy with Berthier’s result,” Wolynes said by email.

The Amber Anomaly

But then there’s that amber.

Ramos and his collaborators published their comparisons of old and “rejuvenated” samples of the yellow glass in Physical Review Letters in 2014. They found that the 110-million-year-old amber had grown about 2 percent denser, in line with ultra-stable glass. This suggested that the amber had indeed stabilized over time, as little groups of molecules slipped, one by one, into lower-energy arrangements.

But when the Madrid team cooled the ancient glass nearly to absolute zero and measured its heat capacity, the results told a different story. The aged amber had the same high heat capacity as new amber—and all other ordinary glass. Its molecules seemed to be tunneling between just as many two-level systems as usual.

Why didn’t the number of two-level systems drop over time as the amber stabilized and became denser? The findings don’t fit.

“I really like the experiments on amber, but making an amber glass is sort of a messy process,” said Ediger, the originator of the vapor-deposition method. “It’s basically tree sap that over time chemically changes and solidifies as well as ages.” He thinks impurities in the Spanish amber might have sullied the heat capacity measurements.

Researchers plan to do further experiments on amber, as well as lab-made and simulated glass, hoping to uncover more details of two-level systems and to get closer to the putative ideal state. Reichman noted that it may never be possible to prove its existence with complete certainty. “Maybe one day we will know, at least on the computer, how to precisely pack particles in a way that would be the ideal glass we are looking for,” he said. “But we would then have to wait a very long time—too long—to see if it remains stable.”

Editor’s Note: Ludovic Berthier and David Reichman have received funding from the Simons Foundation, which also supports Quanta, an editorially independent publication. Simons Foundation funding plays no role in their coverage.



Read more: https://www.wired.com/story/physicists-chip-away-at-a-mystery-why-does-glass-exist/

What Is a Particle?

Given that everything in the universe reduces to particles, a question presents itself: What are particles?

The easy answer quickly shows itself to be unsatisfying. Namely, electrons, photons, quarks and other “fundamental” particles supposedly lack substructure or physical extent. “We basically think of a particle as a pointlike object,” said Mary Gaillard, a particle theorist at the University of California, Berkeley who predicted the masses of two types of quarks in the 1970s. And yet particles have distinct traits, such as charge and mass. How can a dimensionless point bear weight?

“We say they are ‘fundamental,’” said Xiao-Gang Wen, a theoretical physicist at the Massachusetts Institute of Technology. “But that’s just a [way to say] to students, ‘Don’t ask! I don’t know the answer. It’s fundamental; don’t ask anymore.’”

With any other object, the object’s properties depend on its physical makeup — ultimately, its constituent particles. But those particles’ properties derive not from constituents of their own but from mathematical patterns. As points of contact between mathematics and reality, particles straddle both worlds with an uncertain footing.

When I recently asked a dozen particle physicists what a particle is, they gave remarkably diverse descriptions. They emphasized that their answers don’t conflict so much as capture different facets of the truth. They also described two major research thrusts in fundamental physics today that are pursuing a more satisfying, all-encompassing picture of particles.

“‘What is a particle?’ indeed is a very interesting question,” said Wen. “Nowadays there is progress in this direction. I should not say there’s a unified point of view, but there’s several different points of view, and all look interesting.”

The quest to understand nature’s fundamental building blocks began with the ancient Greek philosopher Democritus’s assertion that such things exist. Two millennia later, Isaac Newton and Christiaan Huygens debated whether light is made of particles or waves. The discovery of quantum mechanics some 250 years after that proved both luminaries right: Light comes in individual packets of energy known as photons, which behave as both particles and waves.

Wave-particle duality turned out to be a symptom of a deep strangeness. Quantum mechanics revealed to its discoverers in the 1920s that photons and other quantum objects are best described not as particles or waves but by abstract “wave functions” — evolving mathematical functions that indicate a particle’s probability of having various properties. The wave function representing an electron, say, is spatially spread out, so that the electron has possible locations rather than a definite one. But somehow, strangely, when you stick a detector in the scene and measure the electron’s location, its wave function suddenly “collapses” to a point, and the particle clicks at that position in the detector.

A particle is thus a collapsed wave function. But what in the world does that mean? Why does observation cause a distended mathematical function to collapse and a concrete particle to appear? And what decides the measurement’s outcome? Nearly a century later, physicists have no idea.

The picture soon got even stranger. In the 1930s, physicists realized that the wave functions of many individual photons collectively behave like a single wave propagating through conjoined electric and magnetic fields — exactly the classical picture of light discovered in the 19th century by James Clerk Maxwell. These researchers found that they could “quantize” classical field theory, restricting fields so that they could only oscillate in discrete amounts known as the “quanta” of the fields. In addition to photons — the quanta of light — Paul Dirac and others discovered that the idea could be extrapolated to electrons and everything else: According to quantum field theory, particles are excitations of quantum fields that fill all of space.

In positing the existence of these more fundamental fields, quantum field theory stripped particles of status, characterizing them as mere bits of energy that set fields sloshing. Yet despite the ontological baggage of omnipresent fields, quantum field theory became the lingua franca of particle physics because it allows researchers to calculate with extreme precision what happens when particles interact — particle interactions being, at base level, the way the world is put together.

As physicists discovered more of nature’s particles and their associated fields, a parallel perspective developed. The properties of these particles and fields appeared to follow numerical patterns. By extending these patterns, physicists were able to predict the existence of more particles. “Once you encode the patterns you observe into the mathematics, the mathematics is predictive; it tells you more things you might observe,” explained Helen Quinn, an emeritus particle physicist at Stanford University.

The patterns also suggested a more abstract and potentially deeper perspective on what particles actually are.

Mark Van Raamsdonk remembers the beginning of the first class he took on quantum field theory as a Princeton University graduate student. The professor came in, looked out at the students, and asked, “What is a particle?”

“An irreducible representation of the Poincaré group,” a precocious classmate answered.

Taking the apparently correct definition to be general knowledge, the professor skipped any explanation and launched into an inscrutable series of lectures. “That entire semester I didn’t learn a single thing from the course,” said Van Raamsdonk, who’s now a respected theoretical physicist at the University of British Columbia.

It’s the standard deep answer of people in the know: Particles are “representations” of “symmetry groups,” which are sets of transformations that can be done to objects.

Take, for example, an equilateral triangle. Rotating it by 120 or 240 degrees, or reflecting it across the line from each corner to the midpoint of the opposite side, or doing nothing, all leave the triangle looking the same as before. These six symmetries form a group. The group can be expressed as a set of mathematical matrices — arrays of numbers that, when multiplied by coordinates of an equilateral triangle, return the same coordinates. Such a set of matrices is a “representation” of the symmetry group.
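
This can be checked directly. The short NumPy script below, included here as an illustration, builds the six 2-by-2 matrices for the triangle’s rotations and reflections and verifies that each one maps the triangle’s vertices back onto the same set of vertices, which is what it means for the matrices to represent the symmetry group.

```python
# The six symmetries of an equilateral triangle, written as 2x2 matrices, form a
# representation: each matrix sends the triangle's vertices back onto themselves.
import numpy as np

def rotation(theta):
    """Rotation of the plane by angle theta about the origin."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

def reflection(theta):
    """Reflection across the line through the origin at angle theta."""
    return np.array([[np.cos(2 * theta),  np.sin(2 * theta)],
                     [np.sin(2 * theta), -np.cos(2 * theta)]])

# Vertices of an equilateral triangle centered on the origin.
angles = np.pi / 2 + np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])
vertices = np.column_stack([np.cos(angles), np.sin(angles)])

# Identity, two rotations, and the three reflections through each vertex.
group = [rotation(0.0), rotation(2 * np.pi / 3), rotation(4 * np.pi / 3),
         reflection(angles[0]), reflection(angles[1]), reflection(angles[2])]

for g in group:
    mapped = vertices @ g.T                  # apply the symmetry to every vertex
    same_set = all(np.any(np.all(np.isclose(mapped[k], vertices), axis=1)) for k in range(3))
    print(same_set)                          # prints True six times
```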

Similarly, electrons, photons and other fundamental particles are objects that essentially stay the same when acted on by a certain group. Namely, particles are representations of the Poincaré group: the group of 10 ways of moving around in the space-time continuum. Objects can shift in three spatial directions or shift in time; they can also rotate in three directions or receive a boost in any of those directions. In 1939, the mathematical physicist Eugene Wigner identified particles as the simplest possible objects that can be shifted, rotated and boosted.

For an object to transform nicely under these 10 Poincaré transformations, he realized, it must have a certain minimal set of properties, and particles have these properties. One is energy. Deep down, energy is simply the property that stays the same when the object shifts in time. Momentum is the property that stays the same as the object moves through space.

A third property is needed to specify how particles change under combinations of spatial rotations and boosts (which, together, are rotations in space-time). This key property is “spin.” At the time of Wigner’s work, physicists already knew particles have spin, a kind of intrinsic angular momentum that determines many aspects of particle behavior, including whether they act like matter (as electrons do) or as a force (like photons). Wigner showed that, deep down, “spin is just a label that particles have because the world has rotations,” said Nima Arkani-Hamed, a particle physicist at the Institute for Advanced Study in Princeton, New Jersey.

Different representations of the Poincaré group are particles with different numbers of spin labels, or degrees of freedom that are affected by rotations. There are, for example, particles with three spin degrees of freedom. These particles rotate in the same way as familiar 3D objects. All matter particles, meanwhile, have two spin degrees of freedom, nicknamed “spin-up” and “spin-down,” which rotate differently. If you rotate an electron by 360 degrees, its state will be inverted, just as an arrow, when moved around a 2D Möbius strip, comes back around pointing the opposite way.
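
The 360-degree sign flip is easy to verify numerically. In the illustrative snippet below, the 2-by-2 rotation acting on a spin-up/spin-down state comes back as minus the identity after a full turn, while an ordinary 3-by-3 rotation of a familiar 3D object returns to the identity.

```python
# Rotating by 2*pi: a spin-1/2 state picks up a minus sign, an ordinary 3D vector does not.
import numpy as np

theta = 2 * np.pi
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)

# Spin-1/2 rotation about the z-axis: U = exp(-i * theta * sigma_z / 2)
U = np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * sigma_z
print(np.round(U.real, 10))      # minus the identity: the state is multiplied by -1

# Ordinary rotation of 3D space about the z-axis by the same angle
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
print(np.round(R, 10))           # the identity: the object comes back unchanged
```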

Elementary particles with one and five spin labels also appear in nature. Only a representation of the Poincaré group with four spin labels seems to be missing.

The correspondence between elementary particles and representations is so neat that some physicists — like Van Raamsdonk’s professor — equate them. Others see this as a conflation. “The representation is not the particle; the representation is a way of describing certain properties of the particle,” said Sheldon Glashow, a Nobel Prize-winning particle theorist and professor emeritus at Harvard University and Boston University. “Let us not confuse the two.”

Whether there’s a distinction or not, the relationship between particle physics and group theory grew both richer and more complicated over the course of the 20th century. The discoveries showed that elementary particles don’t just have the minimum set of labels needed to navigate space-time; they have extra, somewhat superfluous labels as well.

Particles with the same energy, momentum and spin behave identically under the 10 Poincaré transformations, but they can differ in other ways. For instance, they can carry different amounts of electric charge. As “the whole particle zoo” (as Quinn put it) was discovered in the mid-20th century, additional distinctions between particles were revealed, necessitating new labels dubbed “color” and “flavor.”

Just as particles are representations of the Poincaré group, theorists came to understand that their extra properties reflect additional ways they can be transformed. But instead of shifting objects in space-time, these new transformations are more abstract; they change particles’ “internal” states, for lack of a better word.

Take the property known as color: In the 1960s, physicists ascertained that quarks, the elementary constituents of atomic nuclei, exist in a probabilistic combination of three possible states, which they nicknamed “red,” “green” and “blue.” These states have nothing to do with actual color or any other perceivable property. It’s the number of labels that matters: Quarks, with their three labels, are representations of a group of transformations called SU(3) consisting of the infinitely many ways of mathematically mixing the three labels.

While particles with color are representations of the symmetry group SU(3), particles with the internal properties of flavor and electric charge are representations of the symmetry groups SU(2) and U(1), respectively. Thus, the Standard Model of particle physics — the quantum field theory of all known elementary particles and their interactions — is often said to represent the symmetry group SU(3) × SU(2) × U(1), consisting of all combinations of the symmetry operations in the three subgroups. (That particles also transform under the Poincaré group is apparently too obvious to even mention.)

The Standard Model reigns half a century after its development. Yet it’s an incomplete description of the universe. Crucially, it’s missing the force of gravity, which quantum field theory can’t fully handle. Albert Einstein’s general theory of relativity separately describes gravity as curves in the space-time fabric. Moreover, the Standard Model’s three-part SU(3) × SU(2) × U(1) structure raises questions. To wit: “Where the hell did all this come from?” as Dimitri Nanopoulos put it. “OK, suppose it works,” continued Nanopoulos, a particle physicist at Texas A&M University who was active during the Standard Model’s early days. “But what is this thing? It cannot be three groups there; I mean, ‘God’ is better than this — God in quotation marks.”

In the 1970s, Glashow, Nanopoulos and others tried fitting the SU(3), SU(2) and U(1) symmetries inside a single, larger group of transformations, the idea being that particles were representations of a single symmetry group at the beginning of the universe. (As symmetries broke, complications set in.) The most natural candidate for such a “grand unified theory” was a symmetry group called SU(5), but experiments soon ruled out that option. Other, less appealing possibilities remain in play.

Researchers placed even higher hopes in string theory: the idea that if you zoomed in enough on particles, you would see not points but one-dimensional vibrating strings. You would also see six extra spatial dimensions, which string theory says are curled up at every point in our familiar 4D space-time fabric. The geometry of the small dimensions determines the properties of strings and thus the macroscopic world. “Internal” symmetries of particles, like the SU(3) operations that transform quarks’ color, obtain physical meaning: These operations map, in the string picture, onto rotations in the small spatial dimensions, just as spin reflects rotations in the large dimensions. “Geometry gives you symmetry gives you particles, and all of this goes together,” Nanopoulos said.

However, if any strings or extra dimensions exist, they’re too small to be detected experimentally. In their absence, other ideas have blossomed. Over the past decade, two approaches in particular have attracted the brightest minds in contemporary fundamental physics. Both approaches refresh the picture of particles yet again.

The first of these research efforts goes by the slogan “it-from-qubit,” which expresses the hypothesis that everything in the universe — all particles, as well as the space-time fabric those particles stud like blueberries in a muffin — arises out of quantum bits of information, or qubits. Qubits are probabilistic combinations of two states, labeled 0 and 1. (Qubits can be stored in physical systems just as bits can be stored in transistors, but you can think of them more abstractly, as information itself.) When there are multiple qubits, their possible states can get tangled up, so that each one’s state depends on the states of all the others. Through these contingencies, a small number of entangled qubits can encode a huge amount of information.
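
A small NumPy example makes the bookkeeping concrete. It illustrates the general idea of qubits and entanglement, not any particular it-from-qubit construction: two entangled qubits are described by a single four-amplitude state, neither qubit has a definite state on its own, and the joint description grows exponentially with the number of qubits.

```python
# Two entangled qubits: one shared state vector, no independent state for either qubit.
import numpy as np

zero = np.array([1, 0], dtype=complex)        # |0>
one  = np.array([0, 1], dtype=complex)        # |1>

# A maximally entangled pair (a Bell state): (|00> + |11>) / sqrt(2)
bell = (np.kron(zero, zero) + np.kron(one, one)) / np.sqrt(2)
print(bell)                                   # four amplitudes describe the pair jointly

# The state of the first qubit alone (tracing out the second) is completely undetermined:
rho = np.outer(bell, bell.conj()).reshape(2, 2, 2, 2)
rho_first = np.trace(rho, axis1=1, axis2=3)
print(rho_first)                              # 0.5 * identity: each qubit's state depends on the other

# The joint description grows fast: n entangled qubits need 2**n amplitudes.
print([2 ** n for n in (1, 2, 10, 20)])
```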

In the it-from-qubit conception of the universe, if you want to understand what particles are, you first have to understand space-time. In 2010, Van Raamsdonk, a member of the it-from-qubit camp, wrote an influential essay boldly declaring what various calculations suggested. He argued that entangled qubits might stitch together the space-time fabric.

Calculations, thought experiments and toy examples going back decades suggest that space-time has “holographic” properties: It’s possible to encode all information about a region of space-time in degrees of freedom in one fewer dimension — often on the region’s surface. “In the last 10 years, we’ve learned a lot more about how this encoding works,” Van Raamsdonk said.

What’s most surprising and fascinating to physicists about this holographic relationship is that space-time is bendy because it includes gravity. But the lower-dimensional system that encodes information about that bendy space-time is a purely quantum system that lacks any sense of curvature, gravity or even geometry. It can be thought of as a system of entangled qubits.

Under the it-from-qubit hypothesis, the properties of space-time — its robustness, its symmetries — essentially come from the way 0s and 1s are braided together. The long-standing quest for a quantum description of gravity becomes a matter of identifying the qubit entanglement pattern that encodes the particular kind of space-time fabric found in the actual universe.

So far, researchers know much more about how this all works in toy universes that have negatively curved, saddle-shaped space-time — mostly because they’re relatively easy to work with. Our universe, by contrast, is positively curved. But researchers have found, to their surprise, that anytime negatively curved space-time pops up like a hologram, particles come along for the ride. That is, whenever a system of qubits holographically encodes a region of space-time, there are always qubit entanglement patterns that correspond to localized bits of energy floating in the higher-dimensional world.

Importantly, algebraic operations on the qubits, when translated in terms of space-time, “behave just like rotations acting on the particles,” Van Raamsdonk said. “You realize there’s this picture being encoded by this nongravitational quantum system. And somehow in that code, if you can decode it, it’s telling you that there are particles in some other space.”

The fact that holographic space-time always has these particle states is “actually one of the most important things that distinguishes these holographic systems from other quantum systems,” he said. “I think nobody really understands the reason why holographic models have this property.”

It’s tempting to picture qubits having some sort of spatial arrangement that creates the holographic universe, just as familiar holograms project from spatial patterns. But in fact, the qubits’ relationships and interdependencies might be far more abstract, with no real physical arrangement at all. “You don’t need to talk about these 0s and 1s living in a particular space,” said Netta Engelhardt, a physicist at MIT who recently won a New Horizons in Physics Prize for calculating the quantum information content of black holes. “You can talk about the abstract existence of 0s and 1s, and how an operator might act on 0s and 1s, and these are all much more abstract mathematical relations.”

There’s clearly more to understand. But if the it-from-qubit picture is right, then particles are holograms, just like space-time. Their truest definition is in terms of qubits.

Another camp of researchers who call themselves “amplitudeologists” seeks to return the spotlight to the particles themselves.

These researchers argue that quantum field theory, the current lingua franca of particle physics, tells far too convoluted a story. Physicists use quantum field theory to calculate essential formulas called scattering amplitudes, some of the most basic calculable features of reality. When particles collide, amplitudes indicate how the particles might morph or scatter. Particle interactions make the world, so the way physicists test their description of the world is to compare their scattering amplitude formulas to the outcomes of particle collisions in experiments such as Europe’s Large Hadron Collider.

Normally, to calculate amplitudes, physicists systematically account for all possible ways colliding ripples might reverberate through the quantum fields that pervade the universe before they produce stable particles that fly away from the crash site. Strangely, calculations involving hundreds of pages of algebra often yield, in the end, a one-line formula. Amplitudeologists argue that the field picture is obscuring simpler mathematical patterns. Arkani-Hamed, a leader of the effort, called quantum fields “a convenient fiction.” “In physics very often we slip into a mistake of reifying a formalism,” he said. “We start slipping into the language of saying that it’s the quantum fields that are real, and particles are excitations. We talk about virtual particles, all this stuff — but it doesn’t go click, click, click in anyone’s detector.”

Amplitudeologists believe that a mathematically simpler and truer picture of particle interactions exists.

In some cases, they’re finding that Wigner’s group theory perspective on particles can be extended to describe interactions as well, without any of the usual rigmarole of quantum fields.

Lance Dixon, a prominent amplitudeologist at the SLAC National Accelerator Laboratory, explained that researchers have used the Poincaré rotations studied by Wigner to directly deduce the “three-point amplitude” — a formula describing one particle splitting into two. They’ve also shown that three-point amplitudes serve as the building blocks of four- and higher-point amplitudes involving more and more particles. These dynamical interactions seemingly build from the ground up out of basic symmetries.

“The coolest thing,” according to Dixon, is that scattering amplitudes involving gravitons, the putative carriers of gravity, turn out to be the square of amplitudes involving gluons, the particles that glue together quarks. We associate gravity with the fabric of space-time itself, while gluons move around in space-time. Yet gravitons and gluons seemingly spring from the same symmetries. “That’s very weird and of course not really understood in quantitative detail because the pictures are so different,” Dixon said.

Arkani-Hamed and his collaborators, meanwhile, have found entirely new mathematical apparatuses that jump straight to the answer, such as the amplituhedron — a geometric object that encodes particle scattering amplitudes in its volume. Gone is the picture of particles colliding in space-time and setting off chain reactions of cause and effect. “We’re trying to find these objects out there in the Platonic world of ideas that give us [causal] properties automatically,” Arkani-Hamed said. “Then we can say, ‘Aha, now I can see why this picture can be interpreted as evolution.’”

It-from-qubit and amplitudeology approach the big questions so differently that it’s hard to say whether the two pictures complement or contradict each other. “At the end of the day, quantum gravity has some mathematical structure, and we’re all chipping away at it,” Engelhardt said. She added that a quantum theory of gravity and space-time will ultimately be needed to answer the question, “What are the fundamental building blocks of the universe on its most fundamental scales?” — a more sophisticated phrasing of my question, “What is a particle?”

In the meantime, Engelhardt said, “‘We don’t know’ is the short answer.”


1: “At the moment that I detect it, it collapses the wave and becomes a particle. … [The particle is] the collapsed wave function.”
—Dimitri Nanopoulos

2: “What is a particle from a physicist’s point of view? It’s a quantum excitation of a field. We write particle physics in a math called quantum field theory. In that, there are a bunch of different fields; each field has different properties and excitations, and they are different depending on the properties, and those excitations we can think of as a particle.”
—Helen Quinn

3: “Particles are at a very minimum described by irreducible representations of the Poincaré group.”
— Sheldon Glashow

“Ever since the fundamental paper of Wigner on the irreducible representations of the Poincaré group, it has been a (perhaps implicit) definition in physics that an elementary particle ‘is’ an irreducible representation of the group, G, of ‘symmetries of nature.’”
—Yuval Ne’eman and Shlomo Sternberg

4: “Particles have so many layers.”
—Xiao-Gang Wen

5: “What we think of as elementary particles, instead they might be vibrating strings.”
—Mary Gaillard

6: “Every particle is a quantized wave. The wave is a deformation of the qubit ocean.”
—Xiao-Gang Wen

7: “Particles are what we measure in detectors. … We start slipping into the language of saying that it’s the quantum fields that are real, and particles are excitations. We talk about virtual particles, all this stuff — but it doesn’t go click, click, click in anyone’s detector.”
—Nima Arkani-Hamed


Editor’s note: Mark Van Raamsdonk receives funding from the Simons Foundation, which also funds this editorially independent magazine. Simons Foundation funding decisions have no influence on our coverage.

Source: https://www.quantamagazine.org/what-is-a-particle-20201112/

Physicists Pin Down Nuclear Reaction From Moments After the Big Bang

In a secluded laboratory buried under a mountain in Italy, physicists have re-created a nuclear reaction that happened between two and three minutes after the Big Bang.

Their measurement of the reaction rate, published today in Nature, nails down the most uncertain factor in a sequence of steps known as Big Bang nucleosynthesis that forged the universe’s first atomic nuclei.

Researchers are “over the moon” about the result, according to Ryan Cooke, an astrophysicist at Durham University in the United Kingdom who wasn’t involved in the work. “There’ll be a lot of people who are interested from particle physics, nuclear physics, cosmology and astronomy,” he said.

The reaction involves deuterium, a form of hydrogen consisting of one proton and one neutron that fused within the cosmos’s first three minutes. Most of the deuterium quickly fused into heavier, stabler elements like helium and lithium. But some survived to the present day. “You have a few grams of deuterium in your body, which comes all the way from the Big Bang,” said Brian Fields, an astrophysicist at the University of Illinois, Urbana-Champaign.

The precise amount of deuterium that remains reveals key details about those first minutes, including the density of protons and neutrons and how quickly they became separated by cosmic expansion. Deuterium is “a special super-witness of that epoch,” said Carlo Gustavino, a nuclear astrophysicist at Italy’s National Institute for Nuclear Physics.

But physicists can only deduce those pieces of information if they know the rate at which deuterium fuses with a proton to form the isotope helium-3. It’s this rate that the new measurement by the Laboratory for Underground Nuclear Astrophysics (LUNA) collaboration has pinned down.

The Earliest Probe of the Universe

Deuterium’s creation was the first step in Big Bang nucleosynthesis, a sequence of nuclear reactions that occurred when the cosmos was a super hot but rapidly cooling soup of protons and neutrons.

Starting in the 1940s, nuclear physicists developed a series of interlocking equations describing how various isotopes of hydrogen, helium and lithium assembled as nuclei merged and absorbed protons and neutrons. (Heavier elements were forged much later inside stars.) Researchers have since tested most aspects of the equations by replicating the primordial nuclear reactions in laboratories.

In doing so, they made radical discoveries. The calculations offered some of the first evidence of dark matter in the 1970s. Big Bang nucleosynthesis also enabled physicists to predict the number of different types of neutrinos, which helped drive cosmic expansion.

But for almost a decade now, uncertainty about deuterium’s likelihood of absorbing a proton and turning into helium-3 has fogged up the picture of the universe’s first minutes. Most importantly, the uncertainty has prevented physicists from comparing that picture to what the cosmos looked like 380,000 years later, when the universe cooled enough for electrons to begin orbiting atomic nuclei. This process released radiation called the cosmic microwave background that provides a snapshot of the universe at the time.

Cosmologists want to check whether the density of the cosmos changed from one period to the other as expected based on their models of cosmic evolution. If the two pictures disagree, “that would be a really, really important thing to understand,” Cooke said. Solutions to stubbornly persistent cosmological problems — like the nature of dark matter — could be found in this gap, as could the first signs of exotic new particles. “A lot can happen between a minute or two after the Big Bang and several hundred thousand years after the Big Bang,” Cooke said.

But the all-important deuterium reaction rate that would allow researchers to make these kinds of comparisons is very difficult to measure. “You’re simulating the Big Bang in the lab in a controlled way,” said Fields.

Physicists last attempted a measurement in 1997. Since then, observations of the cosmic microwave background have become increasingly precise, putting pressure on physicists who study Big Bang nucleosynthesis to match that precision — and so allow a comparison of the two epochs.

In 2014, Cooke and co-authors precisely measured the abundance of deuterium in the universe through observations of faraway gas clouds. But to translate this abundance into a precise prediction of the primordial matter density, they needed a much better measure of the deuterium reaction rate.

Confounding the situation further, a purely theoretical estimate for the rate, published in 2016, disagreed with the 1997 laboratory measurement.

“It was a very confused scenario,” said Gustavino, who is a member of the LUNA collaboration. “At this point I became pushy with the collaboration … because LUNA could measure this reaction exactly.”

A Rare Combination

Part of the challenge in measuring how readily deuterium fuses with a proton is that, under laboratory conditions, the reaction doesn’t happen very often. Every second, the LUNA experiment fires 100 trillion protons at a target of deuterium. Only a few of them fuse each day.

Adding to the difficulty, cosmic rays that constantly rain down on Earth’s surface can mimic the signal produced by deuterium reactions. “For this reason, we’re in an underground laboratory where, thanks to the rock cover, we can benefit from cosmic silence,” said Francesca Cavanna, who led LUNA’s data collection and analysis along with Sandra Zavatarelli.

Over three years, the scientists took turns spending weeklong shifts in a lab deep inside Italy’s Gran Sasso mountain. “It’s exciting because you really feel you are inside the science,” Cavanna said. As they gradually collected data, pressure mounted from the wider physics community. “There was a lot of anticipation; there was a lot of expectation,” said Marialuisa Aliotta, a team member.

As it turns out, the team’s newly published measurement may come as a disappointment to cosmologists looking for cracks in their model of how the universe works.

Small Steps

The measured rate — which says how quickly deuterium tends to fuse with a proton to form helium-3 across the range of temperatures found in the epoch of primordial nucleosynthesis — landed between the 2016 theoretical prediction and the 1997 measurement. More importantly, when physicists feed this rate into the equations of Big Bang nucleosynthesis, they predict a primordial matter density and a cosmic expansion rate that closely square with observations of the cosmic microwave background 380,000 years later.

“It essentially tells us that the standard model of cosmology is, so far, quite right,” said Aliotta.

That in itself squeezes the gap that next-generation models of the cosmos must fit into. Experts say some theories of dark matter could even be ruled out by the results.

That’s less exciting than evidence in favor of exotic new cosmic ingredients or effects. But in this era of precision astronomy, Aliotta said, scientists proceed “by making small steps.” Fields agreed: “We are constantly trying to do better on the prediction side, the measurement side and the observation side.”

On the horizon is the next generation of cosmic microwave background measurements. Meanwhile, with deuterium’s behavior now better understood, uncertainties in other primordial nuclear reactions and elemental abundances become more pressing.

A longstanding “fly in the Big Bang nucleosynthesis ointment,” according to Fields, is that the matter density calculated from deuterium and the cosmic microwave background predicts that there should be three times more lithium in the universe than we actually observe.

“There are still lots of unknowns,” said Aliotta. “And what the future will bring is going to be very interesting.”

Source: https://www.quantamagazine.org/physicists-pin-down-nuclear-reaction-from-moments-after-the-big-bang-20201111/

Computer Scientists Achieve ‘Crown Jewel’ of Cryptography

In 2018, Aayush Jain, a graduate student at the University of California, Los Angeles, traveled to Japan to give a talk about a powerful cryptographic tool he and his colleagues were developing. As he detailed the team’s approach to indistinguishability obfuscation (iO for short), one audience member raised his hand in bewilderment.

“But I thought iO doesn’t exist?” he said.

At the time, such skepticism was widespread. Indistinguishability obfuscation, if it could be built, would be able to hide not just collections of data but the inner workings of a computer program itself, creating a sort of cryptographic master tool from which nearly every other cryptographic protocol could be built. It is “one cryptographic primitive to rule them all,” said Boaz Barak of Harvard University. But to many computer scientists, this very power made iO seem too good to be true.

Computer scientists set forth candidate versions of iO starting in 2013. But the intense excitement these constructions generated gradually fizzled out, as other researchers figured out how to break their security. As the attacks piled up, “you could see a lot of negative vibes,” said Yuval Ishai of the Technion in Haifa, Israel. Researchers wondered, he said, “Who will win: the makers or the breakers?”

“There were the people who were the zealots, and they believed in [iO] and kept working on it,” said Shafi Goldwasser, director of the Simons Institute for the Theory of Computing at the University of California, Berkeley. But as the years went by, she said, “there was less and less of those people.”

Now, Jain — together with Huijia Lin of the University of Washington and Amit Sahai, Jain’s adviser at UCLA — has planted a flag for the makers. In a paper posted online on August 18, the three researchers show for the first time how to build indistinguishability obfuscation using only “standard” security assumptions.

All cryptographic protocols rest on assumptions — some, such as the famous RSA algorithm, depend on the widely held belief that standard computers will never be able to quickly factor the product of two large prime numbers. A cryptographic protocol is only as secure as its assumptions, and previous attempts at iO were built on untested and ultimately shaky foundations. The new protocol, by contrast, depends on security assumptions that have been widely used and studied in the past.

“Barring a really surprising development, these assumptions will stand,” Ishai said.

While the protocol is far from ready to be deployed in real-world applications, from a theoretical standpoint it provides an instant way to build an array of cryptographic tools that were previously out of reach. For instance, it enables the creation of “deniable” encryption, in which you can plausibly convince an attacker that you sent an entirely different message from the one you really sent, and “functional” encryption, in which you can give chosen users different levels of access to perform computations using your data.

The new result should definitively silence the iO skeptics, Ishai said. “Now there will no longer be any doubts about the existence of indistinguishability obfuscation,” he said. “It seems like a happy end.”

The Crown Jewel

For decades, computer scientists wondered if there is any secure, all-encompassing way to obfuscate computer programs, allowing people to use them without figuring out their internal secrets. Program obfuscation would enable a host of useful applications: For instance, you could use an obfuscated program to delegate particular tasks within your bank or email accounts to other individuals, without worrying that someone could use the program in a way it wasn’t intended for or read off your account passwords (unless the program was designed to output them).

But so far, all attempts to build practical obfuscators have failed. “The ones that have come out in real life are ludicrously broken, … typically within hours of release into the wild,” Sahai said. At best, they offer attackers a speed bump, he said.

In 2001, bad news came on the theoretical front too: The strongest form of obfuscation is impossible. Called black box obfuscation, it demands that attackers should be able to learn absolutely nothing about the program except what they can observe by using the program and seeing what it outputs. Some programs, Barak, Sahai and five other researchers showed, reveal their secrets so determinedly that they are impossible to obfuscate fully.

These programs, however, were specially concocted to defy obfuscation and bear little resemblance to real-world programs. So computer scientists hoped there might be some other kind of obfuscation that was weak enough to be feasible but strong enough to hide the kinds of secrets people actually care about. The same researchers who showed that black box obfuscation is impossible proposed one possible alternative in their paper: indistinguishability obfuscation.

On the face of it, iO doesn’t seem like an especially useful concept. Instead of requiring that a program’s secrets be hidden, it simply requires that the program be obfuscated enough that if you have two different programs that perform the same task, you can’t distinguish which obfuscated version came from which original version.

But iO is stronger than it sounds. For example, suppose you have a program that carries out some task related to your bank account, but the program contains your unencrypted password, making you vulnerable to anyone who gets hold of the program. Then — as long as there is some program out there that could perform the same task while keeping your password hidden — an indistinguishability obfuscator will be strong enough to successfully mask the password. After all, if it didn’t, then if you put both programs through the obfuscator, you’d be able to tell which obfuscated version came from your original program.

Over the years, computer scientists have shown that you can use iO as the basis for almost every cryptographic protocol you could imagine (except for black box obfuscation). That includes both classic cryptographic tasks like public key encryption (which is used in online transactions) and dazzling newcomers like fully homomorphic encryption, in which a cloud computer can compute on encrypted data without learning anything about it. And it includes cryptographic protocols no one knew how to build, like deniable or functional encryption.

“It really is kind of the crown jewel” of cryptographic protocols, said Rafael Pass of Cornell University. “Once you achieve this, we can get essentially everything.”

In 2013, Sahai and five co-authors proposed an iO protocol that splits up a program into something like jigsaw puzzle pieces, then uses cryptographic objects called multilinear maps to garble the individual pieces. If the pieces are put together correctly, the garbling cancels out and the program functions as intended, but each individual piece looks meaningless. The result was hailed as a breakthrough and prompted dozens of follow-up papers. But within a few years, other researchers showed that the multilinear maps used in the garbling process were not secure. Other iO candidates came along and were broken in their turn.

“There was some worry that maybe this is just a mirage, maybe iO is simply impossible to get,” Barak said. People started to feel, he said, that “maybe this whole enterprise is doomed.”

Hiding Less to Hide More

In 2016, Lin started exploring whether it might be possible to get around the weaknesses of multilinear maps by simply demanding less of them. Multilinear maps are essentially just secretive ways of computing with polynomials — mathematical expressions made up of sums and products of numbers and variables, like 3xy + 2yz². These maps, Jain said, entail something akin to a polynomial calculating machine connected to a system of secret lockers containing the values of the variables. A user who drops in a polynomial that the machine accepts gets to look inside one final locker to find out whether the hidden values make the polynomial evaluate to 0.
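
As a purely conceptual toy, with no cryptographic security at all and no claim to be an actual multilinear map, the sketch below mimics the interface just described: secret values sit in “lockers,” a user submits a polynomial, and the machine reveals only whether that polynomial evaluates to zero on the hidden values. The hard part of real multilinear maps, which this toy ignores entirely, is doing the same thing while keeping every intermediate value hidden from the user.

```python
# Naive stand-in for the "polynomial calculating machine with secret lockers."
# It answers exactly one question about a submitted polynomial: is it zero on
# the hidden values? (Nothing here is secure; it only illustrates the interface.)
import secrets

PRIME = 2**61 - 1                       # arithmetic is done modulo a prime

class LockerMachine:
    def __init__(self, num_vars):
        # Hidden values of the variables; the user never sees these directly.
        self._values = [secrets.randbelow(PRIME) for _ in range(num_vars)]

    def is_zero(self, polynomial):
        """Evaluate a user-supplied polynomial on the hidden values, reveal one bit."""
        return polynomial(*self._values) % PRIME == 0

machine = LockerMachine(num_vars=3)
print(machine.is_zero(lambda x, y, z: 3*x*y + 2*y*z*z))    # random hidden values: almost surely False
print(machine.is_zero(lambda x, y, z: (x + y) - (y + x)))  # identically zero polynomial: True
```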

For the scheme to be secure, the user shouldn’t be able to figure out anything about the contents of the other lockers or the numbers that were generated along the way. “We would like that to be true,” Sahai said. But in all the candidate multilinear maps people could come up with, the process of opening the final locker revealed information about the calculation that was supposed to stay hidden.

Since the proposed multilinear map machines all had security weaknesses, Lin wondered if there was a way to build iO using machines that don’t have to compute as many different kinds of polynomials (and therefore might be easier to build securely). Four years ago, she figured out how to build iO using only multilinear maps that compute polynomials whose “degree” is 30 or less (meaning that every term is a product of at most 30 variables, counting repeats). Over the next couple of years, she, Sahai and other researchers gradually figured out how to bring the degree down even lower, until they were able to show how to build iO using just degree-3 multilinear maps.
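
In the toy term representation from the earlier sketch, that degree bookkeeping is simply a count of variables per term, repeats included:

```python
# Degree, as described above, in the same toy term representation: each term is
# (coefficient, [variables]), and its degree counts variables with repeats,
# so the term 2yz^2 has degree 3.

def max_degree(terms):
    return max(len(variables) for _, variables in terms)

print(max_degree([(3, ["x", "y"]), (2, ["y", "z", "z"])]))   # 3: needs at least a degree-3 map
print(max_degree([(5, ["x", "y"]), (-1, ["z", "z"])]))       # 2: within reach of a degree-2 map
```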

On paper, it looked like a vast improvement. There was just one problem: From a security standpoint, “degree 3 was actually as broken” as the machines that can handle polynomials of every degree, Jain said.

The only multilinear maps researchers knew how to build securely were those that computed polynomials of degree 2 or less. Lin joined forces with Jain and Sahai to try to figure out how to construct iO from degree-2 multilinear maps. But “we were stuck for a very, very long time,” Lin said.

“It was kind of a gloomy time,” Sahai recalled. “There’s a graveyard filled with all the ideas that didn’t work.”

Eventually, though — together with Prabhanjan Ananth of the University of California, Santa Barbara and Christian Matt of the blockchain project Concordium — they came up with an idea for a sort of compromise: Since iO seemed to need degree-3 maps, but computer scientists only had secure constructions for degree-2 maps, what if there was something in between — a sort of degree-2.5 map?

The researchers envisioned a system in which some of the lockers have clear windows, so the user can see the values contained within. This frees the machine from having to protect too much hidden information. To strike a balance between the power of higher-degree multilinear maps and the security of degree-2 maps, the machine is allowed to compute with polynomials of degree higher than 2, but there’s a restriction: The polynomial must be degree 2 on the hidden variables. “We’re trying to not hide as much” as in general multilinear maps, Lin said. The researchers were able to show that these hybrid locker systems can be constructed securely.
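
In the same toy representation, the compromise amounts to a simple per-term rule (a sketch under the assumption that each variable is tagged as public or hidden): a term may involve any number of public, window-visible variables, but at most two hidden ones.

```python
# A sketch of the "degree-2.5" restriction in the toy term representation used
# above: any number of public (window-visible) variables per term, but at most
# two hidden ones. Variable names and values here are hypothetical.

def respects_degree_2_5(terms, hidden_vars):
    return all(
        sum(1 for v in variables if v in hidden_vars) <= 2
        for _, variables in terms
    )

hidden = {"x", "y"}                            # the locked lockers
p_ok  = [(7, ["a", "b", "c", "x", "y"])]       # 7abcxy: degree 5 overall, degree 2 in hidden variables
p_bad = [(1, ["x", "x", "y"])]                 # x^2 y: degree 3 in hidden variables

print(respects_degree_2_5(p_ok, hidden))       # True
print(respects_degree_2_5(p_bad, hidden))      # False
```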

But to get from these less powerful multilinear maps to iO, the team needed one last ingredient: a new kind of pseudo-randomness generator, something that expands a string of random bits into a longer string that still looks random enough to fool computers. That’s what Jain, Lin and Sahai have figured out how to do in their new paper. “There was a wonderful last month or so where everything came together in a flurry of phone calls,” Sahai said.
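
The interface of such a generator is easy to sketch, even though the construction in the paper is far more delicate: take a short truly random seed and stretch it into a longer string. The version below uses a hash in counter mode purely to show the shape of the object; it is not the structured, low-degree generator the new work actually requires, and no security is claimed.

```python
import hashlib

# Illustrative only: a seed-stretching function with the *interface* of a
# pseudorandom generator. This is not the construction from the paper and
# carries no security claim.

def toy_prg(seed: bytes, out_len: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < out_len:
        out += hashlib.sha256(seed + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:out_len]

stretched = toy_prg(b"16 random bytes!", 64)   # 16 bytes in, 64 bytes out
print(len(stretched))                          # 64
```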

The result is an iO protocol that finally avoids the security weaknesses of multilinear maps. “Their work looks absolutely beautiful,” Pass said.

The scheme’s security rests on four mathematical assumptions that have been widely used in other cryptographic contexts. And even the assumption that has been studied the least, called the “learning parity with noise” assumption, is related to a problem that has been studied since the 1950s.
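
For a sense of what that assumption is about: in learning parity with noise, an attacker sees random bit vectors together with their parities against a secret vector, except that each parity bit is flipped with some small probability, and the task is to recover the secret. A toy sample generator, with parameters chosen only for illustration:

```python
import random

# Toy generator of learning-parity-with-noise samples, for illustration only.
# Each sample is (a, <a, s> XOR e) over GF(2), where a is a random bit vector,
# s is the secret, and the noise bit e is 1 with small probability.

def lpn_samples(secret, num_samples, noise_rate=0.125):
    samples = []
    for _ in range(num_samples):
        a = [random.randint(0, 1) for _ in secret]
        parity = sum(ai * si for ai, si in zip(a, secret)) % 2
        e = 1 if random.random() < noise_rate else 0
        samples.append((a, parity ^ e))
    return samples

secret = [random.randint(0, 1) for _ in range(16)]
print(lpn_samples(secret, 2))   # without the noise, linear algebra would recover the secret
```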

There is likely only one thing that could break the new scheme: a quantum computer, if a full-power one is ever built. One of the four assumptions is vulnerable to quantum attacks, but over the past few months, in three separate papers, Pass and other researchers have laid out a different potential route to iO that might be secure even against quantum attacks. These versions of iO rest on less established security assumptions than the ones Jain, Lin and Sahai used, several researchers said. But it is possible, Barak said, that the two approaches could be combined in the coming years to create a version of iO that rests on standard security assumptions and also resists quantum attacks.

Jain, Lin and Sahai’s construction will likely entice new researchers into the field to work on making the scheme more practical and to develop new approaches, Ishai predicted. “Once you know that something is possible in principle, it makes it psychologically much easier to work in the area,” he said.

Computer scientists still have much work to do before the protocol (or some variation on it) can be used in real-world applications. But that is par for the course, researchers said. “There’s a lot of notions in cryptography that, when they first came out, people were saying, ‘This is just pure theory, [it] has no relevance to practice,’” Pass said. “Then 10 or 20 years later, Google is implementing these things.”

The road from a theoretical breakthrough to a practical protocol can be a long one, Barak said. “But you could imagine,” he said, “that maybe 50 years from now the crypto textbooks will basically say, ‘OK, here is a very simple construction of iO, and from that we’ll now derive all of the rest of crypto.’”

Correction: November 10, 2020

A previous mobile version of the “Degrees of Obfuscation” graphic was incorrectly labeled. Degree-2.5 maps can be built securely and can be used to make iO, as correctly labeled on the desktop version.

Source: https://www.quantamagazine.org/computer-scientists-achieve-crown-jewel-of-cryptography-20201110/

Quantum

Inside the Secret Math Society Known Simply as Nicolas Bourbaki

Antoine Chambert-Loir’s initiation into one of math’s oldest secret societies began with a phone call. “They told me Bourbaki would like me to come and see if I’d work with them,” he said.

Chambert-Loir accepted, and for a week in September 2001 he spent seven hours a day reading math texts out loud and discussing them with the members of the group, whose identities are unknown to the rest of the world.

He was never officially asked to join, but on the last day he was given a long-term task — to finish a manuscript the group had been working on since 1975. When Chambert-Loir later received a report on the meeting he saw that he was listed as a “membrifié,” indicating he was part of the group. Ever since, he’s helped advance an almost Sisyphean tradition of math writing that predates World War II.

The group is known as “Nicolas Bourbaki” and is usually referred to as just Bourbaki. The name is a collective pseudonym borrowed from a real-life 19th-century French general who never had anything to do with mathematics. It’s unclear why they chose the name, though it may have originated in a prank played by the founding mathematicians as undergraduates at the École Normale Supérieure (ENS) in Paris.

“There was some custom to play pranks on first-years, and one of those pranks was to pretend that some General Bourbaki would arrive and visit the school and maybe give a totally obscure talk about mathematics,” said Chambert-Loir, a mathematician at the University of Paris who has acted as a spokesperson for the group and is its one publicly identified member.

Bourbaki began in 1934, the initiative of a small number of recent ENS alumni. Many of them were among the best mathematicians of their generation. But as they surveyed their field, they saw a problem. The exact nature of that problem is also the subject of myth.

In one telling, Bourbaki was a response to the loss of a generation of mathematicians to World War I, after which the group’s founders wanted to find a way to preserve what math knowledge remained in Europe.

“There is a story that young French mathematicians were not seen as a government priority during [the] First World War and many were sent to war and died there,” said Sébastien Gouëzel of the University of Rennes, who is not publicly identified with the group but, like many mathematicians, is familiar with its activities.

In a more prosaic but probably also more likely rendering, the original Bourbaki members were simply dissatisfied with the field’s textbooks and wanted to create better ones. “I think at the beginning it was just for that very concrete matter,” Chambert-Loir said.

Whatever their motivation, the founders of Bourbaki began to write. Yet instead of writing textbooks, they ended up creating something completely novel: free-standing books that explained advanced mathematics without reference to any outside sources.

The first Bourbaki text was meant to be about differential geometry, which reflected the tastes of some of the group’s early members, luminaries like Henri Cartan and André Weil. But the project quickly expanded, since it’s hard to explain one mathematical idea without involving many others.

“They realized that if they wanted to do this cleanly, they needed [ideas from other areas], and Bourbaki grew and grew into something huge,” Gouëzel said.

The most distinctive feature of Bourbaki was the writing style: rigorous, formal and stripped to the logical studs. The books spelled out mathematical theorems from the ground up without skipping any steps — exhibiting an unusual degree of thoroughness among mathematicians.

“In Bourbaki, essentially, there are no gaps,” Gouëzel said. “They are super precise.”

But that precision comes at a cost: Bourbaki books can be hard to read. They don’t offer a contextualizing narrative that explains where concepts come from, instead letting the ideas speak for themselves.

“Essentially, you give no comment about what you do or why you do it,” Chambert-Loir said. “You state stuff and prove it, and that’s it.”

Bourbaki joined its distinctive writing style to a distinctive writing process. Once a member produces a draft, the group gathers in person, reads it aloud and suggests notes for revision. They repeat these steps until there is unanimous agreement that the text is ready for publication. It’s a long process that can take a decade or more to complete.

This focus on collaboration is also where the group’s insistence on anonymity comes from. They keep membership secret to reinforce the notion that the books are a pure expression of mathematics as it is, not an individual’s take on the topic. It’s also an ethic that can seem out of step with aspects of modern math culture.

“It’s sort of hard to imagine a group of young academics right now, people without permanent lifelong positions, devoting a huge amount of time to something they’ll never get credit for,” said Lillian Pierce of Duke University. “This group took this on in a sort of selfless way.”

Bourbaki quickly had an impact on mathematics. Some of the first books, published in the 1940s and ’50s, invented vocabulary that is now standard — terms like “injective,” “surjective” and “bijective,” which are used to describe properties of a map between two sets.

This was the first of two main periods in which Bourbaki was especially influential. The second came in the 1970s when the group published a series of books on Lie groups and Lie algebras that is “unanimously considered a masterpiece,” Chambert-Loir said.

Today, the influence of the group’s books has waned. It’s best known instead for the Bourbaki Seminars, a series of high-profile lectures on the most important recent results in math, held in Paris. When Bourbaki invited Pierce to give one in June 2017, she recognized that the talk would take a lot of time to prepare, but she also knew that due to the seminar’s status in the field, “it’s an invitation you have to accept.”

Even while organizing (and attending) the public lectures, members of Bourbaki don’t disclose their identities. Pierce recalls that during her time in Paris she went out to lunch “with a number of people who it seemed fair to assume were part of Bourbaki, but in the spirit of things I didn’t try to hear their last names.”

According to Pierce, the anonymity is maintained only in a “spirit of fun” these days. “There is no rigor to the secrecy,” she said.

Though its seminars are now more influential than its books, Bourbaki — which has about 10 members currently — is still producing texts according to its founding principles. And Chambert-Loir, 49, is nearing the end of his time with the group, since custom has it that members step down when they turn 50.

Even as he prepares to leave, the project he was handed at the end of his first week remains unfinished. “For 15 years I patiently typed it into LaTeX, made corrections, then we read everything aloud year after year,” he said.

It could easily be half a century from the time the work began to when it's completed. That's a long time by modern publishing standards, where papers land online even in draft form. But then again, maybe it isn't so long when the product is meant to stand forever.

Correction: November 9, 2020

Lillian Pierce gave her Bourbaki Seminar talk in June 2017, not July 2017. The article has been revised accordingly.

Source: https://www.quantamagazine.org/inside-the-secret-math-society-known-as-nicolas-bourbaki-20201109/
