The shape of MIP* = RE
There’s a famous parable about a group of blind men encountering an elephant for the very first time. The first blind man, who had his hand on the elephant’s side, said that it was like an enormous wall. The second blind man, wrapping his arms around the elephant’s leg, exclaimed that surely it was a gigantic tree trunk. The third, feeling the elephant’s tail, declared that it must be a thick rope. Vehement disagreement ensued, but after a while the blind men eventually came to realize that, while each person was partially correct, there was much more to the elephant than initially thought.
Last month, Zhengfeng, Anand, Thomas, John and I posted MIP* = RE to arXiv. The paper feels very much like the elephant of the fable — and not just because of the number of pages! To a computer scientist, the paper is ostensibly about the complexity of interactive proofs. To a quantum physicist, it is talking about mathematical models of quantum entanglement. To the mathematician, there is a claimed resolution to a longstanding problem in operator algebras. Like the blind men of the parable, each is feeling a small part of a new phenomenon. How do the wall, the tree trunk, and the rope all fit together?
I’ll try to trace the outline of the elephant: it starts with a mystery in quantum complexity theory, curves through the mathematical foundations of quantum mechanics, and arrives at a deep question about operator algebras.
In 2004, computer scientists Cleve, Hoyer, Toner, and Watrous were thinking about a funny thing called nonlocal games. A nonlocal game involves three parties: two cooperating players named Alice and Bob, and someone called the verifier. The verifier samples a pair of random questions (x, y) and sends x to Alice (who responds with answer a), and y to Bob (who responds with answer b). The verifier then uses some function of (x, y, a, b) that tells her whether the players win, based on their questions and answers.
All three parties know the rules of the game before it starts, and Alice and Bob’s goal is to maximize their probability of winning the game. The players aren’t allowed to communicate with each other during the game, so it’s a nontrivial task for them to coordinate an optimal strategy (i.e., how they should individually respond to the verifier’s questions) before the game starts.
The most famous example of a nonlocal game is the CHSH game (which has made several appearances on this blog already): in this game, the verifier sends a uniformly random bit x to Alice (who responds with a bit a) and a uniformly random bit y to Bob (who responds with a bit b). The players win if a ⊕ b = x ∧ y (in other words, the sum of their answer bits is equal to the product of the input bits modulo 2).
What is Alice’s and Bob’s maximum winning probability? Well, it depends on what type of strategy they use. If they use a strategy that can be modeled by classical physics, then their winning probability cannot exceed 75% (we call this the classical value of CHSH). On the other hand, if they use a strategy based on quantum physics, Alice and Bob can do better by sharing two quantum bits (qubits) that are entangled. During the game each player measures their own qubit (where the measurement depends on their received question) to obtain answers that win the CHSH game with probability cos²(π/8) ≈ 85% (we call this the quantum value of CHSH). So even though the entangled qubits don’t allow Alice and Bob to communicate with each other, entanglement gives them a way to win with higher probability! In technical terms, their responses are more correlated than what is possible classically.
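Both values can be checked with a short, self-contained Python sketch. The classical value comes from brute force over all 16 deterministic strategies; the quantum value uses the textbook entangled strategy, in which each player measures their half of an EPR pair at a question-dependent angle, so that the outcomes agree with probability cos² of the angle difference. (The function names and the specific angles below are the standard textbook choices, not anything from the paper.)

```python
import math
from itertools import product

# Winning predicate for CHSH: answers a, b win on questions x, y
# iff a XOR b == x AND y.
def wins(x, y, a, b):
    return (a ^ b) == (x & y)

# Classical value: enumerate all deterministic strategies. Alice's
# strategy is a pair (a0, a1) = her answers to questions 0 and 1;
# likewise for Bob. 16 strategies in total, uniform questions.
def chsh_classical_value():
    best = 0.0
    for alice in product([0, 1], repeat=2):
        for bob in product([0, 1], repeat=2):
            p = sum(wins(x, y, alice[x], bob[y])
                    for x in (0, 1) for y in (0, 1)) / 4
            best = max(best, p)
    return best

# Quantum value: with a shared EPR pair measured in rotated bases,
# the outcomes agree with probability cos^2(alpha - beta), where
# alpha, beta are the two measurement angles.
def chsh_quantum_value():
    alice_angle = [0.0, math.pi / 4]
    bob_angle = [math.pi / 8, -math.pi / 8]
    total = 0.0
    for x in (0, 1):
        for y in (0, 1):
            p_agree = math.cos(alice_angle[x] - bob_angle[y]) ** 2
            # Win on agreement unless x = y = 1, where answers must differ.
            total += p_agree if x & y == 0 else 1 - p_agree
    return total / 4

print(chsh_classical_value())  # 0.75
print(chsh_quantum_value())    # ~0.8536, i.e. cos^2(pi/8)
```

Running this shows the gap concretely: no assignment of fixed answers beats 3/4, while the entangled strategy reaches roughly 0.854.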
The CHSH game comes from physics, and was originally formulated not as a game involving Alice and Bob, but rather as an experiment involving two spatially separated devices to test whether stronger-than-classical correlations exist in nature. These experiments are known as Bell tests, named after John Bell. In 1964, he proved that correlations from quantum entanglement cannot be explained by any “local hidden variable theory” — in other words, a classical theory of physics.^{1} He then showed that a Bell test, like the CHSH game, gives a simple statistical test for the presence of nonlocal correlations between separated systems. Since the 1960s, numerous Bell tests have been conducted experimentally, and the verdict is clear: nature does not behave classically.
Cleve, Hoyer, Toner and Watrous noticed that nonlocal games/Bell tests can be viewed as a kind of multi-prover interactive proof. In complexity theory, interactive proofs are protocols where some provers are trying to convince a verifier of a solution to a long, difficult computation, and the verifier is trying to efficiently determine if the solution is correct. In a Bell test, one can think of the provers as instead trying to convince the verifier of a physical statement: that they possess quantum entanglement.
With the computational lens trained firmly on nonlocal games, it then becomes natural to ask about the complexity of nonlocal games. Specifically, what is the complexity of approximating the optimal winning probability in a given nonlocal game G? In complexity-speak, this is phrased as a question about characterizing the class MIP* (pronounced “MIP star”). This is also a well-motivated question for an experimentalist conducting Bell tests: at the very least, they’d want to determine (a) whether quantum players can do better than classical players, and (b) what the best possible quantum strategy can achieve.
Studying this question in the case of classical players led to some of the most important results in complexity theory, such as MIP = NEXP and the PCP Theorem. Indeed, the PCP Theorem says that it is NP-hard to approximate the classical value of a nonlocal game (i.e. the maximum winning probability of classical players) to within constant additive accuracy. Thus, assuming that P is not equal to NP, we shouldn’t expect a polynomial-time algorithm for this. However it is easy to see that there is a “brute force” algorithm for this problem: by taking exponential time to enumerate over all possible deterministic player strategies, one can exactly compute the classical value of nonlocal games.
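The “brute force” enumeration works for any finite game, not just CHSH. A minimal sketch (the function name is mine, and questions are assumed uniformly distributed for simplicity):

```python
from itertools import product

# Exhaustively compute the classical value of a finite nonlocal game,
# given question sets X, Y, answer sets A, B, and a winning predicate V.
def classical_value(X, Y, A, B, V):
    best = 0.0
    # A deterministic strategy is a function X -> A for Alice and
    # Y -> B for Bob; enumerate all |A|^|X| * |B|^|Y| of them --
    # exponential in the size of the game, but finite.
    for fa in product(A, repeat=len(X)):
        for fb in product(B, repeat=len(Y)):
            p = sum(V(x, y, fa[i], fb[j])
                    for i, x in enumerate(X)
                    for j, y in enumerate(Y)) / (len(X) * len(Y))
            best = max(best, p)
    return best

# Sanity check on CHSH: the classical value is 3/4.
chsh = lambda x, y, a, b: (a ^ b) == (x & y)
print(classical_value([0, 1], [0, 1], [0, 1], [0, 1], chsh))  # 0.75
```

The key point for what follows: the search space here is finite, which is exactly what fails for entangled strategies.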
When considering games with entangled players, however, it’s not even clear if there’s a similar “brute force” algorithm that solves this in any amount of time — forget polynomial time; even if we allow ourselves exponential, doubly-exponential, Ackermann function amounts of time, we still don’t know how to solve this quantum value approximation problem. The problem is that there is no known upper bound on the amount of entanglement that is needed for players to play a nonlocal game. For example, for a given game G, does an optimal quantum strategy require one qubit, ten qubits, or vastly more qubits of entanglement? Without any upper bound, a “brute force” algorithm wouldn’t know how big of a quantum strategy to search for — it would keep enumerating over bigger and bigger strategies in hopes of finding a better one.
Thus approximating the quantum value may not even be solvable in principle! But could it really be uncomputable? Perhaps we just haven’t found the right mathematical tool to give an upper bound on the dimension — maybe we just need to come up with some clever variant of, say, Johnson-Lindenstrauss or some other dimension reduction technique.^{2}
In 2008, there was promising progress towards an algorithmic solution for this problem. Two papers [DLTW, NPA] (appearing on arXiv on the same day!) showed that an algorithm based on semidefinite programming can produce a sequence of numbers that converge to something called the commuting operator value of a nonlocal game.^{3} If one could show that the commuting operator value and the quantum value of a nonlocal game coincide, then this would yield an algorithm for solving this approximation problem!
Asking whether the commuting operator and quantum values are the same, however, immediately brings us to the precipice of some deep mysteries in mathematical physics and operator algebras, far removed from computer science and complexity theory. This takes us to the next part of the elephant.
The mystery about the quantum value versus the commuting operator value of nonlocal games has to do with two different ways of modeling Alice and Bob in quantum mechanics. As I mentioned earlier, quantum physics predicts that the maximum winning probability in, say, the CHSH game when Alice and Bob share entanglement is approximately 85%. As with any physical theory, these predictions are made using some mathematical framework — formal rules for modeling physical experiments like the CHSH game.
In a typical quantum information theory textbook, players in the CHSH game are usually modelled in the following way: Alice’s device is described by a state space H_A (all the possible states the device could be in), a particular state ψ_A from H_A, and a set of measurement operators M_A (operations that can be performed by the device). It’s not necessary to know what these things are formally; the important feature is that these three things are enough to make any prediction about Alice’s device — when treated in isolation, at least. Similarly, Bob’s device can be described using its own state space H_B, state ψ_B, and measurement operators M_B.
In the CHSH game though, one wants to make predictions about Alice’s and Bob’s devices together. Here the textbooks say that Alice and Bob are jointly described by the tensor product formalism, which is a natural mathematical way of “putting separate spaces together”. Their state space is denoted by H_A ⊗ H_B. The joint state ψ_AB describing the devices comes from this tensor product space. When Alice and Bob independently make their local measurements, this is described by a measurement operator formed from the tensor product of operators from M_A and M_B. The strange correlations of quantum mechanics arise when their joint state is entangled, i.e. it cannot be written as a well-defined state on Alice’s side combined with a well-defined state on Bob’s side (even though the state space itself is two independent spaces combined together!)
The tensor product model works well; it satisfies natural properties you’d want from the CHSH experiment, such as the constraint that Alice and Bob can’t instantaneously signal to each other. Furthermore, predictions made in this model match up very accurately with experimental results!
This is not the whole story, though. The tensor product formalism works very well in nonrelativistic quantum mechanics, where things move slowly and energies are low. To describe more extreme physical scenarios — like when particles are being smashed together at near-light speeds in the Large Hadron Collider — physicists turn to the more powerful quantum field theory. However, the notion of spatiotemporal separation in relativistic settings gets especially tricky. In particular, when trying to describe quantum mechanical systems, it is no longer evident how to assign Alice and Bob their own independent state spaces, and thus it’s not clear how to put relativistic Alice and Bob in the tensor product framework!
In quantum field theory, locality is instead described using the commuting operator model. Instead of assigning Alice and Bob their own individual state spaces and then tensoring them together to get a combined space, the commuting operator model stipulates that there is just a single monolithic space H for both Alice and Bob. Their joint state is described using a vector ψ from H, and Alice and Bob’s measurement operators both act on H. The constraint that they can’t communicate is captured by the fact that Alice’s measurement operators commute with Bob’s operators. In other words, the order in which the players perform their measurements on the system does not matter: Alice measuring before Bob, or Bob measuring before Alice, both yield the same statistical outcomes. Locality is enforced through commutativity.
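It is easy to check concretely that tensor product strategies automatically satisfy the commuting operator constraint: an Alice operator of the form A ⊗ I and a Bob operator of the form I ⊗ B always commute. A tiny Python sketch with arbitrary 2×2 examples (the Pauli matrices here are just illustrative choices):

```python
# Kronecker (tensor) product and matrix product for small matrices,
# implemented with plain nested lists.
def kron(M, N):
    return [[M[i][j] * N[k][l]
             for j in range(len(M[0])) for l in range(len(N[0]))]
            for i in range(len(M)) for k in range(len(N))]

def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(len(N)))
             for j in range(len(N[0]))]
            for i in range(len(M))]

I2 = [[1, 0], [0, 1]]
A = [[0, 1], [1, 0]]    # Pauli X: a possible measurement for Alice
B = [[1, 0], [0, -1]]   # Pauli Z: a possible measurement for Bob

A_joint = kron(A, I2)   # Alice acts only on her tensor factor
B_joint = kron(I2, B)   # Bob acts only on his tensor factor

# Order of measurement doesn't matter: the operators commute.
print(matmul(A_joint, B_joint) == matmul(B_joint, A_joint))  # True
```

This is why the tensor product model sits inside the commuting operator model; the open question is whether anything genuinely new lies outside it.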
The commuting operator framework contains the tensor product framework as a special case^{4}, so it’s more general. Could the commuting operator model allow for correlations that can’t be captured by the tensor product model, even approximately^{5}^{6}? This question is known as Tsirelson’s problem, named after the late mathematician Boris Tsirelson.
There is a simple but useful way to phrase this question using nonlocal games. What we call the “quantum value” of a nonlocal game G (denoted by ω*(G)) really refers to the supremum of success probabilities over tensor product strategies for Alice and Bob. If they use strategies from the more general commuting operator model, then we call their maximum success probability the commuting operator value of G (denoted by ω^co(G)). Since tensor product strategies are a special case of commuting operator strategies, we have the relation ω*(G) ≤ ω^co(G) for all nonlocal games G.
Could there be a nonlocal game whose tensor product value is different from its commuting operator value? With tongue in cheek: is there a game that Alice and Bob could succeed at better if they were using quantum entanglement at near-light speeds? It is difficult to find even a plausible candidate game for which the quantum and commuting operator values may differ. The CHSH game, for example, has the same quantum and commuting operator value; this was proved by Tsirelson.
If the tensor product and the commuting operator models are the same (i.e., the “positive” resolution of Tsirelson’s problem), then as I mentioned earlier, this has unexpected ramifications: there would be an algorithm for approximating the quantum value of nonlocal games.
How does this algorithm work? It comes in two parts: a procedure to search from below, and one to search from above. The “search from below” algorithm computes a sequence of numbers α_1, α_2, α_3, … where α_d is (approximately) the best winning probability when Alice and Bob use a d-qubit tensor product strategy. For fixed d, the number α_d can be computed by enumerating over (a discretization of) the space of all possible d-qubit strategies. This takes a doubly-exponential amount of time in d — but at least this is still a finite time! This naive “brute force” algorithm will slowly plod along, computing a sequence of better and better winning probabilities. We’re guaranteed that in the limit as d goes to infinity, the sequence converges to the quantum value ω*(G). Of course the issue is that the “search from below” procedure never knows how close it is to the true quantum value.
This is where the “search from above” comes in. This is an algorithm that computes a different sequence of numbers β_1, β_2, β_3, … where each β_d is an upper bound on the commuting operator value ω^co(G), and as d goes to infinity, the sequence eventually converges to ω^co(G). Furthermore, each β_d can be computed by a technique known as semidefinite optimization; this was shown by the two papers I mentioned.
Let’s put the pieces together. If the quantum and commuting operator values of a game G coincide (i.e. ω*(G) = ω^co(G)), then we can run the “search from below” and “search from above” procedures in parallel, interleaving the computation of the α_d and β_d. Since both are guaranteed to converge to the quantum value, at some point the upper bound β_d will come within some ε of the lower bound α_d, and thus we would have homed in on (an approximation of) ω*(G). There we have it: an algorithm to approximate the quantum value of games.
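The interleaving itself is only a few lines. In this Python sketch, `lower_bound` and `upper_bound` are hypothetical stand-ins for the brute-force strategy search and the level-d semidefinite relaxation; for illustration both converge to a made-up “true value” of 0.85, so the loop is guaranteed to terminate under the assumption that the two limits coincide.

```python
# Toy stand-ins for the two procedures. In reality lower_bound(d) would
# enumerate d-qubit strategies and upper_bound(d) would solve a
# semidefinite program; here both converge to a fictional value 0.85.
TRUE_VALUE = 0.85

def lower_bound(d):
    # Search from below: a non-decreasing sequence approaching the value.
    return TRUE_VALUE - 1.0 / (d + 1)

def upper_bound(d):
    # Search from above: a non-increasing sequence approaching the value.
    return TRUE_VALUE + 1.0 / (d + 1)

def approximate_value(eps):
    d = 1
    while True:
        lo, hi = lower_bound(d), upper_bound(d)
        if hi - lo <= eps:      # the two bounds have pinched together
            return (lo + hi) / 2
        d += 1                  # otherwise refine both searches

print(approximate_value(0.01))  # ~0.85 once the gap closes below 0.01
```

Note the hidden assumption that makes the loop terminate: if ω*(G) < ω^co(G) for some game, the gap `hi - lo` never closes and the procedure runs forever. That is exactly the loophole MIP* = RE exploits.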
All that remains to do, surely, is to solve Tsirelson’s problem in the affirmative (that commuting operator correlations can be approximated by tensor product correlations), and then we could put this pesky question about the quantum value to rest. Right?
At the end of the 1920s, polymath extraordinaire John von Neumann formulated the first rigorous mathematical framework for the recently developed quantum mechanics. This framework, now familiar to physicists and quantum information theorists everywhere, posits that quantum states are vectors in a Hilbert space, and measurements are linear operators acting on those spaces. It didn’t take long for von Neumann to realize that there was a much deeper theory of operators on Hilbert spaces waiting to be discovered. With Francis Murray, in the 1930s he started to develop a theory of “rings of operators” — today these are called von Neumann algebras.
The theory of operator algebras has since flourished into a rich and beautiful area of mathematics. It remains inseparable from mathematical physics, but has established deep connections with subjects such as knot theory and group theory. One of the most important goals in operator algebras has been to provide a classification of von Neumann algebras. In their series of papers on the subject, Murray and von Neumann first showed that classifying von Neumann algebras reduces to understanding their factors, the atoms out of which all von Neumann algebras are built. Then, they showed that factors of von Neumann algebras come in one of three species: type I, type II, and type III. Type I factors were completely classified by Murray and von Neumann, and they made much progress on characterizing certain type II factors. However progress stalled until the 1970s, when Alain Connes provided a classification of type III factors (work for which he would later receive the Fields Medal). In the same 1976 classification paper, Connes makes a casual remark about something called type II_1 factors^{7}:
We now construct an embedding of N into R^ω. Apparently such an embedding ought to exist for all II_1 factors.
This line, written in almost a throwaway manner, eventually came to be called “Connes’ embedding problem”: does every separable II_1 factor embed into an ultrapower of the hyperfinite II_1 factor? It seems that Connes surmises that it does (and thus this is also called “Connes’ embedding conjecture”). Since 1976, this problem has grown into a central question of operator algebras, with numerous equivalent formulations and consequences across mathematics.
In 2010, two papers (again appearing on the arXiv on the same day!) showed that the reach of Connes’ embedding conjecture extends back to the foundations of quantum mechanics. If Connes’ embedding problem has a positive answer (i.e. an embedding exists), then Tsirelson’s problem (i.e. whether commuting operator correlations can be approximated by tensor product correlations) also has a positive answer! Later it was shown by Ozawa that Connes’ embedding problem is in fact equivalent to Tsirelson’s problem.
Remember that our approach to compute the value of nonlocal games hinged on obtaining a positive answer to Tsirelson’s problem. The sequence of papers [NPA, DLTW, Fritz, JNPPSW] together show that resolving — one way or another — whether this search-from-below, search-from-above algorithm works would essentially settle Connes’ embedding conjecture. What started as a funny question at the periphery of computer science and quantum information theory has morphed into an attack on one of the central problems in operator algebras.
We’ve now ended back where we started: the complexity of nonlocal games. Let’s take a step back and try to make sense of the elephant.
Even to a complexity theorist, “MIP* = RE” may appear esoteric. The complexity classes MIP* and RE refer to a bewildering grab bag of concepts: there’s Alice, Bob, Turing machines, verifiers, interactive proofs, quantum entanglement. What is the meaning of the equality of these two classes?
First, it says that the Halting problem has an interactive proof involving quantum entangled provers. In the Halting problem, you want to decide whether a Turing machine M, if you started running it, would eventually terminate with a well-defined answer, or if it would get stuck in an infinite loop. Alan Turing showed that this problem is undecidable: there is no algorithm that can solve this problem in general. Loosely speaking, the best thing you can do is to just flick on the power switch to M, and wait to see if it eventually stops. If M gets stuck in an infinite loop — well, you’re going to be waiting forever.
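The “flick on the power switch and wait” strategy can be made concrete: the best a general procedure can do is simulate the machine for some budget of steps and report either HALTED or UNKNOWN. A toy register machine (my own invented stand-in for a Turing machine, purely for illustration):

```python
# Run a toy program for at most `budget` steps. The program halts when
# the program counter runs off the end; otherwise we can only say UNKNOWN.
def run_for(program, budget):
    pc, reg = 0, 0
    for _ in range(budget):
        if pc >= len(program):
            return "HALTED"
        op, arg = program[pc]
        if op == "add":
            reg += arg
            pc += 1
        elif op == "jump_if_pos":   # loop back while reg > 0
            pc = arg if reg > 0 else pc + 1
    return "UNKNOWN"  # still running: maybe looping forever, maybe not

halts = [("add", 1)]                      # runs off the end immediately
loops = [("add", 1), ("jump_if_pos", 0)]  # counts upward forever

print(run_for(halts, 100))  # HALTED
print(run_for(loops, 100))  # UNKNOWN
```

No finite budget ever turns that UNKNOWN into a definite answer for the looping program, which is precisely why a shortcut via interactive proofs is so startling.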
MIP* = RE shows that with the help of all-powerful Alice and Bob, a time-limited verifier can run an interactive proof to “shortcut” the waiting. Given the Turing machine M’s description (its “source code”), the verifier can efficiently compute a description of a nonlocal game G_M whose behavior reflects that of M. If M does eventually halt (which could happen after a million years), then there is a strategy for Alice and Bob that causes the verifier to accept with probability 1. In other words, ω*(G_M) = 1. If M gets stuck in an infinite loop, then no matter what strategy Alice and Bob use, the verifier always rejects with high probability, so ω*(G_M) is close to 0.
By playing this nonlocal game, the verifier can obtain statistical evidence that M is a Turing machine that eventually terminates. If the verifier plays G_M and the provers win, then the verifier should believe that it is likely that M halts. If they lose, then the verifier concludes there isn’t enough evidence that M halts^{8}. The verifier never actually runs M in this game; she has offloaded the task to Alice and Bob, who we can assume are computational gods capable of performing million-year-long computations instantly. For them, the challenge is instead to convince the verifier that if she were to wait millions of years, she would witness the termination of M. Incredibly, the amount of work put in by the verifier in the interactive proof is independent of the time it takes for M to halt!
The fact that the Halting problem has an interactive proof seems borderline absurd: if the Halting problem is unsolvable, why should we expect it to be verifiable? Although complexity theory has taught us that there can be a large gap between the complexity of verification versus search, it has always been a difference of efficiency: if solutions to a problem can be efficiently verified, then solutions can also be found (albeit at drastically higher computational cost). MIP* = RE shows that, with quantum entanglement, there can be a chasm of computability between verifying solutions and finding them.
Now let’s turn to the non-complexity consequences of MIP* = RE. The fact that we can encode the Halting problem into nonlocal games also immediately tells us that there is no algorithm whatsoever to approximate the quantum value. Suppose there was an algorithm that could approximate ω*(G). Then, using the transformation from Turing machines to nonlocal games mentioned above, we could use this algorithm to solve the Halting problem, which is impossible.
Now the dominoes start to fall. This means that, in particular, the proposed “search-from-below”/“search-from-above” algorithm cannot succeed in approximating ω*(G). There must be a game G, then, for which the quantum value is different from the commuting operator value. But this implies Tsirelson’s problem has a negative answer, and therefore Connes’ embedding conjecture is false.
We’ve only sketched the barest of outlines of this elephant, and yet it is quite challenging to hold it in the mind’s eye all at once^{9}. This story is intertwined with some of the most fundamental developments in the past century: modern quantum mechanics, operator algebras, and computability theory were birthed in the 1930s. Einstein, Podolsky and Rosen wrote their landmark paper questioning the nature of quantum entanglement in 1935, and John Bell discovered his famous test and inequality in 1964. Connes formulated his conjecture in the ’70s, Tsirelson made his contributions to the foundations of quantum mechanics in the ’80s, and about the same time computer scientists were inventing the theory of interactive proofs and probabilistically checkable proofs (PCPs).
We haven’t said anything about the proof of MIP* = RE yet (this may be the subject of future blog posts), but it is undeniably a product of complexity theory. The language of interactive proofs and Turing machines is not just convenient but necessary: at its heart MIP* = RE is the classical PCP Theorem, with the help of quantum entanglement, recursed to infinity.
What is going on in this proof? What parts of it are fundamental, and which parts are unnecessary? What is the core of it that relates to Connes’ embedding conjecture? Are there other consequences of this uncomputability result? These are questions to be explored in the coming days and months, and the answers we find will be fascinating.
Acknowledgments. Thanks to William Slofstra and Thomas Vidick for helpful feedback on this post.
Source: https://quantumfrontiers.com/2020/03/01/the-shape-of-mip-re/
Given that everything in the universe reduces to particles, a question presents itself: What are particles?
The easy answer quickly shows itself to be unsatisfying. Namely, electrons, photons, quarks and other “fundamental” particles supposedly lack substructure or physical extent. “We basically think of a particle as a pointlike object,” said Mary Gaillard, a particle theorist at the University of California, Berkeley who predicted the masses of two types of quarks in the 1970s. And yet particles have distinct traits, such as charge and mass. How can a dimensionless point bear weight?
“We say they are ‘fundamental,’” said Xiao-Gang Wen, a theoretical physicist at the Massachusetts Institute of Technology. “But that’s just a [way to say] to students, ‘Don’t ask! I don’t know the answer. It’s fundamental; don’t ask anymore.’”
With any other object, the object’s properties depend on its physical makeup — ultimately, its constituent particles. But those particles’ properties derive not from constituents of their own but from mathematical patterns. As points of contact between mathematics and reality, particles straddle both worlds with an uncertain footing.
When I recently asked a dozen particle physicists what a particle is, they gave remarkably diverse descriptions. They emphasized that their answers don’t conflict so much as capture different facets of the truth. They also described two major research thrusts in fundamental physics today that are pursuing a more satisfying, allencompassing picture of particles.
“‘What is a particle?’ indeed is a very interesting question,” said Wen. “Nowadays there is progress in this direction. I should not say there’s a unified point of view, but there’s several different points of view, and all look interesting.”
The quest to understand nature’s fundamental building blocks began with the ancient Greek philosopher Democritus’s assertion that such things exist. Two millennia later, Isaac Newton and Christiaan Huygens debated whether light is made of particles or waves. The discovery of quantum mechanics some 250 years after that proved both luminaries right: Light comes in individual packets of energy known as photons, which behave as both particles and waves.
Wave-particle duality turned out to be a symptom of a deep strangeness. Quantum mechanics revealed to its discoverers in the 1920s that photons and other quantum objects are best described not as particles or waves but by abstract “wave functions” — evolving mathematical functions that indicate a particle’s probability of having various properties. The wave function representing an electron, say, is spatially spread out, so that the electron has possible locations rather than a definite one. But somehow, strangely, when you stick a detector in the scene and measure the electron’s location, its wave function suddenly “collapses” to a point, and the particle clicks at that position in the detector.
A particle is thus a collapsed wave function. But what in the world does that mean? Why does observation cause a distended mathematical function to collapse and a concrete particle to appear? And what decides the measurement’s outcome? Nearly a century later, physicists have no idea.
The picture soon got even stranger. In the 1930s, physicists realized that the wave functions of many individual photons collectively behave like a single wave propagating through conjoined electric and magnetic fields — exactly the classical picture of light discovered in the 19th century by James Clerk Maxwell. These researchers found that they could “quantize” classical field theory, restricting fields so that they could only oscillate in discrete amounts known as the “quanta” of the fields. In addition to photons — the quanta of light — Paul Dirac and others discovered that the idea could be extrapolated to electrons and everything else: According to quantum field theory, particles are excitations of quantum fields that fill all of space.
In positing the existence of these more fundamental fields, quantum field theory stripped particles of status, characterizing them as mere bits of energy that set fields sloshing. Yet despite the ontological baggage of omnipresent fields, quantum field theory became the lingua franca of particle physics because it allows researchers to calculate with extreme precision what happens when particles interact — particle interactions being, at base level, the way the world is put together.
As physicists discovered more of nature’s particles and their associated fields, a parallel perspective developed. The properties of these particles and fields appeared to follow numerical patterns. By extending these patterns, physicists were able to predict the existence of more particles. “Once you encode the patterns you observe into the mathematics, the mathematics is predictive; it tells you more things you might observe,” explained Helen Quinn, an emeritus particle physicist at Stanford University.
The patterns also suggested a more abstract and potentially deeper perspective on what particles actually are.
Mark Van Raamsdonk remembers the beginning of the first class he took on quantum field theory as a Princeton University graduate student. The professor came in, looked out at the students, and asked, “What is a particle?”
“An irreducible representation of the Poincaré group,” a precocious classmate answered.
Taking the apparently correct definition to be general knowledge, the professor skipped any explanation and launched into an inscrutable series of lectures. “That entire semester I didn’t learn a single thing from the course,” said Van Raamsdonk, who’s now a respected theoretical physicist at the University of British Columbia.
It’s the standard deep answer of people in the know: Particles are “representations” of “symmetry groups,” which are sets of transformations that can be done to objects.
Take, for example, an equilateral triangle. Rotating it by 120 or 240 degrees, or reflecting it across the line from each corner to the midpoint of the opposite side, or doing nothing, all leave the triangle looking the same as before. These six symmetries form a group. The group can be expressed as a set of mathematical matrices — arrays of numbers that, when multiplied by coordinates of an equilateral triangle, return the same coordinates. Such a set of matrices is a “representation” of the symmetry group.
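The triangle example can be checked directly: build the six symmetry matrices and verify that composing any two of them lands back in the set (the defining property of a group). This short Python sketch uses 2×2 rotation and reflection matrices; the particular axis angles are one valid choice for a triangle with a vertex pointing up.

```python
import math

# Rotation by `deg` degrees about the origin.
def rotation(deg):
    t = math.radians(deg)
    return ((math.cos(t), -math.sin(t)), (math.sin(t), math.cos(t)))

# Reflection across the line through the origin at angle `deg`.
def reflection(deg):
    t = math.radians(deg)
    return ((math.cos(2 * t), math.sin(2 * t)),
            (math.sin(2 * t), -math.cos(2 * t)))

# The six symmetries of an equilateral triangle (vertex pointing up):
# rotations by 0, 120, 240 degrees and reflections through each vertex.
group = [rotation(d) for d in (0, 120, 240)] + \
        [reflection(d) for d in (90, 210, 330)]

def matmul(M, N):
    return tuple(tuple(sum(M[i][k] * N[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def close_to(M, N, eps=1e-9):
    return all(abs(M[i][j] - N[i][j]) < eps
               for i in range(2) for j in range(2))

# Closure: the product of any two symmetries is again one of the six.
closed = all(any(close_to(matmul(g, h), k) for k in group)
             for g in group for h in group)
print(closed)  # True
```

These six matrices are precisely a representation of the triangle’s symmetry group: matrices that, multiplied together, mirror how the symmetries compose.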
Similarly, electrons, photons and other fundamental particles are objects that essentially stay the same when acted on by a certain group. Namely, particles are representations of the Poincaré group: the group of 10 ways of moving around in the spacetime continuum. Objects can shift in three spatial directions or shift in time; they can also rotate in three directions or receive a boost in any of those directions. In 1939, the mathematical physicist Eugene Wigner identified particles as the simplest possible objects that can be shifted, rotated and boosted.
For an object to transform nicely under these 10 Poincaré transformations, he realized, it must have a certain minimal set of properties, and particles have these properties. One is energy. Deep down, energy is simply the property that stays the same when the object shifts in time. Momentum is the property that stays the same as the object moves through space.
A third property is needed to specify how particles change under combinations of spatial rotations and boosts (which, together, are rotations in spacetime). This key property is “spin.” At the time of Wigner’s work, physicists already knew particles have spin, a kind of intrinsic angular momentum that determines many aspects of particle behavior, including whether they act like matter (as electrons do) or as a force (like photons). Wigner showed that, deep down, “spin is just a label that particles have because the world has rotations,” said Nima Arkani-Hamed, a particle physicist at the Institute for Advanced Study in Princeton, New Jersey.
Different representations of the Poincaré group are particles with different numbers of spin labels, or degrees of freedom that are affected by rotations. There are, for example, particles with three spin degrees of freedom. These particles rotate in the same way as familiar 3D objects. All matter particles, meanwhile, have two spin degrees of freedom, nicknamed “spin-up” and “spin-down,” which rotate differently. If you rotate an electron by 360 degrees, its state will be inverted, just as an arrow, when moved around a 2D Möbius strip, comes back around pointing the opposite way.
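The 360-degree sign flip can be verified directly. The sketch below (my own illustration, assuming the standard spin-1/2 rotation matrix about the z-axis, U(θ) = exp(−iθσ_z/2)) shows that one full turn negates the state and only two full turns restore it:

```python
import numpy as np

# Rotation of a spin-1/2 state about the z-axis by angle theta:
# the SU(2) matrix U(theta) = exp(-i * theta * sigma_z / 2).
def spin_half_rotation(theta):
    return np.array([[np.exp(-1j * theta / 2), 0],
                     [0, np.exp(1j * theta / 2)]])

spin_up = np.array([1.0 + 0j, 0.0 + 0j])

# One full 360-degree turn does not bring the state back: it picks up
# an overall minus sign, like the arrow on the Mobius strip.
assert np.allclose(spin_half_rotation(2 * np.pi) @ spin_up, -spin_up)

# Only after 720 degrees is the original state restored exactly.
assert np.allclose(spin_half_rotation(4 * np.pi) @ spin_up, spin_up)
```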
Elementary particles with one and five spin labels also appear in nature. Only a representation of the Poincaré group with four spin labels seems to be missing.
The correspondence between elementary particles and representations is so neat that some physicists — like Van Raamsdonk’s professor — equate them. Others see this as a conflation. “The representation is not the particle; the representation is a way of describing certain properties of the particle,” said Sheldon Glashow, a Nobel Prize-winning particle theorist and professor emeritus at Harvard University and Boston University. “Let us not confuse the two.”
Whether there’s a distinction or not, the relationship between particle physics and group theory grew both richer and more complicated over the course of the 20th century. The discoveries showed that elementary particles don’t just have the minimum set of labels needed to navigate spacetime; they have extra, somewhat superfluous labels as well.
Particles with the same energy, momentum and spin behave identically under the 10 Poincaré transformations, but they can differ in other ways. For instance, they can carry different amounts of electric charge. As “the whole particle zoo” (as Quinn put it) was discovered in the mid-20th century, additional distinctions between particles were revealed, necessitating new labels dubbed “color” and “flavor.”
Just as particles are representations of the Poincaré group, theorists came to understand that their extra properties reflect additional ways they can be transformed. But instead of shifting objects in spacetime, these new transformations are more abstract; they change particles’ “internal” states, for lack of a better word.
Take the property known as color: In the 1960s, physicists ascertained that quarks, the elementary constituents of atomic nuclei, exist in a probabilistic combination of three possible states, which they nicknamed “red,” “green” and “blue.” These states have nothing to do with actual color or any other perceivable property. It’s the number of labels that matters: Quarks, with their three labels, are representations of a group of transformations called SU(3) consisting of the infinitely many ways of mathematically mixing the three labels.
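One way to see what "mixing the three labels" means is to act on a three-component color state with a sample SU(3) element. This is a hedged sketch (the particular mixing matrix and amplitudes are my own choices for illustration):

```python
import numpy as np

# One illustrative element of SU(3): a transformation that mixes the
# "red" and "green" labels while leaving "blue" alone. The full group
# contains infinitely many such mixings; this is just one of them.
def red_green_mix(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0],
                     [s,  c, 0],
                     [0,  0, 1]], dtype=complex)

U = red_green_mix(0.7)

# SU(3) matrices are unitary with determinant 1.
assert np.allclose(U.conj().T @ U, np.eye(3))
assert np.isclose(np.linalg.det(U), 1.0)

# A quark color state: amplitudes for red, green and blue.
quark = np.array([0.6, 0.0, 0.8], dtype=complex)
mixed = U @ quark

# Mixing reshuffles the amplitudes but preserves total probability,
# so the result is still a valid three-label color state.
assert np.isclose(np.linalg.norm(mixed), 1.0)
```

The point of the exercise is the one the text makes: what matters is not the labels themselves but that there are three of them, transformed into one another by the group.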
While particles with color are representations of the symmetry group SU(3), particles with the internal properties of flavor and electric charge are representations of the symmetry groups SU(2) and U(1), respectively. Thus, the Standard Model of particle physics — the quantum field theory of all known elementary particles and their interactions — is often said to represent the symmetry group SU(3) × SU(2) × U(1), consisting of all combinations of the symmetry operations in the three subgroups. (That particles also transform under the Poincaré group is apparently too obvious to even mention.)
The Standard Model reigns half a century after its development. Yet it’s an incomplete description of the universe. Crucially, it’s missing the force of gravity, which quantum field theory can’t fully handle. Albert Einstein’s general theory of relativity separately describes gravity as curves in the spacetime fabric. Moreover, the Standard Model’s three-part SU(3) × SU(2) × U(1) structure raises questions. To wit: “Where the hell did all this come from?” as Dimitri Nanopoulos put it. “OK, suppose it works,” continued Nanopoulos, a particle physicist at Texas A&M University who was active during the Standard Model’s early days. “But what is this thing? It cannot be three groups there; I mean, ‘God’ is better than this — God in quotation marks.”
In the 1970s, Glashow, Nanopoulos and others tried fitting the SU(3), SU(2) and U(1) symmetries inside a single, larger group of transformations, the idea being that particles were representations of a single symmetry group at the beginning of the universe. (As symmetries broke, complications set in.) The most natural candidate for such a “grand unified theory” was a symmetry group called SU(5), but experiments soon ruled out that option. Other, less appealing possibilities remain in play.
Researchers placed even higher hopes in string theory: the idea that if you zoomed in enough on particles, you would see not points but one-dimensional vibrating strings. You would also see six extra spatial dimensions, which string theory says are curled up at every point in our familiar 4D spacetime fabric. The geometry of the small dimensions determines the properties of strings and thus the macroscopic world. “Internal” symmetries of particles, like the SU(3) operations that transform quarks’ color, obtain physical meaning: These operations map, in the string picture, onto rotations in the small spatial dimensions, just as spin reflects rotations in the large dimensions. “Geometry gives you symmetry gives you particles, and all of this goes together,” Nanopoulos said.
However, if any strings or extra dimensions exist, they’re too small to be detected experimentally. In their absence, other ideas have blossomed. Over the past decade, two approaches in particular have attracted the brightest minds in contemporary fundamental physics. Both approaches refresh the picture of particles yet again.
The first of these research efforts goes by the slogan “it-from-qubit,” which expresses the hypothesis that everything in the universe — all particles, as well as the spacetime fabric those particles stud like blueberries in a muffin — arises out of quantum bits of information, or qubits. Qubits are probabilistic combinations of two states, labeled 0 and 1. (Qubits can be stored in physical systems just as bits can be stored in transistors, but you can think of them more abstractly, as information itself.) When there are multiple qubits, their possible states can get tangled up, so that each one’s state depends on the states of all the others. Through these contingencies, a small number of entangled qubits can encode a huge amount of information.
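The "tangled up" states described above can be written down explicitly. A minimal sketch using the simplest entangled state of two qubits (a Bell state — my choice of example, not the article's):

```python
import numpy as np

# Single-qubit basis states, labeled 0 and 1.
zero = np.array([1.0, 0.0])
one = np.array([0.0, 1.0])

# A Bell state, (|00> + |11>) / sqrt(2), over the joint basis 00, 01, 10, 11.
# Neither qubit has a definite value on its own, but the pair is
# perfectly correlated.
bell = (np.kron(zero, zero) + np.kron(one, one)) / np.sqrt(2)

# Probabilities of the four joint measurement outcomes: the "mixed"
# outcomes 01 and 10 never occur -- each qubit's state is contingent
# on the other's, exactly the dependency the text describes.
probs = np.abs(bell) ** 2
assert np.allclose(probs, [0.5, 0.0, 0.0, 0.5])

# The joint state of n qubits needs 2**n complex amplitudes -- the
# exponential room that lets a few entangled qubits encode so much.
```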
In the it-from-qubit conception of the universe, if you want to understand what particles are, you first have to understand spacetime. In 2010, Van Raamsdonk, a member of the it-from-qubit camp, wrote an influential essay boldly declaring what various calculations suggested. He argued that entangled qubits might stitch together the spacetime fabric.
Calculations, thought experiments and toy examples going back decades suggest that spacetime has “holographic” properties: It’s possible to encode all information about a region of spacetime in degrees of freedom in one fewer dimension — often on the region’s surface. “In the last 10 years, we’ve learned a lot more about how this encoding works,” Van Raamsdonk said.
What’s most surprising and fascinating to physicists about this holographic relationship is that spacetime is bendy because it includes gravity. But the lower-dimensional system that encodes information about that bendy spacetime is a purely quantum system that lacks any sense of curvature, gravity or even geometry. It can be thought of as a system of entangled qubits.
Under the it-from-qubit hypothesis, the properties of spacetime — its robustness, its symmetries — essentially come from the way 0s and 1s are braided together. The longstanding quest for a quantum description of gravity becomes a matter of identifying the qubit entanglement pattern that encodes the particular kind of spacetime fabric found in the actual universe.
So far, researchers know much more about how this all works in toy universes that have negatively curved, saddle-shaped spacetime — mostly because they’re relatively easy to work with. Our universe, by contrast, is positively curved. But researchers have found, to their surprise, that anytime negatively curved spacetime pops up like a hologram, particles come along for the ride. That is, whenever a system of qubits holographically encodes a region of spacetime, there are always qubit entanglement patterns that correspond to localized bits of energy floating in the higher-dimensional world.
Importantly, algebraic operations on the qubits, when translated in terms of spacetime, “behave just like rotations acting on the particles,” Van Raamsdonk said. “You realize there’s this picture being encoded by this non-gravitational quantum system. And somehow in that code, if you can decode it, it’s telling you that there are particles in some other space.”
The fact that holographic spacetime always has these particle states is “actually one of the most important things that distinguishes these holographic systems from other quantum systems,” he said. “I think nobody really understands the reason why holographic models have this property.”
It’s tempting to picture qubits having some sort of spatial arrangement that creates the holographic universe, just as familiar holograms project from spatial patterns. But in fact, the qubits’ relationships and interdependencies might be far more abstract, with no real physical arrangement at all. “You don’t need to talk about these 0s and 1s living in a particular space,” said Netta Engelhardt, a physicist at MIT who recently won a New Horizons in Physics Prize for calculating the quantum information content of black holes. “You can talk about the abstract existence of 0s and 1s, and how an operator might act on 0s and 1s, and these are all much more abstract mathematical relations.”
There’s clearly more to understand. But if the itfromqubit picture is right, then particles are holograms, just like spacetime. Their truest definition is in terms of qubits.
Another camp of researchers who call themselves “amplitudeologists” seeks to return the spotlight to the particles themselves.
These researchers argue that quantum field theory, the current lingua franca of particle physics, tells far too convoluted a story. Physicists use quantum field theory to calculate essential formulas called scattering amplitudes, some of the most basic calculable features of reality. When particles collide, amplitudes indicate how the particles might morph or scatter. Particle interactions make the world, so the way physicists test their description of the world is to compare their scattering amplitude formulas to the outcomes of particle collisions in experiments such as Europe’s Large Hadron Collider.
Normally, to calculate amplitudes, physicists systematically account for all possible ways colliding ripples might reverberate through the quantum fields that pervade the universe before they produce stable particles that fly away from the crash site. Strangely, calculations involving hundreds of pages of algebra often yield, in the end, a one-line formula. Amplitudeologists argue that the field picture is obscuring simpler mathematical patterns. Arkani-Hamed, a leader of the effort, called quantum fields “a convenient fiction.” “In physics very often we slip into a mistake of reifying a formalism,” he said. “We start slipping into the language of saying that it’s the quantum fields that are real, and particles are excitations. We talk about virtual particles, all this stuff — but it doesn’t go click, click, click in anyone’s detector.”
Amplitudeologists believe that a mathematically simpler and truer picture of particle interactions exists.
In some cases, they’re finding that Wigner’s group theory perspective on particles can be extended to describe interactions as well, without any of the usual rigmarole of quantum fields.
Lance Dixon, a prominent amplitudeologist at the SLAC National Accelerator Laboratory, explained that researchers have used the Poincaré rotations studied by Wigner to directly deduce the “three-point amplitude” — a formula describing one particle splitting into two. They’ve also shown that three-point amplitudes serve as the building blocks of four- and higher-point amplitudes involving more and more particles. These dynamical interactions seemingly build from the ground up out of basic symmetries.
“The coolest thing,” according to Dixon, is that scattering amplitudes involving gravitons, the putative carriers of gravity, turn out to be the square of amplitudes involving gluons, the particles that glue together quarks. We associate gravity with the fabric of spacetime itself, while gluons move around in spacetime. Yet gravitons and gluons seemingly spring from the same symmetries. “That’s very weird and of course not really understood in quantitative detail because the pictures are so different,” Dixon said.
Arkani-Hamed and his collaborators, meanwhile, have found entirely new mathematical apparatuses that jump straight to the answer, such as the amplituhedron — a geometric object that encodes particle scattering amplitudes in its volume. Gone is the picture of particles colliding in spacetime and setting off chain reactions of cause and effect. “We’re trying to find these objects out there in the Platonic world of ideas that give us [causal] properties automatically,” Arkani-Hamed said. “Then we can say, ‘Aha, now I can see why this picture can be interpreted as evolution.’”
It-from-qubit and amplitudeology approach the big questions so differently that it’s hard to say whether the two pictures complement or contradict each other. “At the end of the day, quantum gravity has some mathematical structure, and we’re all chipping away at it,” Engelhardt said. She added that a quantum theory of gravity and spacetime will ultimately be needed to answer the question, “What are the fundamental building blocks of the universe on its most fundamental scales?” — a more sophisticated phrasing of my question, “What is a particle?”
In the meantime, Engelhardt said, “‘We don’t know’ is the short answer.”
^{1}: “At the moment that I detect it, it collapses the wave and becomes a particle. … [The particle is] the collapsed wave function.”
—Dimitri Nanopoulos
^{2}: “What is a particle from a physicist’s point of view? It’s a quantum excitation of a field. We write particle physics in a math called quantum field theory. In that, there are a bunch of different fields; each field has different properties and excitations, and they are different depending on the properties, and those excitations we can think of as a particle.”
—Helen Quinn
^{3}: “Particles are at a very minimum described by irreducible representations of the Poincaré group.”
— Sheldon Glashow
“Ever since the fundamental paper of Wigner on the irreducible representations of the Poincaré group, it has been a (perhaps implicit) deﬁnition in physics that an elementary particle ‘is’ an irreducible representation of the group, G, of ‘symmetries of nature.’”
—Yuval Ne’eman and Shlomo Sternberg
^{4}: “Particles have so many layers.”
—Xiao-Gang Wen
^{5}: “What we think of as elementary particles, instead they might be vibrating strings.”
—Mary Gaillard
^{6}: “Every particle is a quantized wave. The wave is a deformation of the qubit ocean.”
—Xiao-Gang Wen
^{7}: “Particles are what we measure in detectors. … We start slipping into the language of saying that it’s the quantum fields that are real, and particles are excitations. We talk about virtual particles, all this stuff — but it doesn’t go click, click, click in anyone’s detector.”
—Nima Arkani-Hamed
Editor’s note: Mark Van Raamsdonk receives funding from the Simons Foundation, which also funds this editorially independent magazine. Simons Foundation funding decisions have no influence on our coverage. More details are available here.
Source: https://www.quantamagazine.org/whatisaparticle20201112/
Physicists Pin Down Nuclear Reaction From Moments After the Big Bang
Published November 11, 2020

In a secluded laboratory buried under a mountain in Italy, physicists have recreated a nuclear reaction that happened between two and three minutes after the Big Bang.
Their measurement of the reaction rate, published today in Nature, nails down the most uncertain factor in a sequence of steps known as Big Bang nucleosynthesis that forged the universe’s first atomic nuclei.
Researchers are “over the moon” about the result, according to Ryan Cooke, an astrophysicist at Durham University in the United Kingdom who wasn’t involved in the work. “There’ll be a lot of people who are interested from particle physics, nuclear physics, cosmology and astronomy,” he said.
The reaction involves deuterium, a form of hydrogen consisting of one proton and one neutron that fused within the cosmos’s first three minutes. Most of the deuterium quickly fused into heavier, stabler elements like helium and lithium. But some survived to the present day. “You have a few grams of deuterium in your body, which comes all the way from the Big Bang,” said Brian Fields, an astrophysicist at the University of Illinois, Urbana-Champaign.
The precise amount of deuterium that remains reveals key details about those first minutes, including the density of protons and neutrons and how quickly they became separated by cosmic expansion. Deuterium is “a special superwitness of that epoch,” said Carlo Gustavino, a nuclear astrophysicist at Italy’s National Institute for Nuclear Physics.
But physicists can only deduce those pieces of information if they know the rate at which deuterium fuses with a proton to form the isotope helium-3. It’s this rate that the new measurement by the Laboratory for Underground Nuclear Astrophysics (LUNA) collaboration has pinned down.
The Earliest Probe of the Universe
Deuterium’s creation was the first step in Big Bang nucleosynthesis, a sequence of nuclear reactions that occurred when the cosmos was a super hot but rapidly cooling soup of protons and neutrons.
Starting in the 1940s, nuclear physicists developed a series of interlocking equations describing how various isotopes of hydrogen, helium and lithium assembled as nuclei merged and absorbed protons and neutrons. (Heavier elements were forged much later inside stars.) Researchers have since tested most aspects of the equations by replicating the primordial nuclear reactions in laboratories.
In doing so, they made radical discoveries. The calculations offered some of the first evidence of dark matter in the 1970s. Big Bang nucleosynthesis also enabled physicists to predict the number of different types of neutrinos, which helped drive cosmic expansion.
But for almost a decade now, uncertainty about deuterium’s likelihood of absorbing a proton and turning into helium-3 has fogged up the picture of the universe’s first minutes. Most importantly, the uncertainty has prevented physicists from comparing that picture to what the cosmos looked like 380,000 years later, when the universe cooled enough for electrons to begin orbiting atomic nuclei. This process released radiation called the cosmic microwave background that provides a snapshot of the universe at the time.
Cosmologists want to check whether the density of the cosmos changed from one period to the other as expected based on their models of cosmic evolution. If the two pictures disagree, “that would be a really, really important thing to understand,” Cooke said. Solutions to stubbornly persistent cosmological problems — like the nature of dark matter — could be found in this gap, as could the first signs of exotic new particles. “A lot can happen between a minute or two after the Big Bang and several hundred thousand years after the Big Bang,” Cooke said.
But the all-important deuterium reaction rate that would allow researchers to make these kinds of comparisons is very difficult to measure. “You’re simulating the Big Bang in the lab in a controlled way,” said Fields.
Physicists last attempted a measurement in 1997. Since then, observations of the cosmic microwave background have become increasingly precise, putting pressure on physicists who study Big Bang nucleosynthesis to match that precision — and so allow a comparison of the two epochs.
In 2014, Cooke and coauthors precisely measured the abundance of deuterium in the universe through observations of faraway gas clouds. But to translate this abundance into a precise prediction of the primordial matter density, they needed a much better measure of the deuterium reaction rate.
Confounding the situation further, a purely theoretical estimate for the rate, published in 2016, disagreed with the 1997 laboratory measurement.
“It was a very confused scenario,” said Gustavino, who is a member of the LUNA collaboration. “At this point I became pushy with the collaboration … because LUNA could measure this reaction exactly.”
A Rare Combination
Part of the challenge in measuring how readily deuterium fuses with a proton is that, under laboratory conditions, the reaction doesn’t happen very often. Every second, the LUNA experiment fires 100 trillion protons at a target of deuterium. Only a few a day will fuse.
Adding to the difficulty, cosmic rays that constantly rain down on Earth’s surface can mimic the signal produced by deuterium reactions. “For this reason, we’re in an underground laboratory where, thanks to the rock cover, we can benefit from cosmic silence,” said Francesca Cavanna, who led LUNA’s data collection and analysis along with Sandra Zavatarelli.
Over three years, the scientists took turns spending weeklong shifts in a lab deep inside Italy’s Gran Sasso mountain. “It’s exciting because you really feel you are inside the science,” Cavanna said. As they gradually collected data, pressure mounted from the wider physics community. “There was a lot of anticipation; there was a lot of expectation,” said Marialuisa Aliotta, a team member.
As it turns out, the team’s newly published measurement may come as a disappointment to cosmologists looking for cracks in their model of how the universe works.
Small Steps
The measured rate — which says how quickly deuterium tends to fuse with a proton to form helium-3 across the range of temperatures found in the epoch of primordial nucleosynthesis — landed between the 2016 theoretical prediction and the 1997 measurement. More importantly, when physicists feed this rate into the equations of Big Bang nucleosynthesis, they predict a primordial matter density and a cosmic expansion rate that closely square with observations of the cosmic microwave background 380,000 years later.
“It essentially tells us that the standard model of cosmology is, so far, quite right,” said Aliotta.
That in itself squeezes the gap that nextgeneration models of the cosmos must fit into. Experts say some theories of dark matter could even be ruled out by the results.
That’s less exciting than evidence in favor of exotic new cosmic ingredients or effects. But in this era of precision astronomy, Aliotta said, scientists proceed “by making small steps.” Fields agreed: “We are constantly trying to do better on the prediction side, the measurement side and the observation side.”
On the horizon is the next generation of cosmic microwave background measurements. Meanwhile, with deuterium’s behavior now better understood, uncertainties in other primordial nuclear reactions and elemental abundances become more pressing.
A longstanding “fly in the Big Bang nucleosynthesis ointment,” according to Fields, is that the matter density calculated from deuterium and the cosmic microwave background predicts that there should be three times more lithium in the universe than we actually observe.
“There are still lots of unknowns,” said Aliotta. “And what the future will bring is going to be very interesting.”
Computer Scientists Achieve ‘Crown Jewel’ of Cryptography
Published November 10, 2020

In 2018, Aayush Jain, a graduate student at the University of California, Los Angeles, traveled to Japan to give a talk about a powerful cryptographic tool he and his colleagues were developing. As he detailed the team’s approach to indistinguishability obfuscation (iO for short), one audience member raised his hand in bewilderment.
“But I thought iO doesn’t exist?” he said.
At the time, such skepticism was widespread. Indistinguishability obfuscation, if it could be built, would be able to hide not just collections of data but the inner workings of a computer program itself, creating a sort of cryptographic master tool from which nearly every other cryptographic protocol could be built. It is “one cryptographic primitive to rule them all,” said Boaz Barak of Harvard University. But to many computer scientists, this very power made iO seem too good to be true.
Computer scientists set forth candidate versions of iO starting in 2013. But the intense excitement these constructions generated gradually fizzled out, as other researchers figured out how to break their security. As the attacks piled up, “you could see a lot of negative vibes,” said Yuval Ishai of the Technion in Haifa, Israel. Researchers wondered, he said, “Who will win: the makers or the breakers?”
“There were the people who were the zealots, and they believed in [iO] and kept working on it,” said Shafi Goldwasser, director of the Simons Institute for the Theory of Computing at the University of California, Berkeley. But as the years went by, she said, “there was less and less of those people.”
Now, Jain — together with Huijia Lin of the University of Washington and Amit Sahai, Jain’s adviser at UCLA — has planted a flag for the makers. In a paper posted online on August 18, the three researchers show for the first time how to build indistinguishability obfuscation using only “standard” security assumptions.
All cryptographic protocols rest on assumptions — some, such as the famous RSA algorithm, depend on the widely held belief that standard computers will never be able to quickly factor the product of two large prime numbers. A cryptographic protocol is only as secure as its assumptions, and previous attempts at iO were built on untested and ultimately shaky foundations. The new protocol, by contrast, depends on security assumptions that have been widely used and studied in the past.
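The factoring assumption behind RSA can be sketched in a few lines. This toy example uses absurdly small numbers purely for illustration (real RSA keys are thousands of bits long, and the textbook primes below offer no security whatsoever):

```python
# The public modulus n hides two primes. Decryption requires phi(n),
# which is easy to compute only if you can factor n -- that hardness
# is the "widely held belief" the text refers to.
p, q = 61, 53
n = p * q                       # public modulus
phi = (p - 1) * (q - 1)         # secret: requires knowing the factors
e = 17                          # public exponent, coprime to phi
d = pow(e, -1, phi)             # private exponent (Python 3.8+ modular inverse)

message = 65
ciphertext = pow(message, e, n)     # encrypt with the public key (n, e)
recovered = pow(ciphertext, d, n)   # decrypt with the private key d
assert recovered == message
```

An attacker who could factor n would recover phi and hence d; the protocol is exactly as secure as that assumption, which is the general point the paragraph makes.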
“Barring a really surprising development, these assumptions will stand,” Ishai said.
While the protocol is far from ready to be deployed in realworld applications, from a theoretical standpoint it provides an instant way to build an array of cryptographic tools that were previously out of reach. For instance, it enables the creation of “deniable” encryption, in which you can plausibly convince an attacker that you sent an entirely different message from the one you really sent, and “functional” encryption, in which you can give chosen users different levels of access to perform computations using your data.
The new result should definitively silence the iO skeptics, Ishai said. “Now there will no longer be any doubts about the existence of indistinguishability obfuscation,” he said. “It seems like a happy end.”
The Crown Jewel
For decades, computer scientists wondered if there is any secure, allencompassing way to obfuscate computer programs, allowing people to use them without figuring out their internal secrets. Program obfuscation would enable a host of useful applications: For instance, you could use an obfuscated program to delegate particular tasks within your bank or email accounts to other individuals, without worrying that someone could use the program in a way it wasn’t intended for or read off your account passwords (unless the program was designed to output them).
But so far, all attempts to build practical obfuscators have failed. “The ones that have come out in real life are ludicrously broken, … typically within hours of release into the wild,” Sahai said. At best, they offer attackers a speed bump, he said.
In 2001, bad news came on the theoretical front too: The strongest form of obfuscation is impossible. Called black box obfuscation, it demands that attackers learn absolutely nothing about the program except what they can observe by using the program and seeing what it outputs. Some programs, Barak, Sahai and five other researchers showed, reveal their secrets so determinedly that they are impossible to obfuscate fully.
These programs, however, were specially concocted to defy obfuscation and bear little resemblance to realworld programs. So computer scientists hoped there might be some other kind of obfuscation that was weak enough to be feasible but strong enough to hide the kinds of secrets people actually care about. The same researchers who showed that black box obfuscation is impossible proposed one possible alternative in their paper: indistinguishability obfuscation.
On the face of it, iO doesn’t seem like an especially useful concept. Instead of requiring that a program’s secrets be hidden, it simply requires that the program be obfuscated enough that if you have two different programs that perform the same task, you can’t distinguish which obfuscated version came from which original version.
But iO is stronger than it sounds. For example, suppose you have a program that carries out some task related to your bank account, but the program contains your unencrypted password, making you vulnerable to anyone who gets hold of the program. Then — as long as there is some program out there that could perform the same task while keeping your password hidden — an indistinguishability obfuscator will be strong enough to successfully mask the password. After all, if it didn’t, then if you put both programs through the obfuscator, you’d be able to tell which obfuscated version came from your original program.
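The password argument above can be made concrete with a hypothetical pair of programs (the password, names, and hashing choice here are all invented for illustration):

```python
import hashlib

PASSWORD = "hunter2"  # hypothetical secret, embedded in the clear
PASSWORD_HASH = hashlib.sha256(PASSWORD.encode()).hexdigest()

def check_v1(guess):
    # Leaky version: anyone reading this source learns the password.
    return guess == PASSWORD

def check_v2(guess):
    # Equivalent version: the source reveals only a one-way hash.
    return hashlib.sha256(guess.encode()).hexdigest() == PASSWORD_HASH

# The two programs compute the same function on every input...
for guess in ("hunter2", "letmein", ""):
    assert check_v1(guess) == check_v2(guess)

# ...so iO demands that their obfuscations be indistinguishable.
# Since check_v2's code already hides the password, the obfuscation
# of check_v1 must hide it just as well -- otherwise the two
# obfuscated programs could be told apart.
```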
Over the years, computer scientists have shown that you can use iO as the basis for almost every cryptographic protocol you could imagine (except for black box obfuscation). That includes both classic cryptographic tasks like public key encryption (which is used in online transactions) and dazzling newcomers like fully homomorphic encryption, in which a cloud computer can compute on encrypted data without learning anything about it. And it includes cryptographic protocols no one knew how to build, like deniable or functional encryption.
“It really is kind of the crown jewel” of cryptographic protocols, said Rafael Pass of Cornell University. “Once you achieve this, we can get essentially everything.”
In 2013, Sahai and five coauthors proposed an iO protocol that splits up a program into something like jigsaw puzzle pieces, then uses cryptographic objects called multilinear maps to garble the individual pieces. If the pieces are put together correctly, the garbling cancels out and the program functions as intended, but each individual piece looks meaningless. The result was hailed as a breakthrough and prompted dozens of follow-up papers. But within a few years, other researchers showed that the multilinear maps used in the garbling process were not secure. Other iO candidates came along and were broken in their turn.
“There was some worry that maybe this is just a mirage, maybe iO is simply impossible to get,” Barak said. People started to feel, he said, that “maybe this whole enterprise is doomed.”
Hiding Less to Hide More
In 2016, Lin started exploring whether it might be possible to get around the weaknesses of multilinear maps by simply demanding less of them. Multilinear maps are essentially just secretive ways of computing with polynomials — mathematical expressions made up of sums and products of numbers and variables, like 3xy + 2yz². These maps, Jain said, entail something akin to a polynomial calculating machine connected to a system of secret lockers containing the values of the variables. A user who drops in a polynomial that the machine accepts gets to look inside one final locker to find out whether the hidden values make the polynomial evaluate to 0.
For the scheme to be secure, the user shouldn’t be able to figure out anything about the contents of the other lockers or the numbers that were generated along the way. “We would like that to be true,” Sahai said. But in all the candidate multilinear maps people could come up with, the process of opening the final locker revealed information about the calculation that was supposed to stay hidden.
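As a very rough picture of the locker metaphor, consider the toy sketch below. The class and variable names are ours, purely for illustration, and nothing here stands in for the actual cryptography of a multilinear map; it only shows the interface being described: the machine holds hidden values and answers a single bit, whether a submitted polynomial evaluates to 0 on them.

```python
# Toy illustration of the "locker machine" interface (not a real
# multilinear map): hidden values stay inside; the user learns only
# whether a submitted polynomial vanishes on them.

class LockerMachine:
    def __init__(self, hidden_values):
        # e.g. {"x": 2, "y": 3, "z": 1}; the user never sees these
        self._hidden = dict(hidden_values)

    def is_zero(self, polynomial):
        # `polynomial` is a function of the hidden variables; the
        # machine reveals only the single bit "zero or not zero".
        return polynomial(**self._hidden) == 0

machine = LockerMachine({"x": 2, "y": 3, "z": 1})

# 3xy + 2yz² at the hidden values is 18 + 6 = 24, so not zero
print(machine.is_zero(lambda x, y, z: 3*x*y + 2*y*z**2))  # False

# x - 2z at the hidden values is 2 - 2 = 0
print(machine.is_zero(lambda x, y, z: x - 2*z))  # True
```

The security problem the article describes is that real candidate machines leaked more than this one bit.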
Since the proposed multilinear map machines all had security weaknesses, Lin wondered if there was a way to build iO using machines that don’t have to compute as many different kinds of polynomials (and therefore might be easier to build securely). Four years ago, she figured out how to build iO using only multilinear maps that compute polynomials whose “degree” is 30 or less (meaning that every term is a product of at most 30 variables, counting repeats). Over the next couple of years, she, Sahai and other researchers gradually figured out how to bring the degree down even lower, until they were able to show how to build iO using just degree-3 multilinear maps.
On paper, it looked like a vast improvement. There was just one problem: From a security standpoint, “degree 3 was actually as broken” as the machines that can handle polynomials of every degree, Jain said.
The only multilinear maps researchers knew how to build securely were those that computed polynomials of degree 2 or less. Lin joined forces with Jain and Sahai to try to figure out how to construct iO from degree-2 multilinear maps. But “we were stuck for a very, very long time,” Lin said.
“It was kind of a gloomy time,” Sahai recalled. “There’s a graveyard filled with all the ideas that didn’t work.”
Eventually, though — together with Prabhanjan Ananth of the University of California, Santa Barbara and Christian Matt of the blockchain project Concordium — they came up with an idea for a sort of compromise: Since iO seemed to need degree-3 maps, but computer scientists only had secure constructions for degree-2 maps, what if there was something in between — a sort of degree-2.5 map?
The researchers envisioned a system in which some of the lockers have clear windows, so the user can see the values contained within. This frees the machine from having to protect too much hidden information. To strike a balance between the power of higher-degree multilinear maps and the security of degree-2 maps, the machine is allowed to compute with polynomials of degree higher than 2, but there’s a restriction: The polynomial must be degree 2 on the hidden variables. “We’re trying to not hide as much” as in general multilinear maps, Lin said. The researchers were able to show that these hybrid locker systems can be constructed securely.
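The degree restriction can be made concrete with a small sketch. The representation below (a term as a map from variable name to exponent) is our own choice for illustration, not the paper's construction: a polynomial of any total degree is allowed, so long as the exponents of the hidden variables in each term sum to at most 2.

```python
# Sketch of the "degree 2 on the hidden variables" rule. A term is a
# dict mapping variable name -> exponent; a polynomial is a list of terms.

def degree_on(term, variables):
    # sum of exponents of the given variables within one term
    return sum(exp for var, exp in term.items() if var in variables)

def allowed(polynomial, hidden_vars, max_hidden_degree=2):
    # every term must have degree <= 2 counting only hidden variables
    return all(degree_on(t, hidden_vars) <= max_hidden_degree
               for t in polynomial)

hidden = {"x", "y"}

# x²yw³: total degree 6, but degree 3 on the hidden variables -> rejected
p1 = [{"x": 2, "y": 1, "w": 3}]
# xyw⁵: total degree 7, yet only degree 2 on the hidden variables -> allowed
p2 = [{"x": 1, "y": 1, "w": 5}]

print(allowed(p1, hidden))  # False
print(allowed(p2, hidden))  # True
```

The point of the hybrid is visible in `p2`: arbitrarily high degree is fine on the "clear window" variables, since those need no protection.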
But to get from these less powerful multilinear maps to iO, the team needed one last ingredient: a new kind of pseudorandomness generator, something that expands a string of random bits into a longer string that still looks random enough to fool computers. That’s what Jain, Lin and Sahai have figured out how to do in their new paper. “There was a wonderful last month or so where everything came together in a flurry of phone calls,” Sahai said.
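As a toy picture of what such a generator does, the sketch below stretches a short seed into a longer bit string. This is only an illustration of the input/output behavior; the pseudorandomness generators in this line of work are specific constructions with security arguments, and hashing a counter, as here, is not one of them.

```python
# Toy "stretch a short seed into many bits" sketch, NOT a cryptographic
# construction: it only illustrates the shape of a pseudorandomness
# generator's job (short random seed in, longer random-looking string out).
import hashlib

def toy_prg(seed: bytes, out_bits: int) -> str:
    bits = ""
    counter = 0
    while len(bits) < out_bits:
        # hash the seed together with a counter to produce more output
        digest = hashlib.sha256(seed + counter.to_bytes(4, "big")).digest()
        bits += "".join(format(b, "08b") for b in digest)
        counter += 1
    return bits[:out_bits]

expanded = toy_prg(b"\x01\x02", 100)  # 16 bits of seed in, 100 bits out
print(len(expanded))  # 100
```

The same seed always yields the same output, so the expansion is deterministic; the security requirement (which this toy does not meet) is that no efficient test can tell the output from truly random bits.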
The result is an iO protocol that finally avoids the security weaknesses of multilinear maps. “Their work looks absolutely beautiful,” Pass said.
The scheme’s security rests on four mathematical assumptions that have been widely used in other cryptographic contexts. And even the assumption that has been studied the least, called the “learning parity with noise” assumption, is related to a problem that has been studied since the 1950s.
There is likely only one thing that could break the new scheme: a quantum computer, if a full-power one is ever built. One of the four assumptions is vulnerable to quantum attacks, but over the past few months a separate line of work has emerged in three separate papers by Pass and other researchers offering a different potential route to iO that might be secure even against quantum attacks. These versions of iO rest on less established security assumptions than the ones Jain, Lin and Sahai used, several researchers said. But it is possible, Barak said, that the two approaches could be combined in the coming years to create a version of iO that rests on standard security assumptions and also resists quantum attacks.
Jain, Lin and Sahai’s construction will likely entice new researchers into the field to work on making the scheme more practical and to develop new approaches, Ishai predicted. “Once you know that something is possible in principle, it makes it psychologically much easier to work in the area,” he said.
Computer scientists still have much work to do before the protocol (or some variation on it) can be used in real-world applications. But that is par for the course, researchers said. “There’s a lot of notions in cryptography that, when they first came out, people were saying, ‘This is just pure theory, [it] has no relevance to practice,’” Pass said. “Then 10 or 20 years later, Google is implementing these things.”
The road from a theoretical breakthrough to a practical protocol can be a long one, Barak said. “But you could imagine,” he said, “that maybe 50 years from now the crypto textbooks will basically say, ‘OK, here is a very simple construction of iO, and from that we’ll now derive all of the rest of crypto.’”
Correction: November 10, 2020
A previous mobile version of the “Degrees of Obfuscation” graphic was incorrectly labeled. Degree-2.5 maps can be built securely and can be used to make iO, as correctly labeled on the desktop version.
Quantum
Inside the Secret Math Society Known Simply as Nicolas Bourbaki
Published
3 weeks ago on November 9, 2020
Antoine Chambert-Loir’s initiation into one of math’s oldest secret societies began with a phone call. “They told me Bourbaki would like me to come and see if I’d work with them,” he said.
Chambert-Loir accepted, and for a week in September 2001 he spent seven hours a day reading math texts out loud and discussing them with the members of the group, whose identities are unknown to the rest of the world.
He was never officially asked to join, but on the last day he was given a long-term task — to finish a manuscript the group had been working on since 1975. When Chambert-Loir later received a report on the meeting he saw that he was listed as a “membrifié,” indicating he was part of the group. Ever since, he’s helped advance an almost Sisyphean tradition of math writing that predates World War II.
The group is known as “Nicolas Bourbaki” and is usually referred to as just Bourbaki. The name is a collective pseudonym borrowed from a real-life 19th-century French general who never had anything to do with mathematics. It’s unclear why they chose the name, though it may have originated in a prank played by the founding mathematicians as undergraduates at the École Normale Supérieure (ENS) in Paris.
“There was some custom to play pranks on first-years, and one of those pranks was to pretend that some General Bourbaki would arrive and visit the school and maybe give a totally obscure talk about mathematics,” said Chambert-Loir, a mathematician at the University of Paris who has acted as a spokesperson for the group and is its one publicly identified member.
Bourbaki began in 1934, the initiative of a small number of recent ENS alumni. Many of them were among the best mathematicians of their generation. But as they surveyed their field, they saw a problem. The exact nature of that problem is also the subject of myth.
In one telling, Bourbaki was a response to the loss of a generation of mathematicians to World War I, after which the group’s founders wanted to find a way to preserve what math knowledge remained in Europe.
“There is a story that young French mathematicians were not seen as a government priority during [the] First World War and many were sent to war and died there,” said Sébastien Gouëzel of the University of Rennes, who is not publicly identified with the group but, like many mathematicians, is familiar with its activities.
In a more prosaic but probably also more likely rendering, the original Bourbaki members were simply dissatisfied with the field’s textbooks and wanted to create better ones. “I think at the beginning it was just for that very concrete matter,” Chambert-Loir said.
Whatever their motivation, the founders of Bourbaki began to write. Yet instead of writing textbooks, they ended up creating something completely novel: freestanding books that explained advanced mathematics without reference to any outside sources.
The first Bourbaki text was meant to be about differential geometry, which reflected the tastes of some of the group’s early members, luminaries like Henri Cartan and André Weil. But the project quickly expanded, since it’s hard to explain one mathematical idea without involving many others.
“They realized that if they wanted to do this cleanly, they needed [ideas from other areas], and Bourbaki grew and grew into something huge,” Gouëzel said.
The most distinctive feature of Bourbaki was the writing style: rigorous, formal and stripped to the logical studs. The books spelled out mathematical theorems from the ground up without skipping any steps — exhibiting an unusual degree of thoroughness among mathematicians.
“In Bourbaki, essentially, there are no gaps,” Gouëzel said. “They are super precise.”
But that precision comes at a cost: Bourbaki books can be hard to read. They don’t offer a contextualizing narrative that explains where concepts come from, instead letting the ideas speak for themselves.
“Essentially, you give no comment about what you do or why you do it,” Chambert-Loir said. “You state stuff and prove it, and that’s it.”
Bourbaki joined its distinctive writing style to a distinctive writing process. Once a member produces a draft, the group gathers in person, reads it aloud and suggests notes for revision. They repeat these steps until there is unanimous agreement that the text is ready for publication. It’s a long process that can take a decade or more to complete.
This focus on collaboration is also where the group’s insistence on anonymity comes from. They keep membership secret to reinforce the notion that the books are a pure expression of mathematics as it is, not an individual’s take on the topic. It’s also an ethic that can seem out of step with aspects of modern math culture.
“It’s sort of hard to imagine a group of young academics right now, people without permanent lifelong positions, devoting a huge amount of time to something they’ll never get credit for,” said Lillian Pierce of Duke University. “This group took this on in a sort of selfless way.”
Bourbaki quickly had an impact on mathematics. Some of the first books, published in the 1940s and ’50s, invented vocabulary that is now standard — terms like “injective,” “surjective” and “bijective,” which are used to describe properties of a map between two sets.
This was the first of two main periods in which Bourbaki was especially influential. The second came in the 1970s when the group published a series of books on Lie groups and Lie algebras that is “unanimously considered a masterpiece,” Chambert-Loir said.
Today, the influence of the group’s books has waned. It’s best known instead for the Bourbaki Seminars, a series of high-profile lectures on the most important recent results in math, held in Paris. When Bourbaki invited Pierce to give one in June 2017, she recognized that the talk would take a lot of time to prepare, but she also knew that due to the seminar’s status in the field, “it’s an invitation you have to accept.”
Even while organizing (and attending) the public lectures, members of Bourbaki don’t disclose their identities. Pierce recalls that during her time in Paris she went out to lunch “with a number of people who it seemed fair to assume were part of Bourbaki, but in the spirit of things I didn’t try to hear their last names.”
According to Pierce, the anonymity is maintained only in a “spirit of fun” these days. “There is no rigor to the secrecy,” she said.
Though its seminars are now more influential than its books, Bourbaki — which has about 10 members currently — is still producing texts according to its founding principles. And Chambert-Loir, 49, is nearing the end of his time with the group, since custom has it that members step down when they turn 50.
Even as he prepares to leave, the project he was handed at the end of his first week remains unfinished. “For 15 years I patiently typed it into LaTeX, made corrections, then we read everything aloud year after year,” he said.
It could easily be half a century from the time the work began to when it’s completed. That’s a long time by modern publishing standards, where papers land online even in draft form. But then again, maybe it isn’t so long when the product is meant to stand forever.
Correction: November 9, 2020
Lillian Pierce gave her Bourbaki Seminar talk in June 2017, not July 2017. The article has been revised accordingly.