Interaction + Entanglement = Efficient Proofs of Halting
A couple of weeks ago my coauthors Zhengfeng Ji (UTS Sydney), Henry Yuen (University of Toronto), Anand Natarajan and John Wright (both at Caltech’s IQIM, with John soon moving to UT Austin) and I posted a manuscript on the arXiv preprint server entitled
MIP*=RE
The magic of the compact formula quickly took effect, and our posting received some attention on the blogosphere (see links below). Within computer science, complexity theory is at an advantage in its ability to capture powerful statements in few letters: who has not heard of P, NP, and, for readers of this blog, BQP and QMA? (In contrast, I am under no illusion that my vague attempt at a more descriptive title has, by the time you reach this line, all but vanished from the reader’s memory.)
Even accounting for this popularity, however, it is a safe bet that few of our readers have heard of MIP* or RE. Yet we are promised that the above-stated equality has great consequences for physics (“Tsirelson’s problem” in the study of nonlocality) and mathematics (“Connes’ embedding problem” in the theory of von Neumann algebras). How so: how can complexity-theoretic alphabet soup have any consequence for, on the one hand, physical reality, and, on the other, abstract mathematics?
The goal of this post and the next is to help the interested reader grasp the significance of interactive proofs (which lie behind the symbols MIP*) and undecidability (which lies behind RE) for quantum mechanics.
The bulk of the present post is an almost identical copy of a post I wrote for my personal blog. To avoid accusations of self-plagiarism, I will substantiate it with a little picture and a story; see below. The post gives a very personal take on the research that led to the aforementioned result. In the next post, my coauthor Henry Yuen has offered to give a more scientific introduction to the result and its significance.
Before proceeding, it is important to make clear that the research described in this post and the next has not yet been refereed or thoroughly vetted by the community. This process will take place over the coming months, and we should wait until it is complete before placing too much weight on the results. As an author, I am proud of my work; yet I am aware that due process must be followed before the claims can be made official. As such, these posts represent only my opinion (and Henry’s) and not necessarily that of the wider scientific community.
For more popular introductions to our result, see the blog posts of Scott Aaronson, Dick Lipton, and Gil Kalai and reporting by Davide Castelvecchi for Nature and Emily Conover for Science.
Now for the personal post… and the promised picture. Isn’t it beautiful? The design is courtesy of Tony Metger and Alexandru Gheorghiu, the first a visiting student and the second a postdoctoral scholar at Caltech’s IQIM. While Tony and Andru came up with the idea, the execution is courtesy of the bakery employee, who graciously implemented the custom design (apparently writing equations on top of cakes is not common enough to be part of the standard offerings, so they had to go for the custom option). Although it is unclear whether the executioner grasped the full depth of the signs they were copying, note how perfect the execution is: not a single letter out of place! Thanks to Tony, Andru, and the anonymous chef for the tasty souvenir.
Now for the story. In an earlier post on my personal research blog, I had reported on the beautiful recent result by Natarajan and Wright showing the astounding power of multiprover interactive proofs with quantum provers sharing entanglement: in letters, NEEXP ⊆ MIP*. In the remainder of this post I will describe our follow-up work with Ji, Natarajan, Wright, and Yuen. I will tell the story from a personal point of view, with all the caveats that this implies: the “hard science” will be limited (though there could be a hint as to how “science”, to use a big word, “progresses”, to use an ill-defined one; see also the upcoming post by Henry Yuen for more), the story is far too long, and it might be of interest mostly to me only. It’s a one-sided story, but that has to be. (In particular, below I may at times attribute credit in the form “X had this idea”. This is my recollection only, and it is likely to be inaccurate. Certainly I am ignoring a lot of important threads.) I wrote this because I enjoyed recollecting some of the best moments in the story just as much as some of the hardest; it is fun to look back and find meaning in ideas that initially appeared disconnected. Think of it as an example of how different lines of work can come together in unexpected ways; a case for open-ended research. It’s also an antidote against despair that I am preparing for myself: whenever I feel I’ve been stuck on a project for far too long, I’ll come back to this post and ask myself if it’s been 14 years yet; if not, then press on.
It likely comes as a surprise to me only that I am no longer fresh out of the cradle. My academic life started in earnest some 14 years ago when, in the spring of 2006, I completed my Masters thesis in computer science under the supervision of Julia Kempe, at Orsay in France. I had met Julia the previous term: her class on quantum computing was, by far, the best-taught and most exciting course in the Masters program I was attending, and she had gotten me instantly hooked. Julia agreed to supervise my thesis and suggested that I look into an interesting recent result by Stephanie Wehner that linked the study of entanglement and nonlocality in quantum mechanics to complexity-theoretic questions about interactive proof systems (specifically, this was Stephanie’s paper showing that XOR-MIP* is contained in QIP(2)).
At the time the topic was very new. It had been initiated the previous year with a beautiful paper by Cleve et al. (that I have recommended to many a student since!). It was a perfect fit for me: the mathematical aspects of complexity theory and quantum computing connected to my undergraduate background, while the relative concreteness of quantum mechanics (it is a physical theory, after all) spoke to my desire for real-world connection (not “impact” or even “application”, just “connection”). Once I got myself up to speed in the area (which consisted of three papers: the two I already mentioned, together with a paper by Kobayashi and Matsumoto where they studied interactive proofs with quantum messages), Julia suggested looking into the “entangled-prover” class MIP* introduced in the aforementioned paper by Cleve et al. Nothing was known about this class! Nothing besides the trivial inclusion of single-prover interactive proofs, IP, and the containment in… ALL, the trivial class that contains all languages.
Yet the characterization MIP = NEXP of its classical counterpart by Babai et al. in the 1990s had led to one of the most productive lines of work in complexity theory of the past few decades, through the PCP theorem and its uses from hardness of approximation to efficient cryptographic schemes. Surely, studying MIP* had to be a productive direction? In spite of its well-established connection to classical complexity theory via the formalism of interactive proofs, this was a real gamble. The study of entanglement from the complexity-theoretic perspective was entirely new and bound to be fraught with difficulty; very few results were available, and the existing lines of work, from the foundations of nonlocality to more recent endeavors in device-independent cryptography, provided little other starting point than strong evidence that even the simplest examples came with many unanswered questions. But my mentor was fearless, and far from a novice at breaking ground in new areas, having done pioneering work on topics ranging from quantum random walks to Hamiltonian complexity through adiabatic computation. Surely this would lead to something?
It certainly did. More sleepless nights than papers, clearly, but then the opposite would only indicate dullness. Julia’s question led to far more unexpected consequences than I, or I believe she, could have imagined at the time. I am writing this post to celebrate, in a personal way, the latest step in 15 years of research by dozens of researchers: today my coauthors and I uploaded to the quant-ph arXiv what we consider a complete characterization of the power of entangled-prover interactive proof systems, by proving the equality MIP* = RE, where RE is the class of all recursively enumerable languages (a complete problem for RE is the halting problem). Without going too much into the result itself (if you’re interested, look for an upcoming post here that goes into the proof a bit more), and since this is a more personal post, I will continue with some personal thoughts about the path that got us there.
When Julia and I started working on the question, our main source of inspiration was the results by Cleve et al. showing that the nonlocal correlations of entanglement had interesting consequences when seen through the lens of interactive proof systems in complexity theory. Since the EPR paper, a lot of work on understanding entanglement had already been accomplished in the physics community, most notably by Mermin, Peres, and Bell, and more recently in the works on device-independent quantum cryptography by Acin, Pironio, Scarani, and many others, stimulated by Ekert’s proposal for quantum key distribution and Mayers and Yao’s idea of “device-independent cryptography”. By then we certainly knew that “spooky action-at-a-distance” did not entail any faster-than-light communication, and indeed was not really “action-at-a-distance” in the first place but merely “correlation-at-a-distance”. What Cleve et al. recognized is that these “spooky correlations-at-a-distance” were sufficiently special so as to not only give numerically different values in “Bell inequalities”, the tool invented by Bell to evidence nonlocality in quantum mechanics, but also to have potentially profound consequences in complexity theory.
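To make the gap between classical and entangled correlations concrete, consider the CHSH game, the simplest Bell inequality phrased as a game: a referee sends uniformly random bits a, b to two isolated players, who answer with bits x, y and win when x XOR y = a AND b. The sketch below (mine, not from the papers discussed here; the measurement angles are the textbook optimal choice) brute-forces the best classical strategy, which wins with probability 3/4, and evaluates the standard entangled strategy on an EPR pair, which wins with probability cos²(π/8) ≈ 0.85.

```python
import numpy as np
from itertools import product

# EPR pair (|00> + |11>)/sqrt(2), written as a real vector.
phi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)

def proj(theta, outcome):
    # Rank-1 projector for a measurement in the real basis rotated by theta.
    v = (np.array([np.cos(theta), np.sin(theta)]) if outcome == 0
         else np.array([-np.sin(theta), np.cos(theta)]))
    return np.outer(v, v)

# Standard optimal measurement angles for CHSH.
alice = {0: 0.0, 1: np.pi / 4}
bob = {0: np.pi / 8, 1: -np.pi / 8}

# Entangled winning probability, averaged over the four question pairs.
quantum_win = 0.0
for a, b in product([0, 1], repeat=2):
    for x, y in product([0, 1], repeat=2):
        if x ^ y == (a & b):
            M = np.kron(proj(alice[a], x), proj(bob[b], y))
            quantum_win += 0.25 * (phi @ M @ phi)

# Best classical deterministic strategy, by brute force over all 16 pairs
# of answer functions (shared randomness cannot do better than the best
# deterministic strategy).
classical_win = max(
    sum((fa[a] ^ fb[b]) == (a & b) for a in (0, 1) for b in (0, 1)) / 4
    for fa in product([0, 1], repeat=2)
    for fb in product([0, 1], repeat=2)
)

print(classical_win)          # 0.75
print(round(quantum_win, 4))  # 0.8536, i.e. cos^2(pi/8)
```

The numerical gap between 0.75 and cos²(π/8) is exactly the kind of “different value in a Bell inequality” referred to above.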
In particular, examples such as the “Magic Square game” demonstrated that enough correlation could be gained from entanglement so as to defeat basic proof systems whose soundness relied only on the absence of communication between the provers, an assumption that until then had been wrongly equated with the assumption that any computation performed by the provers could be modeled entirely locally. I think that the fallacy of this implicit assumption came as a surprise to complexity theorists, who may still not have entirely internalized it. Yet the perfect quantum strategy for the Magic Square game provides a very concrete “counterexample” to the soundness of the “clause-vs-variable” game for 3SAT. Indeed this game, a reformulation by Aravind and Cleve-Mermin of a Bell inequality discovered by Mermin and Peres in 1990, can easily be reframed as a 3SAT system of equations that is not satisfiable, and yet is such that the associated two-player clause-vs-variable game has a perfect quantum strategy. It is this observation, made in the paper by Cleve et al., that gave the first strong hint that the use of entanglement in interactive proof systems could make many classical results in the area go awry.
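The unsatisfiability underlying the Magic Square game can be checked directly. In one standard formulation, an assignment fills a 3×3 grid with ±1 entries so that every row has product +1 while the three columns have products +1, +1, −1; the product of all nine entries would then have to be both +1 (multiplying the row constraints) and −1 (multiplying the column constraints), so no classical assignment exists, even though entangled players can win the associated game perfectly. A brute-force check (a sketch of mine, not code from the original paper):

```python
from itertools import product

# Classical constraints of the Mermin-Peres magic square (one common
# formulation): every row multiplies to +1, the first two columns
# multiply to +1, and the last column multiplies to -1.
def satisfies(g):
    rows_ok = all(g[r][0] * g[r][1] * g[r][2] == +1 for r in range(3))
    cols_ok = (g[0][0] * g[1][0] * g[2][0] == +1 and
               g[0][1] * g[1][1] * g[2][1] == +1 and
               g[0][2] * g[1][2] * g[2][2] == -1)
    return rows_ok and cols_ok

# Exhaust all 2^9 = 512 assignments of +1/-1 to the nine cells.
solutions = 0
for cells in product([+1, -1], repeat=9):
    grid = [cells[0:3], cells[3:6], cells[6:9]]
    if satisfies(grid):
        solutions += 1

print(solutions)  # 0: no classical assignment satisfies all six constraints
```

The parity argument above explains the output: the six constraints are jointly contradictory, which is what makes the perfect quantum strategy so striking.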
By importing the study of nonlocality into complexity theory, Cleve et al. immediately brought it into the realm of asymptotic analysis. Complexity theorists don’t study fixed objects; they study families of objects that tend to have a uniform underlying structure and whose interesting properties manifest themselves “in the limit”. As a result of this new perspective, the focus shifted from the study of single games or correlations to infinite families thereof. Some of the early successes of this translation include the “unbounded violations” that arose from translating asymptotic separations in communication complexity into the language of Bell inequalities and correlations (e.g. this paper). These early successes attracted the attention of physicists working in foundations as well as mathematical physicists, leading to a productive exploration that combined tools from quantum information, functional analysis, and complexity theory.
The initial observations made by Cleve et al. had pointed to MIP* as a possibly interesting complexity class to study. Rather amazingly, nothing was known about it! They had shown that under a strong restriction on the verifier’s predicate (it should be an XOR of two answer bits), a collapse took place: by the work of Håstad, XOR-MIP equals NEXP, but XOR-MIP* is included in EXP. This seemed very fortuitous (the inclusion is proved via a connection with semidefinite programming that seems tied to the structure of XOR-MIP* protocols): could entanglement induce a collapse of the entire, unrestricted class? We thought (at this point mostly Julia thought, because I had no clue) that this ought not to be the case, and so we set ourselves to show that the equality MIP* = NEXP, which would directly parallel Babai et al.’s characterization MIP = NEXP, holds. We tried to show this by introducing techniques to “immunize” games against entanglement: modify an interactive proof system so that its structure makes it “resistant” to the kind of “nonlocal powers” that can be used to defeat the clause-vs-variable game (witness the Magic Square). This was partially successful, and it led to one of the papers I am most proud of: proud because I think it introduced elementary techniques (such as the use of the Cauchy-Schwarz inequality, an inside joke; more seriously, basic things such as “prover-switching”, “commutation tests”, etc.) that are now routine manipulations in the area. The paper was a hard sell! It’s good to remember the first rejections we received. They were not unjustified: the main point of criticism was that we were only able to establish a hardness result for an exponentially small completeness-soundness gap. A result for such a small gap in the classical setting follows directly from a very elementary analysis based on the Cook-Levin theorem. So then why did we have to write so many pages (and so many applications of Cauchy-Schwarz!) to arrive at basically the same result (with a star)?
Eventually we got lucky and the paper was accepted to a conference. But the real problem, of establishing any nontrivial lower bound on the class MIP* with constant (or, in the absence of any parallel repetition theorem, inverse-polynomial) completeness-soundness gap, remained. By that time I had transitioned from a Masters student in France to a graduate student in Berkeley, and the problem (pre)occupied me during some of the most difficult years of my Ph.D. I distinctly remember spending my first year thinking about it almost entirely (oh, and sure, that systems class I had to pass to satisfy the Berkeley requirements), and then my second year, too; yet I was getting nowhere. (I checked the arXiv to make sure I’m not making this up: two full years, no posts.) I am forever grateful to my fellow student Anindya De for having taken me out of the cycle of torture by knocking on my door with one of the most interesting questions I have studied, one that led me into quantum cryptography and quickly resulted in an enjoyable paper. It was good to feel productive again! (Though the paper had fun reactions as well: after putting it on the arXiv we quickly heard from experts in the area that we had solved an irrelevant problem, and that we had better learn about information theory; which we did, eventually leading to another paper, etc.) The project had distracted me, and I set interactive proofs aside; clearly, I was stuck.
About a year later I visited IQC in Waterloo. I don’t remember in what context the visit took place. What I do remember is a meeting in the office of Tsuyoshi Ito, at the time a postdoctoral scholar at IQC. Tsuyoshi asked me to explain our result with Julia. He then asked a very pointed question: the bedrock for the classical analysis of interactive proof systems is the “linearity test” of Blum-Luby-Rubinfeld (BLR). Is there any sense in which we could devise a quantum version of that test?
What a question! This was great. At first it seemed fruitless: in what sense could one argue that quantum provers apply a “linear function”? Sure, quantum mechanics is linear, but that is beside the point. The linearity is a property of the prover’s answers as a function of their questions. So what to make of the quantum state, the inherent randomness, etc.?
It took us a few months to figure it out. Once we got there, however, the answer was relatively simple: the prover should be making a question-independent measurement that returns a linear function, which it then applies to its question in order to obtain the answer returned to the verifier. This opened the path to our subsequent paper showing that the inclusion of NEXP in MIP* indeed holds. Tsuyoshi’s question about linearity testing had allowed us to make the connection with PCP techniques; from there to MIP = NEXP there was only one step to take, namely to analyze multilinearity testing. That step was suggested by my Ph.D. advisor, Umesh Vazirani, who was well aware of the many pathways towards the classical PCP theorem, since the theorem had been obtained in great part by his former student Sanjeev Arora. It took a lot of technical work, yet conceptually a single question from my coauthor had sufficed to take me out of a three-year slumber.
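For readers who have not seen it, the classical BLR test is remarkably simple: to test whether f is (close to) linear over F₂ⁿ, pick x and y at random and check that f(x) ⊕ f(y) = f(x ⊕ y). Linear functions, i.e. parities of subsets of input bits, always pass, while functions far from linear fail a noticeable fraction of the time. A minimal sketch (mine; evaluated exhaustively over all pairs rather than by sampling, so the numbers are deterministic, and the example functions are chosen for illustration):

```python
# Classical BLR linearity check, here evaluated exhaustively over all
# pairs (x, y) in {0,1}^n (the actual test samples the pairs at random).
def blr_pass_rate(f, n):
    N = 1 << n
    passes = sum(f(x) ^ f(y) == f(x ^ y) for x in range(N) for y in range(N))
    return passes / (N * N)

n = 8

# A genuinely linear function over F_2^n: the parity of the input bits
# selected by a fixed mask. Such parities are exactly the linear functions.
mask = 0b10110001
def linear(x):
    return bin(x & mask).count("1") % 2

# A function that is far from any parity.
def nonlinear(x):
    return 1 if x % 3 == 0 else 0

print(blr_pass_rate(linear, n))     # 1.0: linear functions always pass
print(blr_pass_rate(nonlinear, n))  # strictly below 1
```

The content of the BLR theorem is the converse direction: a function that passes the check with probability close to 1 must be close, in Hamming distance, to some parity. It is this rigidity that the “quantum version” of the test described above had to reproduce.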
This was in 2012, and I thought we were done. For some reason the converse inclusion, of MIP* in NEXP, seemed to resist our efforts, but surely it couldn’t resist much longer. Navascues et al. had introduced a hierarchy of semidefinite programs that seemed to give the right answer (technically they could only show convergence to a relaxation, the commuting value, but that seemed like a technicality; in particular, the values coincide when restricted to finite-dimensional strategies, which is all we computer scientists cared about). There were no convergence bounds on the hierarchy, yet at the same time commutative SDP hierarchies were being used to obtain very strong results in combinatorial optimization, and it seemed like it would only be a matter of time before someone came up with an analysis of the quantum case. (I had been trying to solve a related “dimension reduction problem” with Oded Regev for years, and we were making no progress; yet it seemed someone ought to!)
In the spring of 2014, during an open questions session at a workshop at the Simons Institute in Berkeley, Dorit Aharonov suggested that I ask about the possible inclusion in MIP* of QMA_EXP, the exponential-size-proofs analogue of QMA. A stronger result than the inclusion of NEXP (under assumptions), wouldn’t it be a more natural “fully quantum” analogue of MIP = NEXP? Dorit’s suggestion was motivated by research on the “quantum PCP theorem”, which aims to establish similar hardness results in the realm of the local Hamiltonian problem; see e.g. this post for the connection. I had no idea how to approach the question (I also didn’t really believe the answer could be positive), but what can you do if Dorit asks you something… So I reluctantly went to the board and asked the question. Joe Fitzsimons was in the audience, and he immediately picked it up! Joe had the fantastic idea of using quantum error-correction, or more specifically secret-sharing, to distribute a quantum proof among the provers. His enthusiasm overcame my skepticism, and we eventually showed the desired inclusion. Maybe MIP* was bigger than NEXP after all.
Our result, however, had a similar deficiency to the one with Julia, in that the completeness-soundness gap was exponentially small. Obtaining a result with a constant gap took a couple more years of work and the fantastic energy and insights of a Ph.D. student at MIT, Anand Natarajan. Anand is the first person I know of to have had the courage to dive into the most technical aspects of the analysis of the aforementioned results, while also bringing in the insights of a “true quantum information theorist”, supported by his background in physics and upbringing in the group of Aram Harrow at MIT. (In contrast, I think of myself more as a “raw” mathematician; I don’t really understand quantum states other than as positive-semidefinite matrices… not that I understand math either, of course; I suppose I’m some kind of half-baked mishmash.) Anand had many ideas, but one of the most beautiful led to what he poetically called the “Pauli braiding test”, a “truly quantum” analogue of the BLR linearity test that amounts to doing two linearity tests in conjugate bases and piecing the results together into a robust test for n-qubit entanglement (I wrote about our work on this here).
At approximately the same time, Zhengfeng Ji had another wonderful idea that was in some sense orthogonal to our work. (My interpretation of) Zhengfeng’s idea is that one can see an interactive proof system as a computation (verifier-prover-verifier) and use Kitaev’s circuit-to-Hamiltonian construction to transform the entire computation into a “quantum CSP” (in the same sense that the local Hamiltonian problem is a quantum analogue of classical constraint satisfaction problems (CSPs)) that could then itself be verified by a quantum multiprover interactive proof system… with exponential gains in efficiency! Zhengfeng’s result implied an exponential improvement in complexity compared to the result by Julia and myself, showing the inclusion in MIP* of NEEXP instead of NEXP. However, Zhengfeng’s technique suffered from the same exponentially small completeness-soundness gap as ours, so the best lower bound on MIP* per se remained NEXP.
Both works led to follow-ups. With Natarajan we promoted the Pauli braiding test into a “quantum low-degree test” that allowed us to show the inclusion of QMA_EXP in MIP*, with constant gap, thereby finally answering the question posed by Aharonov four years after it was asked. (I should also say that by then all results on MIP* had started to rely on a sequence of parallel repetition results shown by Bavarian, Yuen, and others; I am skipping this part.) In parallel, with Ji, Fitzsimons, and Yuen we showed that Ji’s compression technique could be “iterated” an arbitrary number of times. In fact, by going back to “first principles” and representing verifiers uniformly as Turing machines, we realized that the compression technique could be used iteratively to (up to small caveats) give a new proof of the fact (first shown by Slofstra using an embedding theorem for finitely presented groups) that the zero-gap version of MIP* contains the halting problem. In particular, the entangled value is uncomputable! This was not the first time that uncomputability crops up in a natural problem in quantum computing (e.g. the spectral gap paper), yet it still surprises when it shows up. Uncomputable! How can anything be uncomputable!
As we were wrapping up our paper, Henry Yuen realized that our “iterated compression of interactive proof systems” was likely optimal, in the following sense. Even a mild improvement of the technique, in the form of a slower closing of the completeness-soundness gap through compression, would yield a much stronger result: undecidability of the constant-gap class MIP*. It was already known, by work of Navascues et al., Fritz, and others, that such a result would have consequences that, if not surprising, certainly seemed like they would take us out of our depth. In particular, undecidability of any language in MIP* would imply a negative resolution to a series of equivalent conjectures in functional analysis, from Tsirelson’s problem to Connes’ embedding conjecture through Kirchberg’s QWEP conjecture. While we liked our result, I don’t think we believed it could resolve any conjecture(s) in functional analysis.
So we moved on. At least I moved on; I did some cryptography for a change. But Anand Natarajan and his coauthor John Wright did not stop there. They had the last major insight in this story, which underlies their recent STOC best paper described in the previous post. Briefly, they were able to combine the two lines of work, by Natarajan and myself on low-degree testing and by Ji et al. on compression, to obtain a compression specially tailored to the existing protocol for NEXP, one that compresses the protocol without reducing its completeness-soundness gap. This let them recover Ji’s result that MIP* contains NEEXP, but this time with constant gap! The result received well-deserved attention. In particular, it is the first in this line of works not to suffer from any caveats (such as a closing gap, or randomized reductions, or some kind of “unfair” tweak on the model to which one could attribute the gain in power), and it implies an unconditional separation between MIP and MIP*.
As they were putting the last touches on their result, suddenly something happened: a path towards a much bigger result opened up. What Natarajan and Wright had achieved is a one-step gapless compression. In our iterated compression paper we had observed that iterated gapless compression would lead to MIP* = RE, implying negative answers to the aforementioned conjectures. So then?
I suppose it took some more work, but in some sense all the ideas had been laid out in the previous 15 years of work on the complexity of quantum interactive proof systems; we just had to put them together. And so, a decade after the characterization QIP = PSPACE of single-prover quantum interactive proof systems, we have arrived at a characterization of quantum multiprover interactive proof systems: MIP* = RE. With one author in common between the two papers: congratulations Zhengfeng!
Even though we have just posted a paper, in a sense there is much more left to do. I am hopeful that our complexity-theoretic result will attract enough interest from the mathematics community, and especially operator algebraists, for whom the Connes embedding problem (CEP) is a central question, that some of them will be willing to devote time to understanding the result. I also recognize that much effort is needed on our own side to make it accessible in the first place! I don’t doubt that eventually complexity theory will no longer be needed to obtain the purely mathematical consequences; yet I am hopeful that some of the ideas may eventually find their way into the construction of interesting mathematical objects (such as, who knows, a non-hyperlinear group).
That was a good Masters project…thanks Julia!
The mathematics community lost a titan with the passing last month of Isadore “Is” Singer. Born in Detroit in 1924, Is was a visionary, transcending divisions between fields of mathematics as well as those between mathematics and quantum physics. He pursued deep questions and inspired others through his original research, wide-ranging lectures, mentoring of young researchers, and advocacy in the public sphere.
Mathematical discussions with Is were freewheeling. He wanted to get to the essence of the matter at hand, to understand and present new ideas in his own way. He constantly asked questions and provoked others to look deeper and more broadly. He knew no boundaries, which led him to forge deep connections between fields. Above all, he valued his freedom: the freedom to explore, the freedom to err, the freedom to create. A social mathematician by nature, he sought out and nurtured friendships with like-spirited collaborators.
Is came to mathematics relatively late, having majored in physics as an undergraduate at the University of Michigan before going off to war in 1944. Upon his return, he went to graduate school in mathematics at the University of Chicago to better understand relativity and quantum mechanics, only to discover that mathematics was his true intellectual home.
One of his first loves in mathematics, differential geometry, employs calculus to study smooth shapes, called manifolds. (By contrast, algebraic geometry uses algebra — the number side of mathematics — to study shapes defined by polynomial equations.) Local quantities, such as curvature, are the province of the differential geometer, and questions in the subject quickly lead to differential equations, whose solutions carry geometric meaning. Global questions — for example: How many holes of a particular dimension does a given manifold have? — are the province of topology, which uses a whole different set of tools. The province of analysis, another early focal point for Is in mathematics, involves function spaces and the study of differential equations that special functions satisfy.
But Is’ world did not admit provinces. Local differential geometry, global topology, and analysis are different aspects of geometry. Their interplay is where the real fun is, and it lies at the heart of his most celebrated achievement: the Atiyah-Singer index theorem. Very roughly, this result produces a count of solutions to special differential equations on curved spaces in topological terms. It led to immediate advances in topology, geometry, and analysis; surprisingly, it has ramifications in quantum physics as well.
Starting in 1950, Is spent most of his professional life at the Massachusetts Institute of Technology, save for a few years in the late 1970s and early 1980s at the University of California, Berkeley. After arriving at MIT, Is and fellow Midwesterner Warren Ambrose reshaped how we understand, teach, and carry out research in differential geometry. In the mid-1950s, Is spent a year at the Institute for Advanced Study in Princeton, New Jersey, then a locus of radical advances in topology and algebraic geometry. He encountered new vistas and met future collaborators, including Michael Atiyah, a kindred spirit whose mathematical homes in algebraic geometry and topology perfectly complemented Is’ expertise in differential geometry and analysis. Michael had a similar freewheeling, anything-goes working style, and he too constantly sought deeper meanings and connections. The Atiyah-Singer index theorem was born of this instinct, the direct result of a penetrating question Michael put to Is when the latter arrived for a sabbatical visit to the University of Oxford in 1962: Why is the A-roof genus of a spin manifold an integer?
The “A-roof genus” is a topological invariant of manifolds that is part of 1950s topology, guaranteed a priori only to be a rational number, a ratio of whole numbers. But topologists had proved that it is actually an integer for manifolds with a particular geometric feature: a spin structure. (Spinors, which are a kind of square root of vectors, had been introduced in algebra and also in physics as part of Paul Dirac’s theory of the electron. A spin structure on a manifold allows such square roots to exist.) Even though Michael and his collaborator Fritz Hirzebruch had proved the integrality as a consequence of their development of K-theory, an important innovation in topology, Michael still sought insight that the proofs did not provide. The desire for a more profound understanding resonated deeply with Is, who was immediately hooked by the question. A positive whole number should count something, and a whole number which can be negative may be the difference between two positive whole numbers, each of which counts something. In the Atiyah-Singer worldview, geometry is paramount, so whatever is being counted should be geometric.
Here, Is’ mastery of differential geometry came into play. Presumably inspired by the geometry of other integer invariants of the period, and based on his knowledge of Dirac’s theory, Is devised a version of the Dirac equation in differential geometry (it requires the spin structure which is at the heart of Michael’s question), and he conjectured that the A-roof genus measures the existence and uniqueness of solutions to that equation. This was Is’ response to the question. Michael immediately saw how to incorporate it into the K-theory that he and Fritz had developed, and he quickly had the statement of the general index theorem in hand. The first proof, which involves a large dose of analysis, was done within the year.
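In modern notation, the special case answering Michael’s question can be stated compactly (a standard formulation, not a quotation from the original papers): for a closed spin manifold $M$ with Dirac operator $D$,

```latex
\operatorname{ind}(D) \;=\; \dim \ker D^{+} \,-\, \dim \ker D^{-} \;=\; \hat{A}(M) \;=\; \int_{M} \hat{A}(TM),
```

so the A-roof genus, being a difference of two dimensions of solution spaces, is automatically an integer; the left-hand side is the analytic count, the right-hand side the topological one.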
The statement, the proof and the immediate applications which flowed from them brought together mathematical subdisciplines which in the prevailing ethos of the day often existed in non-interacting orbits. The circle of mathematical ideas around the Atiyah-Singer theorem grew in the ensuing years. When the scope expanded even further in the mid-1970s, Is again played a central role.
The impetus this time was the self-duality equation, a nonlinear differential equation that arises in quantum field theory. Is was very familiar with the so-called Wu-Yang dictionary, which related gauge theory in physics to the structures in differential geometry he had learned from Shiing-Shen Chern, one of his teachers at the University of Chicago a quarter century earlier. The dictionary had grown out of dialogues at Stony Brook University with Jim Simons, whom Is had mentored years before at MIT. (Simons went on to found the Simons Foundation, which also funds this editorially independent publication.) Is saw a role for geometry in illuminating what the physicists had discovered, and on a trip to Oxford in 1977 he posed the question, laying out the problem in a series of lectures. Those lectures and similar ones elsewhere inspired a burst of activity. Significantly, the nonlinearity of the equation led to a fascinating new web of mathematics in which algebraic geometry, topology, differential geometry and analysis are beautifully entwined, now with physics in the mix as well.
Is went further. He grasped early on, at a time when this was in no way apparent, that the “quantum” in quantum field theory is something that we geometers need to engage with directly and incorporate into our mathematics. It is difficult to convey how visionary Is was at that time. He led the way by grappling with the physics and presenting it on his own terms in courses on quantum field theory, supersymmetry and string theory at both Berkeley and MIT. His Tuesday seminar at Berkeley often featured physicists explaining the latest results, followed by a Chinese dinner at which our communal education in physics continued. Is’ position in the mathematics community and the force of his ideas brought more and more mathematicians on board. As a result of his leadership, by the mid-1980s there was a vigorous interaction between quantum field theorists and geometers. As Is foresaw, the relationship grew and deepened. It continues to bear fruit for both fields.
Is’ leadership in mathematics and science extended to policy and community, where he engaged at a high level. To mention just one achievement, in the early 1980s Is teamed up with Chern and Cal Moore to found a new home for mathematics research, the Mathematical Sciences Research Institute. The fact that research institutes throughout the world now emulate this model is a tribute to the founders’ vision.
Of course, Is also enjoyed many aspects of life beyond mathematics and physics. His devotion to his family, and theirs to him, was paramount. Ambrose taught him to love jazz, another lifelong passion. And there was always tennis — Is played vigorously and enthusiastically into his 90s, constantly working on his game.
In fact, his longtime coach and close friend Jeff Bearup conjured an image of Is that perfectly captures the effect he had on people. He would arrive at his tennis club — or it could easily have been a math or physics department or a conference — and everyone would turn, smile, and shout out with admiration and respect: “Is!”
Source: https://www.quantamagazine.org/isadoresingerbridgedmathandphysics20210304/
Quantum
Imaginary Numbers May Be Essential for Describing Reality
Published
3 days ago on March 3, 2021
Mathematicians were disturbed, centuries ago, to find that calculating the properties of certain curves demanded the seemingly impossible: numbers that, when multiplied by themselves, turn negative.
All the numbers on the number line, when squared, yield a positive number; 2^{2} = 4, and (−2)^{2} = 4. Mathematicians started calling those familiar numbers “real” and the apparently impossible breed of numbers “imaginary.”
Imaginary numbers, labeled with units of i (where, for instance, (2i)^{2} = −4), gradually became fixtures in the abstract realm of mathematics. For physicists, however, real numbers sufficed to quantify reality. Sometimes, so-called complex numbers, with both real and imaginary parts, such as 2 + 3i, have streamlined calculations, but in apparently optional ways. No instrument has ever returned a reading with an i.
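The arithmetic above is easy to check directly; for instance, Python has a built-in complex type in which `j` plays the role of i:

```python
# The imaginary unit squared is -1, so (2i)^2 = -4.
z = 2j
print(z * z)                 # (-4+0j)

# A complex number times its conjugate is real: (2+3i)(2-3i) = 4 + 9 = 13.
print((2 + 3j) * (2 - 3j))   # (13+0j)
```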
Yet physicists may have just shown for the first time that imaginary numbers are, in a sense, real.
A group of quantum theorists designed an experiment whose outcome depends on whether nature has an imaginary side. Provided that quantum mechanics is correct — an assumption few would quibble with — the team’s argument essentially guarantees that complex numbers are an unavoidable part of our description of the physical universe.
“These complex numbers, usually they’re just a convenient tool, but here it turns out that they really have some physical meaning,” said Tamás Vértesi, a physicist at the Institute for Nuclear Research at the Hungarian Academy of Sciences who, years ago, argued the opposite. “The world is such that it really requires these complex” numbers, he said.
In quantum mechanics, the behavior of a particle or group of particles is encapsulated by a wavelike entity known as the wave function, or ψ. The wave function forecasts possible outcomes of measurements, such as an electron’s possible position or momentum. The so-called Schrödinger equation describes how the wave function changes in time — and this equation features an i.
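In its standard time-dependent form (with $\hbar$ the reduced Planck constant and $\hat{H}$ the Hamiltonian), the equation reads:

```latex
i\hbar \,\frac{\partial}{\partial t}\,\psi(x,t) \;=\; \hat{H}\,\psi(x,t)
```

The factor of $i$ on the left is the one Schrödinger hoped to eliminate.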
Physicists have never been entirely sure what to make of this. When Erwin Schrödinger derived the equation that now bears his name, he hoped to scrub the i out. “What is unpleasant here, and indeed directly to be objected to, is the use of complex numbers,” he wrote to Hendrik Lorentz in 1926. “ψ is surely a fundamentally real function.”
Schrödinger’s desire was certainly plausible from a mathematical perspective: Any property of complex numbers can be captured by combinations of real numbers plus new rules to keep them in line, opening up the mathematical possibility of an all-real version of quantum mechanics.
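A minimal sketch of this “real numbers plus new rules” idea: represent a complex number a + bi as the ordered pair (a, b) and build the multiplication rule in by hand (the helper name `cmul` is ours, purely for illustration):

```python
def cmul(z, w):
    """Multiply two complex numbers represented as real pairs (a, b) ~ a + bi.

    The rule (a, b)*(c, d) = (ac - bd, ad + bc) encodes i^2 = -1
    using only real arithmetic.
    """
    a, b = z
    c, d = w
    return (a * c - b * d, a * d + b * c)

# The pair (0, 1) plays the role of i: squaring it gives (-1, 0), i.e. -1.
print(cmul((0, 1), (0, 1)))  # (-1, 0)
```

Nothing here is beyond real arithmetic, which is why an all-real reformulation is always possible in principle — the question the new research addresses is whether such a reformulation can reproduce every physical prediction.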
Indeed, the translation proved simple enough that Schrödinger almost immediately discovered what he believed to be the “true wave equation,” one that eschewed i. “Another heavy stone has been rolled away from my heart,” he wrote to Max Planck less than a week after his letter to Lorentz. “It all came out exactly as one would have it.”
But using real numbers to simulate complex quantum mechanics is a clunky and abstract exercise, and Schrödinger recognized that his allreal equation was too cumbersome for daily use. Within a year he was describing wave functions as complex, just as physicists think of them today.
“Anybody wanting to get work done uses the complex description,” said Matthew McKague, a quantum computer scientist at the Queensland University of Technology in Australia.
Yet the real formulation of quantum mechanics has lingered as evidence that the complex version is merely optional. Teams including Vértesi and McKague, for instance, showed in 2008 and 2009 that — without an i in sight — they could perfectly predict the outcome of a famous quantum physics experiment known as the Bell test.
The new research, which was posted on the scientific preprint server arxiv.org in January, finds that those earlier Bell test proposals just didn’t go far enough to break the real-number version of quantum physics. It proposes a more intricate Bell experiment that seems to demand complex numbers.
The earlier research led people to conclude that “in quantum theory complex numbers are only convenient, but not necessary,” wrote the authors, who include Marc-Olivier Renou of the Institute of Photonic Sciences in Spain and Nicolas Gisin of the University of Geneva. “Here we prove this conclusion wrong.”
The group declined to discuss their paper publicly because it is still under peer review.
The Bell test demonstrates that pairs of far-apart particles can share information in a single “entangled” state. If a quarter in Maine could become entangled with one in Oregon, for instance, repeated tosses would show that whenever one coin landed on heads, its distant partner would, bizarrely, show tails. Similarly, in the standard Bell test experiment, entangled particles are sent to two physicists, nicknamed Alice and Bob. They measure the particles and, upon comparing measurements, find that the results are correlated in a way that can’t be explained unless information is shared between the particles.
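A small numerical sketch of what “can’t be explained” means quantitatively (our own illustration, not the article’s setup): for an entangled singlet pair, the standard CHSH combination of correlations reaches 2√2 ≈ 2.83, while any classical shared-information model is capped at 2.

```python
import numpy as np

# Pauli matrices; a measurement along angle t in the x-z plane is
# A(t) = cos(t) Z + sin(t) X.
Z = np.array([[1, 0], [0, -1]], dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)

def meas(t):
    return np.cos(t) * Z + np.sin(t) * X

# Singlet state (|01> - |10>) / sqrt(2)
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def E(a, b):
    # Correlation <psi| A(a) (x) B(b) |psi> between Alice's and Bob's outcomes.
    return np.real(psi.conj() @ np.kron(meas(a), meas(b)) @ psi)

# CHSH combination at the standard optimal angles.
a, ap, b, bp = 0.0, np.pi / 2, np.pi / 4, -np.pi / 4
S = E(a, b) + E(a, bp) + E(ap, b) - E(ap, bp)
print(abs(S))  # about 2.828, i.e. 2*sqrt(2), beating the classical bound of 2
```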
The upgraded experiment adds a second source of particle pairs. One pair goes to Alice and Bob. The second pair, originating from a different place, goes to Bob and a third party, Charlie. In quantum mechanics with complex numbers, the particles Alice and Charlie receive don’t need to be entangled with each other.
No real-number description, however, can replicate the pattern of correlations that the three physicists will measure. The new paper shows that treating the system as real requires introducing extra information that usually resides in the imaginary part of the wave function. Alice’s, Bob’s, and Charlie’s particles must all share this information in order to reproduce the same correlations as standard quantum mechanics. And the only way to accommodate this sharing is for all of their particles to be entangled with one another.
In the previous incarnations of the Bell test, Alice and Bob’s electrons came from a single source, so the extra information they had to carry in the real-number description wasn’t a problem. But in the two-source Bell test where Alice’s and Charlie’s particles come from independent sources, the fictitious three-party entanglement doesn’t make physical sense.
Even without recruiting an Alice, a Bob and a Charlie to actually perform the experiment that the new paper imagines, most researchers feel extremely confident that standard quantum mechanics is correct and that the experiment would therefore find the expected correlations. If so, then real numbers alone cannot fully describe nature.
“The paper in fact establishes that there are genuine, complex quantum systems,” said Valter Moretti, a mathematical physicist at the University of Trento in Italy. “This result is quite unexpected to me.”
Nevertheless, odds are that the experiment will happen someday. It wouldn’t be simple, but no technical obstacles exist. And a deeper understanding of the behavior of more complicated quantum networks will grow only more relevant as researchers continue to link numerous Alices, Bobs and Charlies over emerging quantum internets.
“We therefore trust that a disproof of real quantum physics will arrive in a near future,” the authors wrote.
Source: https://www.quantamagazine.org/imaginarynumbersmaybeessentialfordescribingreality20210303/
Quantum
New Season of The Joy of x Podcast Explores Scientists’ Inner Lives
Published
4 days ago on March 2, 2021
I have a confession. As a producer working on the new season of the Joy of x podcast, sometimes I find myself editing episodes and thinking: Who let me in the room? It’s almost as if I’ve wandered into some hotel bar, and the last open seat is next to two folks deep in conversation. I’m listening in as they talk with intensity, passion. But they aren’t a romantic couple — they are sharing intimate details of their professional lives. Tales full of exploration and discovery. Of course, this scene is playing out in my imagination; the conversation I’m eavesdropping on is only in my headphones.
Our host, the mathematician and author Steven Strogatz, has a voracious intellectual curiosity, but it’s his warm and empathetic nature that makes listening to these interviews such a rewarding, even moving experience.
Every episode, I find myself learning something profound. Like when one of this season’s guests, the chemical nanoengineer Sharon Glotzer, talked about how, under the right circumstances, an “amorphous blob” of material can spontaneously change into “something with exquisite order to it.” Her professional life has been a quest to understand the interplay of forces in matter that create and destroy order.
Or when our guest Frank Wilczek, the Nobel Prize-winning physicist, marvels at the puzzles of subatomic forces: “What holds the nucleus together when electromagnetism wants to blow it apart?” he asked. These are explorers who have charted some of the universe’s great unknowns, and I’m riding shotgun with them as they take Steve on a tour through key moments in their journeys.
But right now, my second grader is rolling on the floor, screaming. Her teacher is shouting her name, trying to get her to return to the class Zoom, to unmute herself, to turn on her camera. This is a daily occurrence in our house.
Yours too, perhaps. We are not alone. Life during the pandemic has not only stressed and isolated us, it has heaped new distractions on us, making it harder to carve out time and mental space to reflect and learn. Kids aren’t the only ones in danger of falling behind intellectually.
That’s part of why I take such comfort in Steve’s intimate and lively conversations with his guests. His genuine curiosity about them and their work — and their sincerity in sharing about it — creates a calm but invigorating space for effortless learning. For those of us who love science, there is something deeply comforting about pressing play on The Joy of x.
But there’s more to it than that. We’ve been witnessing a global debate about the value and process of science playing out in our culture. The collective discussion of the SARS-CoV-2 virus and how it is transmitted has provided evidence for another crisis: The public’s scientific literacy and ability to engage with this crisis is fragile. For the last year, our collective hopes for ending the pandemic have been pinned on vaccine development, even as 31 million people follow anti-vaccine groups.
It strikes me that there’s never been a better time to get to know scientists — who they are, what makes them tick. Beyond teaching me about science, The Joy of x, like nothing else I’ve ever heard, makes the argument that a life in science is a life well spent. Steve Strogatz and his 12 guests this season are inspirational. The future needs scientists — of all kinds. Perhaps more than ever, we need people willing to dedicate their lives to this ambitious, optimistic, desperately important pursuit of truth.
You can subscribe to the podcast and listen to episodes from the first season on the Quanta Magazine website, Apple Podcasts, Google Podcasts, Spotify or wherever you get your podcasts. The first full episode is being posted today, and new episodes will premiere every Tuesday.
Source: https://www.quantamagazine.org/newseasonofthejoyofxpodcastexploresscientistsinnerlives20210302/
Quantum
Self-testing with finite statistics enabling the certification of a quantum network link
Published
4 days ago on March 2, 2021
Quantum 5, 401 (2021).
https://doi.org/10.22331/q20210302401
Self-testing is a method to certify devices from the result of a Bell test. Although examples of noise-tolerant self-testing are known, it is not clear how to deal efficiently with a finite number of experimental trials to certify the average quality of a device without assuming that it behaves identically at each run. As a result, existing self-testing results with finite statistics have been limited to guaranteeing the proper working of a device in just one of all experimental trials, thereby limiting their practical applicability. We here derive a method to certify through self-testing that a device produces states on average close to a Bell state without assumptions on the actual state at each run. Thus the method is free of the I.I.D. (independent and identically distributed) assumption. Applying this new analysis to the data from a recent loophole-free Bell experiment, we demonstrate the successful distribution of Bell states over 398 meters with an average fidelity of $\geq$ 55.50% at a confidence level of 99%. Being based on a Bell test free of detection and locality loopholes, our certification is evidently device-independent, that is, it does not rely on trust in the devices or knowledge of how the devices work. This guarantees that our link can be integrated into a quantum network for performing long-distance quantum communications with security guarantees that are independent of the details of the actual implementation.
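To make the certified quantity concrete: the fidelity reported above measures overlap with a Bell state, F = ⟨Φ+|ρ|Φ+⟩. An illustrative sketch (ours, not the paper’s protocol), evaluating this for a white-noise-mixed Bell state:

```python
import numpy as np

# Bell state |Phi+> = (|00> + |11>) / sqrt(2) and its projector.
phi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
proj = np.outer(phi, phi)

def fidelity(v):
    """Fidelity F = <Phi+| rho |Phi+> of an isotropic state
    rho = v |Phi+><Phi+| + (1 - v) I/4, with visibility v in [0, 1]."""
    rho = v * proj + (1 - v) * np.eye(4) / 4
    return float(phi @ rho @ phi)

print(fidelity(1.0))  # 1.0  for a perfect Bell state
print(fidelity(0.0))  # 0.25 for the maximally mixed state
```

A certified average fidelity above 50% is meaningful because it exceeds what any unentangled (separable) two-qubit state can achieve with respect to a Bell state.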
Source: https://quantumjournal.org/papers/q20210302401/