Seven reasons why I chose to do science in the government

When I was in college, people asked me what I wanted to do with my life. I’d answer, “I want to be of use and to learn always.” The question resurfaced in grad school and at the beginning of my postdoc. I answered that I wanted to do extraordinary science that I’d steer. Academia attracted me most, but I wouldn’t discount alternatives.

Last spring, I accepted an offer to build my research group as a member of NIST, the National Institute of Standards and Technology in the U.S. government. My group will be headquartered on the University of Maryland campus, nestled amongst quantum and interdisciplinary institutes. I’m grateful to be joining NIST, and I’m surprised. I never envisioned myself working for the government. I could have accepted an assistant professorship (and I was extremely grateful for the offers), but NIST swept me off my feet. Here are seven reasons why, for other early-career researchers contemplating possibilities.

1) The science. One event illustrates this reason: The notice of my job offer came from NIST Maryland’s friendly neighborhood Nobel laureate. NIST and the university invested in quantum science years before everyone and her uncle began scrambling to create a quantum institute. That investment has flowered, including in reason (2).

2) The research environment. I wouldn’t say that I have a love affair with the University of Maryland. But I’ve found myself visiting every few years (sometimes blogging about the experience). Why? Much of the quantum community passes through Maryland. Seminars fill the week, visitors fill many offices, and conferences happen once or twice a year. Theorists and experimentalists mingle over lunch and collaborate. 

The university shares two quantum institutes with NIST: QuICS (the Joint Center for Quantum Information and Computer Science) and the JQI (the Joint Quantum Institute). My group will be based at the former and affiliated with the latter. We’ll also belong to IPST (the university’s Institute for Physical Science and Technology), a hub for interdisciplinarity and thermodynamics. When visiting a university, I ask how much researchers collaborate across department lines. I usually hear an answer along the lines of “We value interdisciplinarity, and we wish that we had more of it, but we don’t have much.” Few universities ingrain interdisciplinarity into their bones by dedicating institutes to it.

Maryland’s quantum and thermodynamics communities bustle and produce. They grant NIST researchers an academic environment, independence to shape their research paths, and the freedom to participate in the broader scientific community. If weary of the three institutes mentioned above, one can explore the university’s Quantum Technology Center and Condensed-Matter-Theory Center.

3) The people. The first Maryland quantum researcher I met was the friendly neighborhood Nobel laureate, Bill Phillips. Bill was presenting a keynote address at Dartmouth College’s physics department, where I’d earned my bachelor’s degree. Bill said that he’d attended a small liberal-arts college before pursuing his PhD at MIT. During the question-and-answer session, I welcomed him back to a small liberal-arts college. How, I asked, had he benefited from the liberal arts? Juniata College, Bill said, had made him a good person. MIT had helped make him a good scientist. Since then, I’ve kept in occasional contact with Bill, we’ve attended each other’s talks, and I’ve watched him exhibit the most curiosity I’ve seen in almost anyone. What more could one wish for in a colleague?

An equality used across thermodynamics bears Chris Jarzynski’s last name, but he never calls the equality what everyone else does. I benefited from Chris’s mentorship during my PhD, despite our working on opposite sides of the country. His awards include not only membership in the National Academy of Sciences, but also an Outstanding Referee designation, for reviewing so many journal submissions in service to the scientific community. Chris calls IPST, the university’s interdisciplinary and thermodynamic institute, his intellectual home. That recommendation suffices for me.

I’ve looked up to Alexey Gorshkov since beginning my PhD. I keep an eye out for Mohammad Hafezi’s and Pratyush Tiwary’s papers. A quantum researcher couldn’t ignore Chris Monroe’s papers if she tried. Postdoctoral and graduate fellowships stock the community with energetic young researchers. Three energetic researchers are joining QuICS as senior Fellows around the time I am. I’ll spare you the rest of my sources of inspiration.

4) The teaching. Most faculty members at R1 research universities teach two to three courses per year. NIST members can teach once every other year. I value teaching and appreciate how teaching benefits not only students, but also instructors. I respect teachers and remain grateful for their influence. I’m grateful to have received reports that I teach well. Because I’ve acquired some skill at communicating, people tend to assume that I adore teaching. I adore presenting talks, but I don’t feel a calling to teach. Mentors have exhorted me to pursue what excites me most and what only I can accomplish. I feel called to do research and to mentor younger researchers. 

Furthermore, if I had to teach much, I wouldn’t have time for writing anything other than papers or grants, such as blog posts. Some of you readers have astonished me with accounts of what my writing means to you. You’ve approached me at conferences, buttonholed me after seminars, and emailed. I’m grateful (as I keep saying, but I mean what I say) for the opportunity to touch lives across the world. I hope to inspire students to take quantum, information-theory, and thermodynamics courses (including the quantum-thermodynamics course that I’d like to teach occasionally). Instructors teach quantum courses throughout the world. No one else writes about Egyptian sarcophagi and the second law of thermodynamics, to my knowledge, or the Russian writer Alexander Pushkin and reproductive science. Perhaps no one should. But, since no one else does, I have to.¹

5) The funding. Faculty members complain that they do little apart from applying for grants. Grants fund students, postdocs, travel, summer salaries, equipment, visitors, and workshops. NIST provides principal investigators with research funding every year. Not all the funding that some groups need, but enough to free up time to undertake the research that principal investigators love.

6) The lack of tenure stress. Many junior faculty members fear that they won’t achieve tenure. The fear pushes them away from taking risks in their research programs. This month, I embarked upon a risk that I know I should take but that, had I been facing an assistant professorship, would have given me pause.

7) The acronyms. Above, I introduced NIST (the National Institute of Standards and Technology), UMD (the University of Maryland), QuICS (the Joint Center for Quantum Information and Computer Science), the JQI (the Joint Quantum Institute), and IPST (the Institute for Physical Science and Technology). I’ll also have an affiliation with UMIACS (the University of Maryland Institute for Advanced Computer Science). Where else can one acquire six acronyms? I adore collecting affiliations, which force me to cross intellectual borders. I also enjoy the opportunity to laugh at my CV.

I’ve deferred joining NIST until summer 2021, to complete my postdoctoral fellowship at the Harvard-Smithsonian Institute for Theoretical Atomic, Molecular, and Optical Physics (an organization that needs its acronym, ITAMP, as much as “the Joint Center for Quantum Information and Computer Science” does). After that, please stop by. If you’d like to join my group, please email: I’m accepting applications for PhD and postdoctoral positions this fall. See you in Maryland next year.

¹Also, blogging benefits my research. I’ll leave the explanation for another post.

I credit my husband with the Nesquick-NIST/QuICS parallel.

Source: https://quantumfrontiers.com/2020/10/25/seven-reasons-why-i-chose-to-do-science-in-the-government/

What Is a Particle?

Given that everything in the universe reduces to particles, a question presents itself: What are particles?

The easy answer quickly shows itself to be unsatisfying. Namely, electrons, photons, quarks and other “fundamental” particles supposedly lack substructure or physical extent. “We basically think of a particle as a pointlike object,” said Mary Gaillard, a particle theorist at the University of California, Berkeley, who predicted the masses of two types of quarks in the 1970s. And yet particles have distinct traits, such as charge and mass. How can a dimensionless point bear weight?

“We say they are ‘fundamental,’” said Xiao-Gang Wen, a theoretical physicist at the Massachusetts Institute of Technology. “But that’s just a [way to say] to students, ‘Don’t ask! I don’t know the answer. It’s fundamental; don’t ask anymore.’”

With any other object, the object’s properties depend on its physical makeup — ultimately, its constituent particles. But those particles’ properties derive not from constituents of their own but from mathematical patterns. As points of contact between mathematics and reality, particles straddle both worlds with an uncertain footing.

When I recently asked a dozen particle physicists what a particle is, they gave remarkably diverse descriptions. They emphasized that their answers don’t conflict so much as capture different facets of the truth. They also described two major research thrusts in fundamental physics today that are pursuing a more satisfying, all-encompassing picture of particles.

“‘What is a particle?’ indeed is a very interesting question,” said Wen. “Nowadays there is progress in this direction. I should not say there’s a unified point of view, but there’s several different points of view, and all look interesting.”

The quest to understand nature’s fundamental building blocks began with the ancient Greek philosopher Democritus’s assertion that such things exist. Two millennia later, Isaac Newton and Christiaan Huygens debated whether light is made of particles or waves. The discovery of quantum mechanics some 250 years after that proved both luminaries right: Light comes in individual packets of energy known as photons, which behave as both particles and waves.

Wave-particle duality turned out to be a symptom of a deep strangeness. Quantum mechanics revealed to its discoverers in the 1920s that photons and other quantum objects are best described not as particles or waves but by abstract “wave functions” — evolving mathematical functions that indicate a particle’s probability of having various properties. The wave function representing an electron, say, is spatially spread out, so that the electron has possible locations rather than a definite one. But somehow, strangely, when you stick a detector in the scene and measure the electron’s location, its wave function suddenly “collapses” to a point, and the particle clicks at that position in the detector.

A particle is thus a collapsed wave function. But what in the world does that mean? Why does observation cause a distended mathematical function to collapse and a concrete particle to appear? And what decides the measurement’s outcome? Nearly a century later, physicists have no idea.

The picture soon got even stranger. In the 1930s, physicists realized that the wave functions of many individual photons collectively behave like a single wave propagating through conjoined electric and magnetic fields — exactly the classical picture of light discovered in the 19th century by James Clerk Maxwell. These researchers found that they could “quantize” classical field theory, restricting fields so that they could only oscillate in discrete amounts known as the “quanta” of the fields. In addition to  photons — the quanta of light — Paul Dirac and others discovered that the idea could be extrapolated to electrons and everything else: According to quantum field theory, particles are excitations of quantum fields that fill all of space.
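
To make “oscillate only in discrete amounts” concrete, here is the standard textbook statement (added for illustration; it is not quoted from the article’s sources): a single field mode of angular frequency ω can hold energy only in whole-number steps above its minimum, and each step up the ladder adds one quantum, which for the electromagnetic field is one photon.

```latex
% Allowed energies of one quantized field mode of angular frequency \omega;
% each unit increase of n adds one quantum (one photon, for light).
E_n = \hbar\omega\left(n + \tfrac{1}{2}\right), \qquad n = 0, 1, 2, \dots
```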

In positing the existence of these more fundamental fields, quantum field theory stripped particles of status, characterizing them as mere bits of energy that set fields sloshing. Yet despite the ontological baggage of omnipresent fields, quantum field theory became the lingua franca of particle physics because it allows researchers to calculate with extreme precision what happens when particles interact — particle interactions being, at base level, the way the world is put together.

As physicists discovered more of nature’s particles and their associated fields, a parallel perspective developed. The properties of these particles and fields appeared to follow numerical patterns. By extending these patterns, physicists were able to predict the existence of more particles. “Once you encode the patterns you observe into the mathematics, the mathematics is predictive; it tells you more things you might observe,” explained Helen Quinn, an emeritus particle physicist at Stanford University.

The patterns also suggested a more abstract and potentially deeper perspective on what particles actually are.

Mark Van Raamsdonk remembers the beginning of the first class he took on quantum field theory as a Princeton University graduate student. The professor came in, looked out at the students, and asked, “What is a particle?”

“An irreducible representation of the Poincaré group,” a precocious classmate answered.

Taking the apparently correct definition to be general knowledge, the professor skipped any explanation and launched into an inscrutable series of lectures. “That entire semester I didn’t learn a single thing from the course,” said Van Raamsdonk, who’s now a respected theoretical physicist at the University of British Columbia.

It’s the standard deep answer of people in the know: Particles are “representations” of “symmetry groups,” which are sets of transformations that can be done to objects.

Take, for example, an equilateral triangle. Rotating it by 120 or 240 degrees, or reflecting it across the line from each corner to the midpoint of the opposite side, or doing nothing, all leave the triangle looking the same as before. These six symmetries form a group. The group can be expressed as a set of mathematical matrices — arrays of numbers that, when multiplied by coordinates of an equilateral triangle, return the same coordinates. Such a set of matrices is a “representation” of the symmetry group.
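
As a concrete check of this construction (a minimal sketch added here, not taken from the article), the six symmetries can be written as 2-by-2 matrices and verified numerically: each matrix sends the triangle’s corners back onto the corners, and the product of any two matrices in the set lands back in the set, which is exactly what makes the set a representation of the group.

```python
import numpy as np
from itertools import product

def rotation(theta):
    """2x2 matrix rotating the plane by angle theta (radians)."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

# Reflection across the vertical axis (the line through the top corner).
reflection = np.array([[-1.0, 0.0],
                       [ 0.0, 1.0]])

# The six symmetries: rotations by 0, 120 and 240 degrees,
# plus each of those rotations composed with the reflection.
rotations = [rotation(2 * np.pi * k / 3) for k in range(3)]
symmetries = rotations + [R @ reflection for R in rotations]

# Corners of an equilateral triangle centered at the origin, one corner on top.
corners = [rotation(2 * np.pi * k / 3) @ np.array([0.0, 1.0]) for k in range(3)]

def maps_corners_to_corners(matrices):
    """Every matrix sends each corner to (approximately) another corner."""
    def is_corner(p):
        return any(np.allclose(p, c) for c in corners)
    return all(is_corner(m @ c) for m, c in product(matrices, corners))

def closed_under_multiplication(matrices):
    """The product of any two symmetries is again one of the six."""
    def in_set(m):
        return any(np.allclose(m, s) for s in matrices)
    return all(in_set(a @ b) for a, b in product(matrices, matrices))

print(maps_corners_to_corners(symmetries))      # True
print(closed_under_multiplication(symmetries))  # True
```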

Similarly, electrons, photons and other fundamental particles are objects that essentially stay the same when acted on by a certain group. Namely, particles are representations of the Poincaré group: the group of 10 ways of moving around in the space-time continuum. Objects can shift in three spatial directions or shift in time; they can also rotate in three directions or receive a boost in any of those directions. In 1939, the mathematical physicist Eugene Wigner identified particles as the simplest possible objects that can be shifted, rotated and boosted.

For an object to transform nicely under these 10 Poincaré transformations, he realized, it must have a certain minimal set of properties, and particles have these properties. One is energy. Deep down, energy is simply the property that stays the same when the object shifts in time. Momentum is the property that stays the same as the object moves through space.

A third property is needed to specify how particles change under combinations of spatial rotations and boosts (which, together, are rotations in space-time). This key property is “spin.” At the time of Wigner’s work, physicists already knew particles have spin, a kind of intrinsic angular momentum that determines many aspects of particle behavior, including whether they act like matter (as electrons do) or as a force (like photons). Wigner showed that, deep down, “spin is just a label that particles have because the world has rotations,” said Nima Arkani-Hamed, a particle physicist at the Institute for Advanced Study in Princeton, New Jersey.

Different representations of the Poincaré group are particles with different numbers of spin labels, or degrees of freedom that are affected by rotations. There are, for example, particles with three spin degrees of freedom. These particles rotate in the same way as familiar 3D objects. All matter particles, meanwhile, have two spin degrees of freedom, nicknamed “spin-up” and “spin-down,” which rotate differently. If you rotate an electron by 360 degrees, its state will be inverted, just as an arrow, when moved around a 2D Möbius strip, comes back around pointing the opposite way.
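
The 360-degree sign flip has a compact standard formula (a textbook identity, added here for concreteness rather than taken from the article): rotating a spin-1/2 state all the way around multiplies it by minus one, and only two full turns restore it exactly.

```latex
% Rotating a spin-1/2 state by angle \theta about the z-axis:
R(\theta)\,\lvert\psi\rangle = e^{-i\theta\sigma_z/2}\,\lvert\psi\rangle,
\qquad R(2\pi) = -\mathbb{1}, \qquad R(4\pi) = \mathbb{1}.
```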

Elementary particles with one and five spin labels also appear in nature. Only a representation of the Poincaré group with four spin labels seems to be missing.

The correspondence between elementary particles and representations is so neat that some physicists — like Van Raamsdonk’s professor — equate them. Others see this as a conflation. “The representation is not the particle; the representation is a way of describing certain properties of the particle,” said Sheldon Glashow, a Nobel Prize-winning particle theorist and professor emeritus at Harvard University and Boston University. “Let us not confuse the two.”

Whether there’s a distinction or not, the relationship between particle physics and group theory grew both richer and more complicated over the course of the 20th century. The discoveries showed that elementary particles don’t just have the minimum set of labels needed to navigate space-time; they have extra, somewhat superfluous labels as well.

Particles with the same energy, momentum and spin behave identically under the 10 Poincaré transformations, but they can differ in other ways. For instance, they can carry different amounts of electric charge. As “the whole particle zoo” (as Quinn put it) was discovered in the mid-20th century, additional distinctions between particles were revealed, necessitating new labels dubbed “color” and “flavor.”

Just as particles are representations of the Poincaré group, theorists came to understand that their extra properties reflect additional ways they can be transformed. But instead of shifting objects in space-time, these new transformations are more abstract; they change particles’ “internal” states, for lack of a better word.

Take the property known as color: In the 1960s, physicists ascertained that quarks, the elementary constituents of atomic nuclei, exist in a probabilistic combination of three possible states, which they nicknamed “red,” “green” and “blue.” These states have nothing to do with actual color or any other perceivable property. It’s the number of labels that matters: Quarks, with their three labels, are representations of a group of transformations called SU(3) consisting of the infinitely many ways of mathematically mixing the three labels.
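
In symbols (standard notation, added here as an illustration), the three color labels span a three-component vector, and SU(3) is the set of 3-by-3 unitary matrices with determinant one that remix those components:

```latex
% A quark's color state is a complex 3-vector over the labels red, green, blue;
% SU(3) elements are the unitary, determinant-one matrices that remix it.
\lvert q\rangle = c_r\lvert r\rangle + c_g\lvert g\rangle + c_b\lvert b\rangle,
\qquad U \in \mathrm{SU}(3):\; U^{\dagger}U = \mathbb{1},\; \det U = 1,
\qquad \lvert q\rangle \mapsto U\lvert q\rangle.
```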

While particles with color are representations of the symmetry group SU(3), particles with the internal properties of flavor and electric charge are representations of the symmetry groups SU(2) and U(1), respectively. Thus, the Standard Model of particle physics — the quantum field theory of all known elementary particles and their interactions — is often said to represent the symmetry group SU(3) × SU(2) × U(1), consisting of all combinations of the symmetry operations in the three subgroups. (That particles also transform under the Poincaré group is apparently too obvious to even mention.)

The Standard Model reigns half a century after its development. Yet it’s an incomplete description of the universe. Crucially, it’s missing the force of gravity, which quantum field theory can’t fully handle. Albert Einstein’s general theory of relativity separately describes gravity as curves in the space-time fabric. Moreover, the Standard Model’s three-part SU(3) × SU(2) × U(1) structure raises questions. To wit: “Where the hell did all this come from?” as Dimitri Nanopoulos put it. “OK, suppose it works,” continued Nanopoulos, a particle physicist at Texas A&M University who was active during the Standard Model’s early days. “But what is this thing? It cannot be three groups there; I mean, ‘God’ is better than this — God in quotation marks.”

In the 1970s, Glashow, Nanopoulos and others tried fitting the SU(3), SU(2) and U(1) symmetries inside a single, larger group of transformations, the idea being that particles were representations of a single symmetry group at the beginning of the universe. (As symmetries broke, complications set in.) The most natural candidate for such a “grand unified theory” was a symmetry group called SU(5), but experiments soon ruled out that option. Other, less appealing possibilities remain in play.

Researchers placed even higher hopes in string theory: the idea that if you zoomed in enough on particles, you would see not points but one-dimensional vibrating strings. You would also see six extra spatial dimensions, which string theory says are curled up at every point in our familiar 4D space-time fabric. The geometry of the small dimensions determines the properties of strings and thus the macroscopic world. “Internal” symmetries of particles, like the SU(3) operations that transform quarks’ color, obtain physical meaning: These operations map, in the string picture, onto rotations in the small spatial dimensions, just as spin reflects rotations in the large dimensions. “Geometry gives you symmetry gives you particles, and all of this goes together,” Nanopoulos said.

However, if any strings or extra dimensions exist, they’re too small to be detected experimentally. In their absence, other ideas have blossomed. Over the past decade, two approaches in particular have attracted the brightest minds in contemporary fundamental physics. Both approaches refresh the picture of particles yet again.

The first of these research efforts goes by the slogan “it-from-qubit,” which expresses the hypothesis that everything in the universe — all particles, as well as the space-time fabric those particles stud like blueberries in a muffin — arises out of quantum bits of information, or qubits. Qubits are probabilistic combinations of two states, labeled 0 and 1. (Qubits can be stored in physical systems just as bits can be stored in transistors, but you can think of them more abstractly, as information itself.) When there are multiple qubits, their possible states can get tangled up, so that each one’s state depends on the states of all the others. Through these contingencies, a small number of entangled qubits can encode a huge amount of information.
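
Written out in the standard notation (an illustration added here, using the 0 and 1 labels from the paragraph above), a single qubit is a weighted combination of its two states, an entangled pair is a joint state that cannot be split into two separate single-qubit descriptions, and a register of n qubits carries 2^n such weights, which is where the large information capacity comes from:

```latex
% One qubit: a weighted combination of the states 0 and 1.
\lvert\psi\rangle = \alpha\lvert 0\rangle + \beta\lvert 1\rangle,
\qquad \lvert\alpha\rvert^{2} + \lvert\beta\rvert^{2} = 1.

% Two entangled qubits (a Bell pair): neither qubit has a state of its own.
\lvert\Phi^{+}\rangle = \tfrac{1}{\sqrt{2}}\bigl(\lvert 00\rangle + \lvert 11\rangle\bigr).

% n qubits: a general joint state carries 2^n complex weights.
\lvert\Psi\rangle = \sum_{x \in \{0,1\}^{n}} c_x \lvert x\rangle.
```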

In the it-from-qubit conception of the universe, if you want to understand what particles are, you first have to understand space-time. In 2010, Van Raamsdonk, a member of the it-from-qubit camp, wrote an influential essay boldly declaring what various calculations suggested. He argued that entangled qubits might stitch together the space-time fabric.

Calculations, thought experiments and toy examples going back decades suggest that space-time has “holographic” properties: It’s possible to encode all information about a region of space-time in degrees of freedom in one fewer dimension — often on the region’s surface. “In the last 10 years, we’ve learned a lot more about how this encoding works,” Van Raamsdonk said.

What’s most surprising and fascinating to physicists about this holographic relationship is that space-time is bendy because it includes gravity. But the lower-dimensional system that encodes information about that bendy space-time is a purely quantum system that lacks any sense of curvature, gravity or even geometry. It can be thought of as a system of entangled qubits.

Under the it-from-qubit hypothesis, the properties of space-time — its robustness, its symmetries — essentially come from the way 0s and 1s are braided together. The long-standing quest for a quantum description of gravity becomes a matter of identifying the qubit entanglement pattern that encodes the particular kind of space-time fabric found in the actual universe.

So far, researchers know much more about how this all works in toy universes that have negatively curved, saddle-shaped space-time — mostly because they’re relatively easy to work with. Our universe, by contrast, is positively curved. But researchers have found, to their surprise, that anytime negatively curved space-time pops up like a hologram, particles come along for the ride. That is, whenever a system of qubits holographically encodes a region of space-time, there are always qubit entanglement patterns that correspond to localized bits of energy floating in the higher-dimensional world.

Importantly, algebraic operations on the qubits, when translated in terms of space-time, “behave just like rotations acting on the particles,” Van Raamsdonk said. “You realize there’s this picture being encoded by this nongravitational quantum system. And somehow in that code, if you can decode it, it’s telling you that there are particles in some other space.”

The fact that holographic space-time always has these particle states is “actually one of the most important things that distinguishes these holographic systems from other quantum systems,” he said. “I think nobody really understands the reason why holographic models have this property.”

It’s tempting to picture qubits having some sort of spatial arrangement that creates the holographic universe, just as familiar holograms project from spatial patterns. But in fact, the qubits’ relationships and interdependencies might be far more abstract, with no real physical arrangement at all. “You don’t need to talk about these 0s and 1s living in a particular space,” said Netta Engelhardt, a physicist at MIT who recently won a New Horizons in Physics Prize for calculating the quantum information content of black holes. “You can talk about the abstract existence of 0s and 1s, and how an operator might act on 0s and 1s, and these are all much more abstract mathematical relations.”

There’s clearly more to understand. But if the it-from-qubit picture is right, then particles are holograms, just like space-time. Their truest definition is in terms of qubits.

Another camp of researchers who call themselves “amplitudeologists” seeks to return the spotlight to the particles themselves.

These researchers argue that quantum field theory, the current lingua franca of particle physics, tells far too convoluted a story. Physicists use quantum field theory to calculate essential formulas called scattering amplitudes, some of the most basic calculable features of reality. When particles collide, amplitudes indicate how the particles might morph or scatter. Particle interactions make the world, so the way physicists test their description of the world is to compare their scattering amplitude formulas to the outcomes of particle collisions in experiments such as Europe’s Large Hadron Collider.

Normally, to calculate amplitudes, physicists systematically account for all possible ways colliding ripples might reverberate through the quantum fields that pervade the universe before they produce stable particles that fly away from the crash site. Strangely, calculations involving hundreds of pages of algebra often yield, in the end, a one-line formula. Amplitudeologists argue that the field picture is obscuring simpler mathematical patterns. Arkani-Hamed, a leader of the effort, called quantum fields “a convenient fiction.” “In physics very often we slip into a mistake of reifying a formalism,” he said. “We start slipping into the language of saying that it’s the quantum fields that are real, and particles are excitations. We talk about virtual particles, all this stuff — but it doesn’t go click, click, click in anyone’s detector.”

Amplitudeologists believe that a mathematically simpler and truer picture of particle interactions exists.

In some cases, they’re finding that Wigner’s group theory perspective on particles can be extended to describe interactions as well, without any of the usual rigmarole of quantum fields.

Lance Dixon, a prominent amplitudeologist at the SLAC National Accelerator Laboratory, explained that researchers have used the Poincaré rotations studied by Wigner to directly deduce the “three-point amplitude” — a formula describing one particle splitting into two. They’ve also shown that three-point amplitudes serve as the building blocks of four- and higher-point amplitudes involving more and more particles. These dynamical interactions seemingly build from the ground up out of basic symmetries.

“The coolest thing,” according to Dixon, is that scattering amplitudes involving gravitons, the putative carriers of gravity, turn out to be the square of amplitudes involving gluons, the particles that glue together quarks. We associate gravity with the fabric of space-time itself, while gluons move around in space-time. Yet gravitons and gluons seemingly spring from the same symmetries. “That’s very weird and of course not really understood in quantitative detail because the pictures are so different,” Dixon said.

Arkani-Hamed and his collaborators, meanwhile, have found entirely new mathematical apparatuses that jump straight to the answer, such as the amplituhedron — a geometric object that encodes particle scattering amplitudes in its volume. Gone is the picture of particles colliding in space-time and setting off chain reactions of cause and effect. “We’re trying to find these objects out there in the Platonic world of ideas that give us [causal] properties automatically,” Arkani-Hamed said. “Then we can say, ‘Aha, now I can see why this picture can be interpreted as evolution.’”

It-from-qubit and amplitudeology approach the big questions so differently that it’s hard to say whether the two pictures complement or contradict each other. “At the end of the day, quantum gravity has some mathematical structure, and we’re all chipping away at it,” Engelhardt said. She added that a quantum theory of gravity and space-time will ultimately be needed to answer the question, “What are the fundamental building blocks of the universe on its most fundamental scales?” — a more sophisticated phrasing of my question, “What is a particle?”

In the meantime, Engelhardt said, “‘We don’t know’ is the short answer.”


1: “At the moment that I detect it, it collapses the wave and becomes a particle. … [The particle is] the collapsed wave function.”
—Dimitri Nanopoulos

2: “What is a particle from a physicist’s point of view? It’s a quantum excitation of a field. We write particle physics in a math called quantum field theory. In that, there are a bunch of different fields; each field has different properties and excitations, and they are different depending on the properties, and those excitations we can think of as a particle.”
—Helen Quinn

3: “Particles are at a very minimum described by irreducible representations of the Poincaré group.”
—Sheldon Glashow

“Ever since the fundamental paper of Wigner on the irreducible representations of the Poincaré group, it has been a (perhaps implicit) definition in physics that an elementary particle ‘is’ an irreducible representation of the group, G, of ‘symmetries of nature.’”
—Yuval Ne’eman and Shlomo Sternberg

4: “Particles have so many layers.”
—Xiao-Gang Wen

5: “What we think of as elementary particles, instead they might be vibrating strings.”
—Mary Gaillard

6: “Every particle is a quantized wave. The wave is a deformation of the qubit ocean.”
—Xiao-Gang Wen

7: “Particles are what we measure in detectors. … We start slipping into the language of saying that it’s the quantum fields that are real, and particles are excitations. We talk about virtual particles, all this stuff — but it doesn’t go click, click, click in anyone’s detector.”
—Nima Arkani-Hamed


Editor’s note: Mark Van Raamsdonk receives funding from the Simons Foundation, which also funds this editorially independent magazine. Simons Foundation funding decisions have no influence on our coverage. More details are available here.

Source: https://www.quantamagazine.org/what-is-a-particle-20201112/

Physicists Pin Down Nuclear Reaction From Moments After the Big Bang

In a secluded laboratory buried under a mountain in Italy, physicists have re-created a nuclear reaction that happened between two and three minutes after the Big Bang.

Their measurement of the reaction rate, published today in Nature, nails down the most uncertain factor in a sequence of steps known as Big Bang nucleosynthesis that forged the universe’s first atomic nuclei.

Researchers are “over the moon” about the result, according to Ryan Cooke, an astrophysicist at Durham University in the United Kingdom who wasn’t involved in the work. “There’ll be a lot of people who are interested from particle physics, nuclear physics, cosmology and astronomy,” he said.

The reaction involves deuterium, a form of hydrogen consisting of one proton and one neutron that fused within the cosmos’s first three minutes. Most of the deuterium quickly fused into heavier, stabler elements like helium and lithium. But some survived to the present day. “You have a few grams of deuterium in your body, which comes all the way from the Big Bang,” said Brian Fields, an astrophysicist at the University of Illinois, Urbana-Champaign.

The precise amount of deuterium that remains reveals key details about those first minutes, including the density of protons and neutrons and how quickly they became separated by cosmic expansion. Deuterium is “a special super-witness of that epoch,” said Carlo Gustavino, a nuclear astrophysicist at Italy’s National Institute for Nuclear Physics.

But physicists can only deduce those pieces of information if they know the rate at which deuterium fuses with a proton to form the isotope helium-3. It’s this rate that the new measurement by the Laboratory for Underground Nuclear Astrophysics (LUNA) collaboration has pinned down.
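
Written as a nuclear reaction (standard notation, added here for clarity; the article describes the process only in words), the rate LUNA pinned down is that of a deuteron capturing a proton to form helium-3, with the excess energy carried off by a photon:

```latex
% Radiative capture of a proton by deuterium, the reaction measured by LUNA.
\mathrm{D} + \mathrm{p} \;\rightarrow\; {}^{3}\mathrm{He} + \gamma
```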

The Earliest Probe of the Universe

Deuterium’s creation was the first step in Big Bang nucleosynthesis, a sequence of nuclear reactions that occurred when the cosmos was a super hot but rapidly cooling soup of protons and neutrons.

Starting in the 1940s, nuclear physicists developed a series of interlocking equations describing how various isotopes of hydrogen, helium and lithium assembled as nuclei merged and absorbed protons and neutrons. (Heavier elements were forged much later inside stars.) Researchers have since tested most aspects of the equations by replicating the primordial nuclear reactions in laboratories.

In doing so, they made radical discoveries. The calculations offered some of the first evidence of dark matter in the 1970s. Big Bang nucleosynthesis also enabled physicists to predict the number of different types of neutrinos, which helped drive cosmic expansion.

But for almost a decade now, uncertainty about deuterium’s likelihood of absorbing a proton and turning into helium-3 has fogged up the picture of the universe’s first minutes. Most importantly, the uncertainty has prevented physicists from comparing that picture to what the cosmos looked like 380,000 years later, when the universe cooled enough for electrons to begin orbiting atomic nuclei. This process released radiation called the cosmic microwave background that provides a snapshot of the universe at the time.

Cosmologists want to check whether the density of the cosmos changed from one period to the other as expected based on their models of cosmic evolution. If the two pictures disagree, “that would be a really, really important thing to understand,” Cooke said. Solutions to stubbornly persistent cosmological problems — like the nature of dark matter — could be found in this gap, as could the first signs of exotic new particles. “A lot can happen between a minute or two after the Big Bang and several hundred thousand years after the Big Bang,” Cooke said.

But the all-important deuterium reaction rate that would allow researchers to make these kinds of comparisons is very difficult to measure. “You’re simulating the Big Bang in the lab in a controlled way,” said Fields.

Physicists last attempted a measurement in 1997. Since then, observations of the cosmic microwave background have become increasingly precise, putting pressure on physicists who study Big Bang nucleosynthesis to match that precision — and so allow a comparison of the two epochs.

In 2014, Cooke and co-authors precisely measured the abundance of deuterium in the universe through observations of faraway gas clouds. But to translate this abundance into a precise prediction of the primordial matter density, they needed a much better measure of the deuterium reaction rate.

Confounding the situation further, a purely theoretical estimate for the rate, published in 2016, disagreed with the 1997 laboratory measurement.

“It was a very confused scenario,” said Gustavino, who is a member of the LUNA collaboration. “At this point I became pushy with the collaboration … because LUNA could measure this reaction exactly.”

A Rare Combination

Part of the challenge in measuring how readily deuterium fuses with a proton is that, under laboratory conditions, the reaction doesn’t happen very often. Every second, the LUNA experiment fires 100 trillion protons at a target of deuterium. Only a few a day will fuse.

Adding to the difficulty, cosmic rays that constantly rain down on Earth’s surface can mimic the signal produced by deuterium reactions. “For this reason, we’re in an underground laboratory where, thanks to the rock cover, we can benefit from cosmic silence,” said Francesca Cavanna, who led LUNA’s data collection and analysis along with Sandra Zavatarelli.

Over three years, the scientists took turns spending weeklong shifts in a lab deep inside Italy’s Gran Sasso mountain. “It’s exciting because you really feel you are inside the science,” Cavanna said. As they gradually collected data, pressure mounted from the wider physics community. “There was a lot of anticipation; there was a lot of expectation,” said Marialuisa Aliotta, a team member.

As it turns out, the team’s newly published measurement may come as a disappointment to cosmologists looking for cracks in their model of how the universe works.

Small Steps

The measured rate — which says how quickly deuterium tends to fuse with a proton to form helium-3 across the range of temperatures found in the epoch of primordial nucleosynthesis — landed between the 2016 theoretical prediction and the 1997 measurement. More importantly, when physicists feed this rate into the equations of Big Bang nucleosynthesis, they predict a primordial matter density and a cosmic expansion rate that closely square with observations of the cosmic microwave background 380,000 years later.

“It essentially tells us that the standard model of cosmology is, so far, quite right,” said Aliotta.

That in itself squeezes the gap that next-generation models of the cosmos must fit into. Experts say some theories of dark matter could even be ruled out by the results.

That’s less exciting than evidence in favor of exotic new cosmic ingredients or effects. But in this era of precision astronomy, Aliotta said, scientists proceed “by making small steps.” Fields agreed: “We are constantly trying to do better on the prediction side, the measurement side and the observation side.”

On the horizon is the next generation of cosmic microwave background measurements. Meanwhile, with deuterium’s behavior now better understood, uncertainties in other primordial nuclear reactions and elemental abundances become more pressing.

A longstanding “fly in the Big Bang nucleosynthesis ointment,” according to Fields, is that the matter density calculated from deuterium and the cosmic microwave background predicts that there should be three times more lithium in the universe than we actually observe.

“There are still lots of unknowns,” said Aliotta. “And what the future will bring is going to be very interesting.”

Source: https://www.quantamagazine.org/physicists-pin-down-nuclear-reaction-from-moments-after-the-big-bang-20201111/

Computer Scientists Achieve ‘Crown Jewel’ of Cryptography

In 2018, Aayush Jain, a graduate student at the University of California, Los Angeles, traveled to Japan to give a talk about a powerful cryptographic tool he and his colleagues were developing. As he detailed the team’s approach to indistinguishability obfuscation (iO for short), one audience member raised his hand in bewilderment.

“But I thought iO doesn’t exist?” he said.

At the time, such skepticism was widespread. Indistinguishability obfuscation, if it could be built, would be able to hide not just collections of data but the inner workings of a computer program itself, creating a sort of cryptographic master tool from which nearly every other cryptographic protocol could be built. It is “one cryptographic primitive to rule them all,” said Boaz Barak of Harvard University. But to many computer scientists, this very power made iO seem too good to be true.

Computer scientists set forth candidate versions of iO starting in 2013. But the intense excitement these constructions generated gradually fizzled out, as other researchers figured out how to break their security. As the attacks piled up, “you could see a lot of negative vibes,” said Yuval Ishai of the Technion in Haifa, Israel. Researchers wondered, he said, “Who will win: the makers or the breakers?”

“There were the people who were the zealots, and they believed in [iO] and kept working on it,” said Shafi Goldwasser, director of the Simons Institute for the Theory of Computing at the University of California, Berkeley. But as the years went by, she said, “there was less and less of those people.”

Now, Jain — together with Huijia Lin of the University of Washington and Amit Sahai, Jain’s adviser at UCLA — has planted a flag for the makers. In a paper posted online on August 18, the three researchers show for the first time how to build indistinguishability obfuscation using only “standard” security assumptions.

All cryptographic protocols rest on assumptions — some, such as the famous RSA algorithm, depend on the widely held belief that standard computers will never be able to quickly factor the product of two large prime numbers. A cryptographic protocol is only as secure as its assumptions, and previous attempts at iO were built on untested and ultimately shaky foundations. The new protocol, by contrast, depends on security assumptions that have been widely used and studied in the past.

“Barring a really surprising development, these assumptions will stand,” Ishai said.

While the protocol is far from ready to be deployed in real-world applications, from a theoretical standpoint it provides an instant way to build an array of cryptographic tools that were previously out of reach. For instance, it enables the creation of “deniable” encryption, in which you can plausibly convince an attacker that you sent an entirely different message from the one you really sent, and “functional” encryption, in which you can give chosen users different levels of access to perform computations using your data.

The new result should definitively silence the iO skeptics, Ishai said. “Now there will no longer be any doubts about the existence of indistinguishability obfuscation,” he said. “It seems like a happy end.”

The Crown Jewel

For decades, computer scientists wondered if there is any secure, all-encompassing way to obfuscate computer programs, allowing people to use them without figuring out their internal secrets. Program obfuscation would enable a host of useful applications: For instance, you could use an obfuscated program to delegate particular tasks within your bank or email accounts to other individuals, without worrying that someone could use the program in a way it wasn’t intended for or read off your account passwords (unless the program was designed to output them).

But so far, all attempts to build practical obfuscators have failed. “The ones that have come out in real life are ludicrously broken, … typically within hours of release into the wild,” Sahai said. At best, they offer attackers a speed bump, he said.

In 2001, bad news came on the theoretical front too: The strongest form of obfuscation is impossible. Called black box obfuscation, it demands that attackers should be able to learn absolutely nothing about the program except what they can observe by using the program and seeing what it outputs. Some programs, Barak, Sahai and five other researchers showed, reveal their secrets so determinedly that they are impossible to obfuscate fully.

These programs, however, were specially concocted to defy obfuscation and bear little resemblance to real-world programs. So computer scientists hoped there might be some other kind of obfuscation that was weak enough to be feasible but strong enough to hide the kinds of secrets people actually care about. The same researchers who showed that black box obfuscation is impossible proposed one possible alternative in their paper: indistinguishability obfuscation.

On the face of it, iO doesn’t seem like an especially useful concept. Instead of requiring that a program’s secrets be hidden, it simply requires that the program be obfuscated enough that if you have two different programs that perform the same task, you can’t distinguish which obfuscated version came from which original version.

But iO is stronger than it sounds. For example, suppose you have a program that carries out some task related to your bank account, but the program contains your unencrypted password, making you vulnerable to anyone who gets hold of the program. Then — as long as there is some program out there that could perform the same task while keeping your password hidden — an indistinguishability obfuscator will be strong enough to successfully mask the password. After all, if it didn’t, then if you put both programs through the obfuscator, you’d be able to tell which obfuscated version came from your original program.

Over the years, computer scientists have shown that you can use iO as the basis for almost every cryptographic protocol you could imagine (except for black box obfuscation). That includes both classic cryptographic tasks like public key encryption (which is used in online transactions) and dazzling newcomers like fully homomorphic encryption, in which a cloud computer can compute on encrypted data without learning anything about it. And it includes cryptographic protocols no one knew how to build, like deniable or functional encryption.

“It really is kind of the crown jewel” of cryptographic protocols, said Rafael Pass of Cornell University. “Once you achieve this, we can get essentially everything.”

In 2013, Sahai and five co-authors proposed an iO protocol that splits up a program into something like jigsaw puzzle pieces, then uses cryptographic objects called multilinear maps to garble the individual pieces. If the pieces are put together correctly, the garbling cancels out and the program functions as intended, but each individual piece looks meaningless. The result was hailed as a breakthrough and prompted dozens of follow-up papers. But within a few years, other researchers showed that the multilinear maps used in the garbling process were not secure. Other iO candidates came along and were broken in their turn.

“There was some worry that maybe this is just a mirage, maybe iO is simply impossible to get,” Barak said. People started to feel, he said, that “maybe this whole enterprise is doomed.”

Hiding Less to Hide More

In 2016, Lin started exploring whether it might be possible to get around the weaknesses of multilinear maps by simply demanding less of them. Multilinear maps are essentially just secretive ways of computing with polynomials — mathematical expressions made up of sums and products of numbers and variables, like 3xy + 2yz². These maps, Jain said, entail something akin to a polynomial calculating machine connected to a system of secret lockers containing the values of the variables. A user who drops in a polynomial that the machine accepts gets to look inside one final locker to find out whether the hidden values make the polynomial evaluate to 0.

For the scheme to be secure, the user shouldn’t be able to figure out anything about the contents of the other lockers or the numbers that were generated along the way. “We would like that to be true,” Sahai said. But in all the candidate multilinear maps people could come up with, the process of opening the final locker revealed information about the calculation that was supposed to stay hidden.

Since the proposed multilinear map machines all had security weaknesses, Lin wondered if there was a way to build iO using machines that don’t have to compute as many different kinds of polynomials (and therefore might be easier to build securely). Four years ago, she figured out how to build iO using only multilinear maps that compute polynomials whose “degree” is 30 or less (meaning that every term is a product of at most 30 variables, counting repeats). Over the next couple of years, she, Sahai and other researchers gradually figured out how to bring the degree down even lower, until they were able to show how to build iO using just degree-3 multilinear maps.
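
Spelling out that degree bookkeeping for the example polynomial from a few paragraphs back (a small worked note added for clarity): the degree of a term counts its variable factors, repeats included, and the degree of the polynomial is the largest of those counts.

```latex
% Term-by-term degree count for the example polynomial 3xy + 2yz^2:
\deg(3xy) = 2, \qquad \deg(2yz^{2}) = 1 + 2 = 3,
\qquad\text{so}\qquad \deg\bigl(3xy + 2yz^{2}\bigr) = 3.
```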

On paper, it looked like a vast improvement. There was just one problem: From a security standpoint, “degree 3 was actually as broken” as the machines that can handle polynomials of every degree, Jain said.

The only multilinear maps researchers knew how to build securely were those that computed polynomials of degree 2 or less. Lin joined forces with Jain and Sahai to try to figure out how to construct iO from degree-2 multilinear maps. But “we were stuck for a very, very long time,” Lin said.

“It was kind of a gloomy time,” Sahai recalled. “There’s a graveyard filled with all the ideas that didn’t work.”

Eventually, though — together with Prabhanjan Ananth of the University of California, Santa Barbara and Christian Matt of the blockchain project Concordium — they came up with an idea for a sort of compromise: Since iO seemed to need degree-3 maps, but computer scientists only had secure constructions for degree-2 maps, what if there was something in between — a sort of degree-2.5 map?

The researchers envisioned a system in which some of the lockers have clear windows, so the user can see the values contained within. This frees the machine from having to protect too much hidden information. To strike a balance between the power of higher-degree multilinear maps and the security of degree-2 maps, the machine is allowed to compute with polynomials of degree higher than 2, but there’s a restriction: The polynomial must be degree 2 on the hidden variables. “We’re trying to not hide as much” as in general multilinear maps, Lin said. The researchers were able to show that these hybrid locker systems can be constructed securely.

But to get from these less powerful multilinear maps to iO, the team needed one last ingredient: a new kind of pseudo-randomness generator, something that expands a string of random bits into a longer string that still looks random enough to fool computers. That’s what Jain, Lin and Sahai have figured out how to do in their new paper. “There was a wonderful last month or so where everything came together in a flurry of phone calls,” Sahai said.
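
To make the role of a pseudo-randomness generator concrete, the sketch below (my own toy illustration in Python; it is emphatically not the structured, low-degree generator constructed in the paper) shows only the interface such an object provides: a short, truly random seed goes in, and a much longer bit string comes out that should be computationally hard to tell apart from uniform random bits.

```python
import hashlib

def toy_prg(seed: bytes, out_len_bits: int) -> str:
    """Toy pseudo-randomness generator: stretch a short seed into a longer
    bit string by hashing the seed together with a running counter.

    This illustrates only the interface of a PRG (short random input,
    long pseudorandom-looking output); it is not the special-purpose
    generator used in the iO construction described above.
    """
    bits = []
    counter = 0
    while len(bits) < out_len_bits:
        block = hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        for byte in block:
            bits.extend(format(byte, "08b"))  # 8 bits per output byte
        counter += 1
    return "".join(bits[:out_len_bits])

# Expand a 16-byte (128-bit) seed into 1,024 output bits.
expanded = toy_prg(bytes(range(16)), 1024)
print(len(expanded), expanded[:64])
```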

The result is an iO protocol that finally avoids the security weaknesses of multilinear maps. “Their work looks absolutely beautiful,” Pass said.

The scheme’s security rests on four mathematical assumptions that have been widely used in other cryptographic contexts. And even the assumption that has been studied the least, called the “learning parity with noise” assumption, is related to a problem that has been studied since the 1950s.

There is likely only one thing that could break the new scheme: a quantum computer, if a full-power one is ever built. One of the four assumptions is vulnerable to quantum attacks, but over the past few months a separate line of work has emerged in three separate papers by Pass and other researchers offering a different potential route to iO that might be secure even from quantum attacks. These versions of iO rest on less established security assumptions than the ones Jain, Lin and Sahai used, several researchers said. But it is possible, Barak said, that the two approaches could be combined in the coming years to create a version of iO that rests on standard security assumptions and also resists quantum attacks.

Jain, Lin and Sahai’s construction will likely entice new researchers into the field to work on making the scheme more practical and to develop new approaches, Ishai predicted. “Once you know that something is possible in principle, it makes it psychologically much easier to work in the area,” he said.

Computer scientists still have much work to do before the protocol (or some variation on it) can be used in real-world applications. But that is par for the course, researchers said. “There’s a lot of notions in cryptography that, when they first came out, people were saying, ‘This is just pure theory, [it] has no relevance to practice,’” Pass said. “Then 10 or 20 years later, Google is implementing these things.”

The road from a theoretical breakthrough to a practical protocol can be a long one, Barak said. “But you could imagine,” he said, “that maybe 50 years from now the crypto textbooks will basically say, ‘OK, here is a very simple construction of iO, and from that we’ll now derive all of the rest of crypto.’”


Source: https://www.quantamagazine.org/computer-scientists-achieve-crown-jewel-of-cryptography-20201110/

Inside the Secret Math Society Known Simply as Nicolas Bourbaki

Antoine Chambert-Loir’s initiation into one of math’s oldest secret societies began with a phone call. “They told me Bourbaki would like me to come and see if I’d work with them,” he said.

Chambert-Loir accepted, and for a week in September 2001 he spent seven hours a day reading math texts out loud and discussing them with the members of the group, whose identities are unknown to the rest of the world.

He was never officially asked to join, but on the last day he was given a long-term task — to finish a manuscript the group had been working on since 1975. When Chambert-Loir later received a report on the meeting he saw that he was listed as a “membrifié,” indicating he was part of the group. Ever since, he’s helped advance an almost Sisyphean tradition of math writing that predates World War II.

The group is known as “Nicolas Bourbaki” and is usually referred to as just Bourbaki. The name is a collective pseudonym borrowed from a real-life 19th-century French general who never had anything to do with mathematics. It’s unclear why they chose the name, though it may have originated in a prank played by the founding mathematicians as undergraduates at the École Normale Supérieure (ENS) in Paris.

“There was some custom to play pranks on first-years, and one of those pranks was to pretend that some General Bourbaki would arrive and visit the school and maybe give a totally obscure talk about mathematics,” said Chambert-Loir, a mathematician at the University of Paris who has acted as a spokesperson for the group and is its one publicly identified member.

Bourbaki began in 1934, the initiative of a small number of recent ENS alumni. Many of them were among the best mathematicians of their generation. But as they surveyed their field, they saw a problem. The exact nature of that problem is also the subject of myth.

In one telling, Bourbaki was a response to the loss of a generation of mathematicians to World War I, after which the group’s founders wanted to find a way to preserve what math knowledge remained in Europe.

“There is a story that young French mathematicians were not seen as a government priority during [the] First World War and many were sent to war and died there,” said Sébastien Gouëzel of the University of Rennes, who is not publicly identified with the group but, like many mathematicians, is familiar with its activities.

In a more prosaic but probably also more likely rendering, the original Bourbaki members were simply dissatisfied with the field’s textbooks and wanted to create better ones. “I think at the beginning it was just for that very concrete matter,” Chambert-Loir said.

Whatever their motivation, the founders of Bourbaki began to write. Yet instead of writing textbooks, they ended up creating something completely novel: free-standing books that explained advanced mathematics without reference to any outside sources.

The first Bourbaki text was meant to be about differential geometry, which reflected the tastes of some of the group’s early members, luminaries like Henri Cartan and André Weil. But the project quickly expanded, since it’s hard to explain one mathematical idea without involving many others.

“They realized that if they wanted to do this cleanly, they needed [ideas from other areas], and Bourbaki grew and grew into something huge,” Gouëzel said.

The most distinctive feature of Bourbaki was the writing style: rigorous, formal and stripped to the logical studs. The books spelled out mathematical theorems from the ground up without skipping any steps — exhibiting an unusual degree of thoroughness among mathematicians.

“In Bourbaki, essentially, there are no gaps,” Gouëzel said. “They are super precise.”

But that precision comes at a cost: Bourbaki books can be hard to read. They don’t offer a contextualizing narrative that explains where concepts come from, instead letting the ideas speak for themselves.

“Essentially, you give no comment about what you do or why you do it,” Chambert-Loir said. “You state stuff and prove it, and that’s it.”

Bourbaki joined its distinctive writing style to a distinctive writing process. Once a member produces a draft, the group gathers in person, reads it aloud and suggests notes for revision. They repeat these steps until there is unanimous agreement that the text is ready for publication. It’s a long process that can take a decade or more to complete.

This focus on collaboration is also where the group’s insistence on anonymity comes from. They keep membership secret to reinforce the notion that the books are a pure expression of mathematics as it is, not an individual’s take on the topic. It’s also an ethic that can seem out of step with aspects of modern math culture.

“It’s sort of hard to imagine a group of young academics right now, people without permanent lifelong positions, devoting a huge amount of time to something they’ll never get credit for,” said Lillian Pierce of Duke University. “This group took this on in a sort of selfless way.”

Bourbaki quickly had an impact on mathematics. Some of the first books, published in the 1940s and ’50s, invented vocabulary that is now standard — terms like “injective,” “surjective” and “bijective,” which are used to describe properties of a map between two sets.
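For readers who want a refresher on what those terms mean, the standard definitions are short; the following lines, in LaTeX notation, are a sketch added for reference and are not drawn from the article itself. For a map $f\colon A \to B$:

% Standard definitions of the Bourbaki-coined terms, for a map f from a set A to a set B
\begin{itemize}
  \item $f$ is \emph{injective} if $f(x) = f(y)$ implies $x = y$ for all $x, y \in A$ (no two elements of $A$ share an image).
  \item $f$ is \emph{surjective} if every $b \in B$ equals $f(a)$ for some $a \in A$ (every element of $B$ is hit).
  \item $f$ is \emph{bijective} if it is both injective and surjective, so $f$ pairs the elements of $A$ and $B$ one to one.
\end{itemize}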

Those early books marked the first of two main periods in which Bourbaki was especially influential. The second came in the 1970s, when the group published a series of books on Lie groups and Lie algebras that is “unanimously considered a masterpiece,” Chambert-Loir said.

Today, the influence of the group’s books has waned. It’s best known instead for the Bourbaki Seminars, a series of high-profile lectures on the most important recent results in math, held in Paris. When Bourbaki invited Pierce to give one in June 2017, she recognized that the talk would take a lot of time to prepare, but she also knew that due to the seminar’s status in the field, “it’s an invitation you have to accept.”

Even while organizing (and attending) the public lectures, members of Bourbaki don’t disclose their identities. Pierce recalls that during her time in Paris she went out to lunch “with a number of people who it seemed fair to assume were part of Bourbaki, but in the spirit of things I didn’t try to hear their last names.”

According to Pierce, the anonymity is maintained only in a “spirit of fun” these days. “There is no rigor to the secrecy,” she said.

Though its seminars are now more influential than its books, Bourbaki — which has about 10 members currently — is still producing texts according to its founding principles. And Chambert-Loir, 49, is nearing the end of his time with the group, since custom has it that members step down when they turn 50.

Even as he prepares to leave, the project he was handed at the end of his first week remains unfinished. “For 15 years I patiently typed it into LaTeX, made corrections, then we read everything aloud year after year,” he said.

It could easily be half a century from the time the work began to when it’s completed. That’s a long time by modern publishing standards, where papers land online even in draft form. But then again, maybe it isn’t so long when the product is meant to stand forever.

Correction: November 9, 2020

Lillian Pierce gave her Bourbaki Seminar talk in June 2017, not July 2017. The article has been revised accordingly.

Source: https://www.quantamagazine.org/inside-the-secret-math-society-known-as-nicolas-bourbaki-20201109/

Continue Reading