A quantum walk down memory lane

In elementary and middle school, I felt an affinity for the class three years above mine. Five of my peers had siblings in that year. I carpooled with a student in that class, which partnered with mine in holiday activities and Grandparents’ Day revues. Two students in that class stood out. They won academic-achievement awards, represented our school in science fairs and speech competitions, and enrolled in rigorous high-school programs.

Those students came to mind as I grew to know David Limmer. David is an assistant professor of chemistry at the University of California, Berkeley. He studies statistical mechanics far from equilibrium, using information theory. Though a theorist ardent about mathematics, he partners with experimentalists. He can pass as a physicist and keeps an eye on topics as far afield as black holes. According to his faculty page, I discovered while writing this article, he’s even three years older than I. 

I met David in the final year of my PhD. I was looking ahead to postdocking, as his postdoc fellowship was fading into memory. The more we talked, the more I thought, I’d like to be like him.

I had the good fortune to collaborate with David on a paper published by Physical Review A this spring (as an Editors’ Suggestion!). The project has featured in Quantum Frontiers as the inspiration for a rewriting of “I’m a little teapot.” 

We studied a molecule prevalent across nature and technologies. Such molecules feature in your eyes, solar-fuel-storage devices, and more. The molecule has two clumps of atoms. One clump may rotate relative to the other if the molecule absorbs light. The rotation switches the molecule from a “closed” configuration to an “open” configuration.

These molecular switches are small, quantum, and far from equilibrium; so modeling them is difficult. Making assumptions offers traction, but many of the assumptions disagreed with David. He wanted general, thermodynamic-style bounds on the probability that one of these molecular switches would switch. Then, he ran into me.

I traffic in mathematical models, developed in quantum information theory, called resource theories. We use resource theories to calculate which states can transform into which in thermodynamics, as a dime can transform into ten pennies at a bank. David and I modeled his molecule in a resource theory, then bounded the molecule’s probability of switching from “closed” to “open.” I accidentally composed a theme song for the molecule; you can sing along with this post.

That post didn’t mention what David and I discovered about quantum clocks. But what better backdrop for a mental trip to elementary school or to three years into the future?

I’ve blogged about autonomous quantum clocks (and ancient Assyria) before. Autonomous quantum clocks differ from quantum clocks of another type—the most precise clocks in the world. Scientists operate the latter clocks with lasers; autonomous quantum clocks need no operators. Autonomy benefits you if you want a machine, such as a computer or a drone, to operate independently. An autonomous clock in the machine ensures that, say, the computer applies the right logical gate at the right time.

What’s an autonomous quantum clock? First, what’s a clock? A clock has a degree of freedom (e.g., a pair of hands) that represents the time and that moves steadily. When the clock’s hands point to 12 PM, you’re preparing lunch; when the clock’s hands point to 6 PM, you’re reading Quantum Frontiers. An autonomous quantum clock has a degree of freedom that represents the time fairly accurately and moves fairly steadily. (The quantum uncertainty principle prevents a perfect quantum clock from existing.)

Suppose that the autonomous quantum clock constitutes one part of a machine, such as a quantum computer, that the clock guides. When the clock is in one quantum state, the rest of the machine undergoes one operation, such as one quantum logical gate. (Experts: The rest of the machine evolves under one Hamiltonian.) When the clock is in another state, the rest of the machine undergoes another operation (evolves under another Hamiltonian).
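To make that concrete, here is a minimal numerical sketch (a toy example of my own, not the model in our paper): a two-state clock register is tensored with a single system qubit, and the clock’s state selects which of two assumed Hamiltonians, H0 or H1, drives the system.

```python
import numpy as np
from scipy.linalg import expm

# Toy illustration: a two-state "clock" register selects which Hamiltonian
# drives the rest of the machine (here, a single qubit).
sx = np.array([[0, 1], [1, 0]], dtype=complex)   # Pauli-X
sz = np.array([[1, 0], [0, -1]], dtype=complex)  # Pauli-Z
P0 = np.diag([1, 0]).astype(complex)             # projector onto clock state |0>
P1 = np.diag([0, 1]).astype(complex)             # projector onto clock state |1>

H0 = sx   # operation applied while the clock reads |0>
H1 = sz   # operation applied while the clock reads |1>

# Joint Hamiltonian: the clock's state conditions the system's evolution.
H = np.kron(P0, H0) + np.kron(P1, H1)

# Start with the clock in |0> and the system qubit in |0>, evolve for time t.
psi = np.kron([1, 0], [1, 0]).astype(complex)
t = 1.0
psi_t = expm(-1j * H * t) @ psi
print(np.round(psi_t, 3))   # only the |0>-clock block of the state evolves, under H0
```

Because the clock appears only through projectors, no outside operator has to intervene: whichever state the clock occupies dictates the operation applied, which is the sense in which the clock is autonomous.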

Physicists have been modeling quantum clocks using the resource theory with which David and I modeled our molecule. The math with which we represented our molecule, I realized, coincided with the math that represents an autonomous quantum clock.

Think of the molecular switch as a machine that operates (mostly) independently and that contains an autonomous quantum clock. The rotating clump of atoms constitutes the clock hand. As a hand rotates down a clock face, so do the nuclei rotate downward. The hand effectively points to 12 PM when the switch occupies its “closed” position. The hand effectively points to 6 PM when the switch occupies its “open” position.

The nuclei account for most of the molecule’s weight; the electrons account for little. The electrons flit about a landscape shaped by the atomic clumps’ positions, and that landscape governs the electrons’ behavior. So the electrons form the rest of the quantum machine controlled by the nuclear clock.

Experimentalists can create and manipulate these molecular switches easily. For instance, experimentalists can set the atomic clump moving—can “wind up” the clock—with ultrafast lasers. In contrast, the only other autonomous quantum clocks that I’d read about live in theory land. Can these molecules bridge theory to experiment? Reach out if you have ideas!

And check out David’s theory lab on Berkeley’s website and on Twitter. We all need older siblings to look up to.

Source: https://quantumfrontiers.com/2020/07/26/a-quantum-walk-down-memory-lane/

The 10 greatest predictions in physics

Taken from the January 2021 issue of Physics World. Members of the Institute of Physics can enjoy the full issue via the Physics World app.

Over the centuries there have been many theoretical physics predictions that have rocked our understanding of how the world works. David Appell highlights what he thinks are the top 10 of all time

Photos of Isaac Newton, Siméon-Denis Poisson, James Clerk Maxwell, Albert Einstein, Maria Goeppert Mayer, Julian Schwinger, Fred Hoyle, Chen-Ning Yang, Tsung-Dao Lee, Brian Josephson, Vera Rubin and W Kent Ford Jr.

Theoretical physicists stare at blackboards, do calculations and make predictions. Experimental physicists build equipment, gather observations and analyse data sets. (At least, that’s how it goes at the best of times.)

The two groups are reliant on each other – experimentalists may be trying to prove a theory is right (or wrong), or perhaps theorists are trying to explain experimental observations. As the British theoretical physicist Arthur Eddington once wryly put it, “Experimentalists will be surprised to learn that we will not accept any evidence that is not confirmed by theory.”

But often, everyone is somewhat lost in a world of big ideas that cry out for clarity. It is only every once in a while that someone from one of these groups produces a piece of work that cuts through the murkiness, delivering a crystalline result that instantly advances their field, and sometimes even creates it.

In this article I have chosen what I think are the 10 greatest theoretical physics predictions of all time, presented in chronological order. Of course, any such list is somewhat arbitrary and depends on the author’s predilections, opinions and knowledge. Any reader will no doubt disagree with some, maybe all. We’d love to hear your own thoughts, comments and opinions, so get in touch at pwld@ioppublishing.org.

Kepler’s three laws, by Isaac Newton (by 1687)

British physicist and mathematician Isaac Newton was an early proponent of prediction through mathematical calculation. By creating his “fluxions” in 1665 – what we today call calculus (Gottfried Wilhelm Leibniz did so too independently at about the same time) – he made it possible to predict the motion of objects through space and time.

To do so, Newton took ideas from Galileo Galilei about force and acceleration, from Johannes Kepler and his three laws of planetary motion, and from Robert Hooke about how a planet’s tangential velocity compares to the radial force it experiences, with the gravitational force an inverse square law directed towards the Sun. Newton united all these notions and added ideas of his own to devise his three laws of motion and his universal law of gravity.

These four laws brought order to the study of the physical universe and, just as importantly, the mathematical tools to model it. In particular, Newton was able to derive Kepler’s three laws – which famously indicated that planets move in ellipses not circles – from pure mathematics, at the same time using them as a test bed for his various assumptions. For the first time straight mathematics allowed calculations about, and predictions of, the motions of celestial objects, the tides, the precession of the equinoxes and more, while making it at last clear that terrestrial and celestial phenomena were ruled by the same physical laws.

The Arago spot, by Siméon-Denis Poisson (1818)

The French mathematician and physicist Siméon-Denis Poisson once made a prediction he was convinced was wrong. Instead, his prediction about the prediction was wrong, and he had accidentally helped demonstrate that light was a wave.

In 1818 Poisson was among a number of scientists who proposed that the French Academy of Science’s yearly competition should be about the properties of light, expecting the entries to support Newton’s corpuscular theory – that light was made up of “corpuscles” (little particles). However, Augustin-Jean Fresnel – a French engineer and physicist – submitted an idea that built upon Christiaan Huygens’ hypothesis that light was a wave, with each point on its wavefront the source of secondary wavelets. Fresnel proposed that all these wavelets mutually interfered with one another.

Poisson’s chagrin The Arago spot can be seen at the centre of an interference pattern created by light from a point source diffracting around a circular object. The small bright spot demonstrates that light behaves as a wave. (CC BY SA Thomas Reisinger)

Poisson studied Fresnel’s theory in detail. He realized that Fresnel’s diffraction integrals implied that, at least for a point light source illuminating a disc or sphere, a bright spot would lie on the axis behind the disc. Poisson thought this was absurd as corpuscular theory clearly predicted there would be total darkness.

Poisson was so confident that, a version of the story goes, when the time came for the competition’s presentations, he stood up during Fresnel’s lecture and confronted him. François Arago – the mathematician and physicist who headed the competition’s committee – swiftly carried out the experiment in his laboratory with a flame, filters and a 2 mm metal disc attached to a glass plate with wax. To everyone’s surprise, and Poisson’s chagrin, Arago observed the predicted spot. Fresnel won the competition, and the speck has since been known as the Arago spot, Poisson spot or Fresnel spot.

Speed of light, by James Clerk Maxwell (1865)

In 1860 at King’s College London, UK, the Scottish physicist James Clerk Maxwell began to make deep progress in the fields of electricity and magnetism, converting the experimental ideas of Michael Faraday into mathematical form.

A series of publications culminated in the 1865 paper “A dynamical theory of the electromagnetic field” (Philosophical Transactions of the Royal Society of London 155 459). Here, Maxwell derived a set of 20 partial differential equations (they were not cast into the vector-calculus notation familiar to us until Oliver Heaviside did so in 1884), alongside six wave equations: one for each spatial component of the electric field, E, and of the magnetic field, B. Maxwell concluded that he could “scarcely avoid the inference that light consists in the transverse undulations of the same medium which is the cause of electric and magnetic phenomena” – that is to say, he had predicted that light is an electromagnetic wave.

The wave (phase) speed, v, Maxwell derived was:

$v = \frac{1}{\sqrt{\mu\varepsilon}}$

where μ is a medium’s permeability and ε its permittivity. Maxwell took the permeability μ of air to be 1 and, using a value of ε for air established by a charged-capacitor experiment, calculated that the speed of light in air is 310,740,000 m/s. He compared this favourably with Hippolyte Fizeau’s measured value of 314,858,000 m/s and Jean Léon Foucault’s 298,000,000 m/s, concluding that his inference that light was an electromagnetic wave was correct.
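As a quick modern cross-check (illustrative only: it assumes today’s SI vacuum constants rather than Maxwell’s measured values for air), the same formula evaluated numerically gives the accepted speed of light.

```python
import math

# Evaluating v = 1/sqrt(mu * epsilon) with modern SI vacuum values. Maxwell
# used measured constants for air, hence his slightly different 310,740,000 m/s.
mu0 = 4e-7 * math.pi         # vacuum permeability, H/m (approximate)
eps0 = 8.8541878128e-12      # vacuum permittivity, F/m

v = 1.0 / math.sqrt(mu0 * eps0)
print(f"v = {v:,.0f} m/s")   # ~299,792,458 m/s, the speed of light
```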

Anomalous perihelion precession of Mercury, by Albert Einstein (1915)

In the 1840s the French astronomer Urbain Le Verrier carefully analysed the orbit of Mercury. He found that, instead of a precise ellipse as predicted by Newton’s laws, the perihelion of the planet’s elliptical orbit – its closest point to the Sun – is shifting around the Sun. The change is very slow, just 575 arcseconds per century, but astronomers at the time could only account for 532 arcseconds from interactions with other planets in the solar system, leaving 43 arcseconds unaccounted for.

The difference, however small, bothered astronomers. They proposed a range of solutions – an unseen planet, a near infinitesimal change to the exponent of 2 in Newton’s gravitational law, an oblate Sun – but everything seemed ad hoc. Then, in 1915, as he was finishing his general theory of relativity, the German theorist Albert Einstein was able to calculate the influence of curved space on Mercury’s orbit, deriving this additional shift of the perihelion precession as:

$\varepsilon = \frac{24\pi^3 a^2}{T^2 c^2 (1 - e^2)}$

where a is the semimajor axis of the planet’s ellipse, T its period, e its eccentricity and c the speed of light.

For Mercury, this comes to exactly 43 arcseconds per century, precisely the missing amount. While strictly speaking this was a postdiction, it was nonetheless impressive. “Can you imagine my joy,” Einstein wrote to Paul Ehrenfest that year, “with the result that the equations of the perihelion movement of Mercury prove correct? I was speechless for several days with excitement.”
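Readers can reproduce the figure themselves. The short calculation below (illustrative; it assumes Mercury’s standard modern orbital parameters) plugs the semimajor axis, period and eccentricity into the formula and converts the per-orbit shift into arcseconds per century.

```python
import math

# epsilon = 24 pi^3 a^2 / (T^2 c^2 (1 - e^2)), evaluated for Mercury.
a = 5.791e10          # semimajor axis, m
T = 87.969 * 86400    # orbital period, s
e = 0.2056            # orbital eccentricity
c = 2.998e8           # speed of light, m/s

eps_per_orbit = 24 * math.pi**3 * a**2 / (T**2 * c**2 * (1 - e**2))  # radians

orbits_per_century = 36525 / 87.969
arcsec = eps_per_orbit * orbits_per_century * (180 / math.pi) * 3600
print(f"{arcsec:.1f} arcseconds per century")   # ~43
```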

Second series of rare-earth elements, by Maria Goeppert Mayer (1941)

It’s not every day someone adds a new element to the periodic table, but German physicist Maria Goeppert Mayer went one step further and added an entire row.

While at Columbia University in the US – where she had to work without a salary because her husband was employed there – Mayer met Enrico Fermi and Harold Urey. Fermi was trying to puzzle out the decay products of uranium and elements that might lie beyond it, as element 93, neptunium, had just been discovered by Edwin McMillan and Philip Abelson. Fermi asked Goeppert Mayer to calculate the eigenfunctions of Erwin Schrödinger’s equation for the 5f electron orbitals of atoms near uranium (atomic number Z = 92) using the Thomas–Fermi model for the potential energy – a numerical statistical model developed independently by Llewellyn Thomas and Fermi in 1927 to approximate the distribution of electrons in high-Z atoms.

Numerically solving Schrödinger’s equation with the Thomas–Fermi potential for the radial eigenfunctions, Goeppert Mayer found the f orbitals start to be filled at critical values of Z (Z = 59 for 4f, and Z = 91 or 92 for 5f), with inaccuracies of a few units of Z expected due to the statistical nature of the model. At these critical values the atom ceases strong participation in chemical reactions. Mayer’s prediction verified Fermi’s suggestion that any elements beyond uranium were chemically similar to the already known rare-earth elements, thereby predicting the transuranic row. Goeppert Mayer would later share the 1963 Nobel Prize for Physics for development of the nuclear shell model.

Anomalous magnetic moment of the electron, by Julian Schwinger (1949)

During the Second World War, American theoretical physicist Julian Schwinger worked on wartime radar and waveguide technology, where he developed methods based on Green’s functions – a way of solving complicated differential equations by solving a simpler one giving the Green’s function, which can then be integrated to give the solution to the original. Complex in practice, it often can only be done perturbatively, but Schwinger was a master.

After the war, Schwinger turned his skill with Green’s functions to the pressing physics of the day, quantum electrodynamics (QED) – the interactions of electrons and light. After the work of Schrödinger and Paul Dirac, theorists now needed to include the self-interactions of the quantum, relativistic electron and photon fields to obtain the fine details of their behaviour. But calculations gave nasty infinities for measurable quantities like mass and charge. Schwinger was the first to hack through at least some of the mathematical minefields by using Green’s functions, and in a 1947 paper he gave his result for the so-called first-order radiative correction to the electron’s magnetic moment. His full theory culminated in a 1949 paper, with pages of dense equations predicting the first-order correction to be:

$\delta\mu = \left(\frac{\alpha}{2\pi}\right)\mu_0$

where α is the fine-structure constant (≈ 1/137) and μ0 the electron’s classical magnetic moment. This was quickly confirmed by experiment, and today the fraction α/2π resides atop Schwinger’s tombstone.
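As a quick numerical aside (my own check, using the modern value of α), evaluating Schwinger’s term shows how close a single first-order correction comes to the measured anomaly; higher-order terms account for the small remainder.

```python
import math

# Schwinger's first-order QED correction to the electron's magnetic moment.
alpha = 1 / 137.035999      # fine-structure constant (modern value)
a1 = alpha / (2 * math.pi)  # the alpha/2pi term on Schwinger's tombstone

print(f"alpha/2pi = {a1:.7f}")   # ~0.0011614
# The measured anomaly is ~0.0011597; higher-order terms in the QED series
# account for the remaining difference at the fourth significant figure.
```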

The establishment of QED – the most precise theory in science, whose fifth-order prediction for δμ for the electron has now been experimentally verified to 3 parts in 10¹³ – is important for the understanding of lasers, quantum computing and Mössbauer spectroscopy, and is the prototype on which the Standard Model of elementary particle physics is based. Richard Feynman called QED “the jewel of physics”.

7.65 MeV energy level in carbon-12, by Fred Hoyle (1953)

In 1953 English astronomer Fred Hoyle made a prediction that he realized later in life was required because he, and all life, existed.

In the 1930s scientists such as Hans Bethe had established that stars get their energy from the fusion of atomic nuclei – of protons (hydrogen ions) into helium nuclei (alpha particles), then pairs of these into beryllium-8 (8Be). Beyond that process, scientists had figured out that nitrogen, oxygen and other nuclei formed from carbon-12 (12C). However, no-one knew how 12C arose from the unstable 8Be nucleus. The full path of how the elements arose from burning within stars or after the Big Bang was a mystery, yet 12C is all around us.

While the highly unstable 8Be nuclei would quickly decay back into two alpha particles, calculations proposing that three alpha particles combine to form 12C seemed to be ruled out, as the reaction’s probability is too low to explain the amount of carbon produced. However, Hoyle boldly predicted a new energy level in 12C, at 7.65 MeV above its ground state. This excited 12C state, known as the “Hoyle state”, was at just the right resonance to have been formed by 8Be reacting with an alpha particle. While the Hoyle state nearly always decays back into three alpha particles, on average once in 2421.3 decays it goes to 12C’s ground state, giving off the extra energy as gamma rays. The 12C atoms then either stay as they are or fuse with an alpha particle to make oxygen, and so on up the chain. When the star explodes in a supernova, carbon and other nuclei cool into atoms and populate the universe.

Some months later, an experimental group at the California Institute of Technology, led by Ward Whaling, found such a 12C state at 7.68 ± 0.03 MeV by doing magnetic analysis of the alpha-particle spectrum from nitrogen-14 decay as they impacted 12C, thereby proving that Hoyle had correctly predicted the origin of one of the most important elements in the universe.

Parity violation in the weak interaction, by Tsung-Dao Lee and Chen-Ning Yang (1957)

Parity conservation – the idea that the world looks and behaves the same way whether viewed in a mirror or not – had been firmly established for electromagnetic and strong interactions by the 1950s. Almost all physicists expected the same to be true of the weak force. However, some decays of particles called kaons could not be explained using existing theories if parity conservation were true. The Chinese-American theorists Tsung-Dao Lee and Chen-Ning Yang therefore decided to look more closely at the experimental evidence for parity conservation in the known results of weak interaction physics. Surprisingly, they found none.

Parity violation To test Tsung-Dao Lee and Chen-Ning Yang’s theory, Chien-Shiung Wu looked at the emission of beta rays from cobalt-60 nuclei. It was first found that electron emission was concentrated downward relative to the particle’s spin. When the magnetic field, B, was reversed to change the spin direction, rather than seeing a mirror image of emission (a), they found that there were more electrons going upwards (b) – thereby proving parity violation for weak interactions.

As a result, the pair formulated a theory that left–right symmetry is violated by the weak interaction. Working with experimentalist Chien-Shiung Wu, they devised several experiments to look at different particle decays that proceeded via the weak force. Wu got on the case straight away, and by testing the properties of beta decay in cobalt-60, she observed an asymmetry that indicated parity violation and therefore confirmed Lee and Yang’s prediction.

Lee and Yang won the 1957 Nobel Prize for Physics for their prediction only 12 months after their paper was published, one of the quickest Nobel prize awards in history. Wu, however, did not share in the prize despite confirming the theory, an oversight that has only grown more controversial as time has passed.

Josephson effect, by Brian Josephson (1962)

The 1977 Nobel-prize-winning physicist Philip Anderson once recalled teaching Brian Josephson as a graduate student at the University of Cambridge: “This was a disconcerting experience for a lecturer, I can assure you, because everything had to be right or he would come up and explain it to me after class.”

But because of this relationship, Josephson was quick to show Anderson calculations he had made about two superconductors separated by a thin insulating layer or a short section of non-superconducting metal. He predicted that a “DC supercurrent” composed of pairs of electrons (Cooper pairs) could quantum tunnel from one superconductor to another, right through the barrier – an example of a macroscopic quantum effect.

Josephson calculated the form of the current and phase rate of change for such a junction to be:

$J = J_1 \sin(\Delta\Phi)$

$\frac{d}{dt}(\Delta\Phi) = \frac{2eV}{\hbar}$

where J1 is a parameter of the insulating junction called the critical current, so J is a dissipationless current. ΔΦ is the phase difference between the Cooper-pair wave functions on opposite sides of the barrier, e is the charge on an electron and V the potential difference between the superconductors.
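A consequence of the second relation, worth spelling out: hold a constant voltage V across the junction and the phase winds linearly, so the supercurrent oscillates at frequency f = 2eV/h (the AC Josephson effect, the basis of modern voltage standards). The short calculation below, assuming a one-microvolt bias, gives that characteristic frequency.

```python
# Josephson frequency f = 2eV/h for a junction held at a constant DC voltage.
e = 1.602176634e-19   # elementary charge, C
h = 6.62607015e-34    # Planck constant, J s

V = 1e-6              # assumed bias: 1 microvolt across the junction
f = 2 * e * V / h
print(f"f = {f/1e6:.1f} MHz per microvolt")   # ~483.6 MHz
```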

Experimental observation of the DC tunnelling current was reported about nine months later by Anderson and John Rowell of Bell Telephone Laboratories (now Nokia Bell Labs), and Josephson went on to win the 1973 Nobel prize for his prediction. Josephson junctions are now used in a variety of applications, such as in DC and AC electronic circuits, and to build SQUIDs (superconducting quantum interference devices) – technology that can be used as extremely sensitive magnetometers and voltmeters, as qubits for quantum computing, and more.

Dark matter, by Vera Rubin with W Kent Ford Jr (1970)

“Great astronomers told us it didn’t mean anything,” the American astronomer Vera Rubin once told an interviewer.

She was talking about her and Kent Ford Jr’s 1970 observation that outer stars orbiting in the Andromeda galaxy were all doing so at the same speed. They were told to look at more spiral galaxies; the effect persisted. The galaxies’ rotation curves (the plot of orbital speed of visible stars within the galaxy versus their radial distance from the galaxy centre) were “flat”, in seeming contradiction of Kepler’s laws. More alarming still, stars near the outer edges of the galaxies were orbiting so fast that the galaxies should have been flying apart.
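To see why a flat rotation curve is so alarming, the toy comparison below (illustrative only; it assumes a central mass of roughly 5 × 10¹⁰ solar masses standing in for a galaxy’s luminous matter) contrasts the Keplerian fall-off, v = √(GM/r), with the roughly constant speeds Rubin and Ford measured.

```python
import numpy as np

# Toy comparison, not Rubin and Ford's analysis: if most of a galaxy's mass
# sat near its centre, orbital speed should fall off as sqrt(GM/r); measured
# rotation curves instead stay roughly flat out to large radii.
G = 6.674e-11    # gravitational constant, SI units
M = 1e41         # ~5e10 solar masses, assumed concentrated centrally (kg)
kpc = 3.086e19   # one kiloparsec in metres

for r_kpc in (5, 10, 20, 40):
    v_kepler = np.sqrt(G * M / (r_kpc * kpc)) / 1e3   # km/s, falls with radius
    print(f"r = {r_kpc:2d} kpc   Keplerian ~{v_kepler:5.1f} km/s   observed ~ 220 km/s (flat)")
```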

Spinning too fast Vera Rubin and Kent Ford Jr’s observation that the outer stars in a spiral galaxy – like NGC 1232 here – were orbiting at the same speed, led them to predict dark matter. (Courtesy: ESO)

Rubin led a team in which Ford built new observational instruments – in particular an advanced spectrometer based on an electronic photomultiplier tube that allowed their precise astronomical observations to be captured in digital form for analysis.

Rubin and Ford Jr’s observation led them to predict that there was some mass inside the galaxies responsible for the anomalous motions – something their telescopes couldn’t see, but present in quantities about six times the amount of the luminous matter. The phenomenon had first been dubbed “missing mass” after a suggestive 1933 study of the Coma galaxy cluster by the Swiss astronomer Fritz Zwicky; Rubin and Ford provided the first strong evidence for what today we call “dark matter”, since it does not even emit photons. Calculations of temperature fluctuations in the cosmic microwave background, using the standard ΛCDM model of cosmology, reveal that the total mass-energy of the universe is 5% ordinary matter and energy, 27% dark matter and 68% dark energy. So a full 85% of the matter in the universe is non-luminous, yet it remains a mystery today, and many experiments are now trying to identify it.

Source: https://physicsworld.com/a/the-10-greatest-predictions-in-physics/

Intertwined entities: sci-fi anthology explores the impact of AI on human relationships

Taken from the January 2021 issue of Physics World. Members of the Institute of Physics can enjoy the full issue via the Physics World app.

“Science gives birth to technology, and technology gives birth to societal change. And it’s the societal change, especially ethical aspects of that, that interests me,” says Hugo- and Nebula-award-winning sci-fi author Nancy Kress. The quote features in an interview with Kress by Georgia Tech professor of science-fiction studies Lisa Yaszek, in the fascinating new book Entanglements: Tomorrow’s Lovers, Families, and Friends, an anthology of original sci-fi short stories about artificial intelligence (AI). For Kress, while the science is fascinating, it only makes for a good narrative when she can explore its impact on people. “Because stories are made out of and for people.”

The many facets of AI – from machine learning and virtual reality, to deep learning and neural networks – are becoming heavily intertwined in physics, whether it’s using AI to do better physics, or using physics to build better AI. There are countless new research papers on the subject, from the applications of machine learning in materials discovery to the plethora of applications in medical imaging and diagnostics. As we are (nearly) poised on the brink of a quantum-computing revolution, the AI one is (almost) already here, with all its opportunities and obstacles. But perhaps what we don’t talk about as much is the impact this ultramodern tech will undoubtedly have on human relationships, which are often dominated by emotion and not cold hard logic.

AI and sci-fi also have a long and interlinked relationship. Indeed, the word “robot” was first used to denote a fictional artificially intelligent humanoid in the 1920 play Rossum’s Universal Robots by Karel Čapek, shortly followed by Isaac Asimov’s Robot series of short stories, in which he developed the Three Laws of Robotics. But often these stories focus on dystopian worlds that seem far from our reality. In Entanglements the stories all explore a futuristic world where human and machine are more closely linked than ever, focusing on the emotional and artificial overlap as AI evolves and grows.

Consummate sci-fi readers will be pleased to know that the collection was put together by Sheila Williams, who is editor of Asimov’s Science Fiction magazine and has a couple of Hugo awards under her belt. Part of MIT Press’s Twelve Tomorrows series, the book consists of a dozen tales by well-known authors in the field, including the likes of Sam J Miller, Suzanne Palmer and Xia Jia (translated by Ken Liu). Entanglements also includes a number of specially commissioned artworks by Tatiana Plakhova, which she describes as “infographic abstracts” and which perfectly complement the weird, wonderful and complex stories.

Kress is the featured author in this anthology, and the opening tale is her story “Invisible People”, which attempts to deal with a number of ethically complex topics including genetic alteration, adoption, governmental control and, indeed, even individualism versus altruism. While Kress is undoubtedly a formidable writer, and her story is a fascinating read, I feel that she spends too long in setting up a complex backstory, and then rushes the story’s ending, ambiguous though it is. Despite this, it left me pondering many an ethical dilemma, and I enjoyed the longer interview with her that followed the tale.

A short and sharp story that I particularly like is Palmer’s “Don’t Mind Me”, which explores the always-ripe intersection between censorship and technology – only this time using an implant in the (literal) minds of children. While this is a tried and tested concept, Palmer has a fresh take – the implant in the children is used by parents to control everything their offspring see and learn in school, thereby perfectly passing on their biases. Topics deemed unfit (be it Roman history or Maya Angelou’s works) are automatically deleted from children’s memories, making it virtually impossible for the protagonist to pass high school, not to mention have any free-thinking opinions of his own.

I also enjoyed Jia’s “The Monk of Lingyin Temple”, which explores faith and science; while Rich Larson’s “Echo the Echo” is equal parts funny and heart-breaking.

My favourite story in the collection though is undoubtedly Mary Robinette Kowal’s “A Little Wisdom”, a lovely and sweet story that highlights the many ways in which AI could truly benefit humankind, while also realistically pointing out some potential issues. The slice-of-life story follows an elderly art historian and her robot support dog (she suffers from Parkinson’s disease) through what begins as a regular work day, but soon morphs into an emergency thanks to a tornado. The warm and cheering tale deftly interweaves technology and art, and the positive impact they have on human beings, especially when afraid. It left me feeling optimistic about the future, even one with AI overlords.

For those who are fans of science fiction as it applies to human beings on Earth, and enjoy humorous and ominous offerings such as Charlie Brooker’s TV series Black Mirror, this is a book to add to your reading list and later discuss with your book club. Oh, and Netflix: if you’re listening, I’m waiting for the mini-series.

  • 2020 MIT Press 240pp $19.95

Source: https://physicsworld.com/a/intertwined-entities-sci-fi-anthology-explores-the-impact-of-ai-on-human-relationships/

Perovskite sensor sees more like the human eye

retinomorphic sensor
Bio-inspired design for photosensitive perovskite-based capacitors could enable light sensors that respond only to movement. Published in: Matthew Ishimaru; Scilight 2020, 501106 (2020) DOI: 10.1063/10.0002944. Copyright © 2020 Author(s)

A new type of sensor that closely mimics how the human eye responds to changing visual stimuli could become the foundation for next-generation computer processors used in image recognition, autonomous vehicles and robotics. The so-called “retinomorphic” device is made from a class of semiconducting materials known as perovskites, and unlike a conventional camera, it is sensitive to changes in levels of illumination rather than the intensity of the input light.

The eyes of humans and other mammals are incredibly complex organs. Our retinas, for example, contain roughly 10⁸ photoreceptors, yet our optic nerves transmit only about 10⁶ signals to the primary visual cortex – meaning that the retina does a lot of pre-processing before it transmits information.

Part of this pre-processing relates to how the eye treats moving objects. When our field of view is static, our retinal cells are relatively quiet. Expose them to spatially or temporally varying signals, however, and their activity shoots up. This selective response – transmitting signals only in response to change – enables the retina to substantially compress the information it passes on.

Mimicking mammalian visual processing

In recent years, this mammalian optical sensing process has caught the attention of computer scientists. Traditional computer processors – known as von Neumann machines after the mathematical physicist John von Neumann, who pioneered their development in the mid-20th century – deal with input instructions in a sequential fashion. In contrast, the mammalian brain processes inputs via massively parallel networks, and studies have shown that computers that follow suit – neuromorphic computers – should outperform von Neumann machines for certain machine-learning tasks in terms of both speed and power consumption.

Retinomorphic sensors – optical devices that attempt to mimic mammalian visual processing – are a potential building block for such computers, and thin-film semiconductors such as metal halide perovskites are considered good candidates for making them. Materials of this type are attractive because they can be tuned to absorb light over a wide range of wavelengths. They have also already proved themselves in artificial synapses that react to light, albeit in structures that are generally designed for transmitting and processing information rather than for optical sensing. However, while researchers have previously used perovskites to make optical sensors that mimic the geometry of the eye, the fundamental operating mode of these sensors still requires sequential processing.

Spiking sensor

A team led by John Labram at Oregon State University in the US has now shown that a simple photosensitive capacitor can reproduce some characteristics of mammalian retinas. The new device is made from a double-layer dielectric: the bottom layer, silicon dioxide, is highly insulating and hardly responds to light, while the top layer is the light-sensitive perovskite methylammonium lead iodide (MAPbI3).

The team found that the capacitance of this MAPbI3-silicon dioxide bilayer changes dramatically when exposed to light. When Labram and his student Cinthya Trujillo Herrera placed it in series with an external resistor and exposed it to a light source, they observed a large voltage spike across the resistor. Unlike in a normal camera or photodiode, however, this voltage spike quickly decayed away even though the intensity of the light remained constant. The result is a sensor that responds, like the retina, to changes in light levels rather than the intensity of the light.
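A toy circuit model illustrates the idea (this is not the team’s actual model; the component values below are assumed for illustration): a light-dependent capacitance in series with a resistor under a fixed bias produces a voltage spike across the resistor when the capacitance changes, and that spike decays even though the illumination stays constant.

```python
import numpy as np

# Toy model: a light-dependent capacitance C(t) in series with a resistor R
# under a fixed bias. A step change in illumination changes C, so charge
# flows briefly and the resistor voltage spikes, then decays.
R = 1e6                        # series resistance, ohms (assumed)
C_dark, C_light = 1e-9, 3e-9   # capacitance in the dark / under light, F (assumed)
V_bias = 1.0                   # applied bias, volts
dt, t_end = 1e-5, 0.02         # time step and duration, seconds

t = np.arange(0, t_end, dt)
C = np.where(t < 0.005, C_dark, C_light)   # light switched on at t = 5 ms

Q = C_dark * V_bias            # start in steady state (no current, V_R = 0)
V_R = np.zeros_like(t)
for k in range(len(t)):
    i = (V_bias - Q / C[k]) / R   # current through the series resistor
    V_R[k] = i * R                # voltage read out across the resistor
    Q += i * dt                   # charge accumulates on the capacitor

print(f"peak |V_R| = {np.abs(V_R).max():.2f} V at t = {t[np.abs(V_R).argmax()]*1e3:.1f} ms")
print(f"|V_R| at end (light still on) = {abs(V_R[-1]):.4f} V")
```

Because the readout depends on the rate of change of the capacitance rather than on its value, a static scene produces no signal in this sketch, the high-pass character that loosely mimics the retina.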

Filtering out unimportant information

After measuring the light response of several such devices, the team developed a numerical model based on Kirchhoff’s laws to show how the devices would behave if they were arranged in arrays. This model enabled them to simulate an array of retinomorphic sensors and predict how a retinomorphic video camera would react to different types of input stimuli. One of their tests involved analysing footage of a bird flying into view. When the bird stopped at a (static, and therefore invisible to the sensor) bird feeder, it all but disappeared. Once the bird took off, it reappeared – and, in the process, revealed the presence of the feeder, which became visible to the sensor only when the bird’s take-off set it swaying.

“The new design thus inherently filters out unimportant information, such as static images, providing a voltage solely in response to movement,” Labram explains. “This behaviour reasonably reflects optical sensing in mammals.”

The researchers, who report their work in Appl. Phys. Lett., say they now plan to better understand the fundamental physics of these devices and how their signals would be interpreted by image-recognition algorithms. They also hope to address some of the challenges associated with scaling these devices up to sensor arrays. “Going from a brand-new device paradigm to a working array is almost certainly going to expose challenges we haven’t yet considered,” Labram tells Physics World. “There are also quite a few operation-related questions we will have to answer — in particular as regards performance limits, stability, predictability and device-to-device variability.”

Source: https://physicsworld.com/a/perovskite-sensor-sees-more-like-the-human-eye/

How to cool ion beams using electron pulses

CSRm ion storage ring
Pulsed testbed: the CSRm ion storage ring in Lanzhou, China. (Courtesy: IMP)

Physicists in the US and China have studied how a pulsed beam of electrons can be used to cool a beam of high-energy ions – a task that is normally done by a continuous beam of electrons. Researchers led by Max Bruker at the Thomas Jefferson National Accelerator Facility in the US, alongside a team at the Institute of Modern Physics (IMP) of the Chinese Academy of Sciences, modified a continuous-beam electron cooling system to operate in pulsed mode. Their results suggest that it should be possible to cool much higher energy ion beams using pulsed electron beams – which is good news for physicists designing the next generation of ion storage rings.

Storage-ring facilities that accelerate and store beams of protons and ions at low to medium energies use a technique called “electron cooling” to prevent their beams from degrading. This involves merging the ions with a beam of electrons, with both beams moving at roughly the same speed. Over time, the ions exchange momentum with the electrons until equilibrium is reached. This cools the ions down, preventing them from straying away from the centre of the beam.

Normally, this is done using continuous electron beams with energies as high as 4.3 MeV. However, technological limitations on using static electric fields to accelerate electrons mean that creating continuous electron beams at higher energies is extremely difficult. This poses a challenge to the designers of future storage rings such as the US’s Electron Ion Collider, which will require electron beams as energetic as 50 MeV or more.

RF fields

To reach higher energies, electron beams are accelerated using radio-frequency (RF) fields, which results in a pulsed beam. Recently, the first pulsed electron cooling system was installed at the Relativistic Heavy Ion Collider at Brookhaven National Laboratory in the US – operating at modest electron energies of about 2 MeV.

Studies using computer simulations suggest that the cooling effects of pulsed and continuous electron beams are different – and therefore it is important that pulsed cooling be studied experimentally before it is implemented in higher-energy, next-generation facilities.

Physicists at Jefferson Lab and IMP first teamed up in 2012 to study how pulsed electron beams could be used for cooling. Between 2016 and 2019 they performed four pulsed-beam cooling experiments at the CSRm storage ring at the IMP’s Heavy Ion Research Facility in Lanzhou. Instead of using an RF system to accelerate cooling electrons, they modified an existing continuous-beam system to deliver pulses of electrons. The researchers then measured how the profile of the cooled ion beam evolved over time, both in transverse and longitudinal directions.

Crucially, the teams’ experiments revealed that ions can be lost through transverse heating caused by uneven electron bunch lengths, highlighting the need for electron bunches with highly stable properties. Yet if bunch timings and lengths can be reliably maintained, the dynamics of the ion beams they interact with will not be adversely affected by their non-continuous nature. The results now pave the way for a new generation of ion ring facilities, capable of cooling ion beams at higher energies than were ever previously possible.

The research is described in Physical Review Accelerators and Beams.

Source: https://physicsworld.com/a/how-to-cool-ion-beams-using-electron-pulses/
