What is quantum cognition? Physics theory could predict human behavior. – Livescience.com

The same fundamental platform that allows Schrödinger's cat to be both alive and dead, and that lets two particles "speak to each other" even across a galaxy's distance, could help to explain perhaps the most mysterious phenomenon of all: human behavior.

Quantum physics and human psychology may seem completely unrelated, but some scientists think the two fields overlap in interesting ways. Both disciplines attempt to predict how unruly systems might behave in the future. The difference is that one field aims to understand the fundamental nature of physical particles, while the other attempts to explain human nature along with its inherent fallacies.

"Cognitive scientists found that there are many 'irrational' human behaviors," Xiaochu Zhang, a biophysicist and neuroscientist at the University of Science and Technology of China in Hefei, told Live Science in an email. Classical theories of decision-making attempt to predict what choice a person will make given certain parameters, but fallible humans don't always behave as expected. Recent research suggests that these lapses in logic "can be well explained by quantum probability theory," Zhang said.

Related: Twisted Physics: 7 Mind-Blowing Findings

Zhang stands among the proponents of so-called quantum cognition. In a new study published Jan. 20 in the journal Nature Human Behaviour, he and his colleagues investigated how concepts borrowed from quantum mechanics can help psychologists better predict human decision-making. While recording what decisions people made on a well-known psychology task, the team also monitored the participants' brain activity. The scans highlighted specific brain regions that may be involved in quantum-like thought processes.

The study is "the first to support the idea of quantum cognition at the neural level," Zhang said.

Cool. Now what does that really mean?

Quantum mechanics describes the behavior of the tiny particles that make up all matter in the universe, namely atoms and their subatomic components. One central tenet of the theory suggests a great deal of uncertainty in this world of the very small, something not seen at larger scales. For instance, in the big world, one can know where a train is on its route and how fast it's traveling, and given this data, one could predict when that train should arrive at the next station.

Now, swap out the train for an electron, and your predictive power disappears: you can't know the exact location and momentum of a given electron, but you could calculate the probability that the particle may appear in a certain spot, traveling at a particular rate. In this way, you could gain a hazy idea of what the electron might be up to.

Just as uncertainty pervades the subatomic world, it also seeps into our decision-making process, whether we're debating which new series to binge-watch or casting our vote in a presidential election. Here's where quantum mechanics comes in. Unlike classical theories of decision-making, the quantum world makes room for a certain degree of uncertainty.

Related: The Funniest Theories in Physics

Classical psychology theories rest on the idea that people make decisions in order to maximize "rewards" and minimize "punishments"; in other words, to ensure their actions result in more positive outcomes than negative consequences. This logic, known as "reinforcement learning," falls in line with Pavlovian conditioning, wherein people learn to predict the consequences of their actions based on past experiences, according to a 2009 report in the Journal of Mathematical Psychology.

If truly constrained by this framework, humans would consistently weigh the objective values of two options before choosing between them. But in reality, people don't always work that way; their subjective feelings about a situation undermine their ability to make objective decisions.

Consider an example:

Imagine you're placing bets on whether a tossed coin will land on heads or tails. Heads gets you $200, tails costs you $100, and you can choose to toss the coin twice. When placed in this scenario, most people choose to take the bet twice regardless of whether the initial throw results in a win or a loss, according to a study published in 1992 in the journal Cognitive Psychology. Presumably, winners bet a second time because they stand to gain money no matter what, while losers bet in an attempt to recover their losses, and then some. However, if players aren't allowed to know the result of the first coin flip, they rarely make the second gamble.

When known, the first flip does not sway the choice that follows, but when unknown, it makes all the difference. This paradox does not fit within the framework of classical reinforcement learning, which predicts that the objective choice should always be the same. In contrast, quantum mechanics takes uncertainty into account and actually predicts this odd outcome.
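The quantum-probability account of this paradox can be sketched numerically. All numbers below are assumptions for illustration: the betting rates are merely in the spirit of the 1992 findings, and the interference phase is a free parameter that modelers fit to data. This is not the model from any particular paper.

```python
import math

# Illustrative sketch of quantum probability applied to the two-stage gamble.
p_win = 0.5            # chance the first flip was a win
p_bet_if_win = 0.69    # assumed betting rate after a known win
p_bet_if_loss = 0.59   # assumed betting rate after a known loss

# Classical law of total probability: with the outcome unknown, the betting
# rate should fall between the two known-outcome rates.
classical = p_win * p_bet_if_win + (1 - p_win) * p_bet_if_loss

# Quantum probability adds an interference term between the two unresolved
# outcomes; theta is a free phase parameter. A destructive phase pushes the
# unknown-outcome betting rate below both known-outcome rates, matching the
# paradoxical behavior described above.
def quantum(theta):
    interference = 2 * math.cos(theta) * math.sqrt(
        p_win * p_bet_if_win * (1 - p_win) * p_bet_if_loss)
    return classical + interference

print(round(classical, 2))     # 0.64 -- between 0.59 and 0.69
print(round(quantum(2.5), 2))  # 0.13 -- below both, as observed
```

With the phase set to zero, the quantum formula collapses back to the classical one, which is why classical reinforcement learning appears as a special case of the quantum models.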

"One could say that the 'quantum-based' model of decision-making refers essentially to the use of quantum probability in the area of cognition," Emmanuel Haven and Andrei Khrennikov, co-authors of the textbook "Quantum Social Science" (Cambridge University Press, 2013), told Live Science in an email.

Related: The 18 Biggest Unsolved Mysteries in Physics

Just as a particular electron might be here or there at a given moment, quantum mechanics assumes that the first coin toss resulted in both a win and a loss, simultaneously. (In other words, in the famous thought experiment, Schrödinger's cat is both alive and dead.) While caught in this ambiguous state, known as "superposition," an individual's final choice is unknown and unpredictable. Quantum mechanics also acknowledges that people's beliefs about the outcome of a given decision, whether it will be good or bad, often reflect what their final choice ends up being. In this way, people's beliefs interact, or become "entangled," with their eventual action.

Subatomic particles can likewise become entangled and influence each other's behavior even when separated by great distances. For instance, measuring a particle located in Japan would instantly affect the state of its entangled partner in the United States. In psychology, a similar analogy can be drawn between beliefs and behaviors. "It is precisely this interaction," or state of entanglement, "which influences the measurement outcome," Haven and Khrennikov said. The measurement outcome, in this case, refers to the final choice an individual makes. "This can be precisely formulated with the aid of quantum probability."

Scientists can mathematically model this entangled state of superposition, in which two particles affect each other even if they're separated by a large distance, as demonstrated in a 2007 report published by the Association for the Advancement of Artificial Intelligence. And remarkably, the final formula accurately predicts the paradoxical outcome of the coin toss paradigm. "The lapse in logic can be better explained by using the quantum-based approach," Haven and Khrennikov noted.

In their new study, Zhang and his colleagues pitted two quantum-based models of decision-making against 12 classical psychology models to see which best predicted human behavior during a psychological task. The experiment, known as the Iowa Gambling Task, is designed to evaluate people's ability to learn from mistakes and adjust their decision-making strategy over time.

In the task, participants draw from four decks of cards. Each card either earns the player money or costs them money, and the object of the game is to earn as much money as possible. The catch lies in how each deck of cards is stacked. Drawing from one deck may earn a player large sums of money in the short term, but it will cost them far more cash by the end of the game. Other decks deliver smaller sums of money in the short term, but fewer penalties overall. Through game play, winners learn to mostly draw from the "slow and steady" decks, while losers draw from the decks that earn them quick cash and steep penalties.
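A toy version of the task makes the deck structure concrete. The payoff numbers and the simple learning rule below are assumptions for illustration; published IGT payoff schedules, and the 14 models compared in the study, differ in their details.

```python
import random

# Toy Iowa Gambling Task: decks A and B pay big but carry big penalties
# (negative expected value per draw); decks C and D pay small with small
# penalties (positive expected value). Payoffs are illustrative only.
def draw(deck, rng):
    if deck in "AB":
        return 100 - (250 if rng.random() < 0.5 else 0)  # mean -25 per draw
    return 50 - (50 if rng.random() < 0.5 else 0)        # mean +25 per draw

# A minimal classical reinforcement learner: track each deck's average
# payoff, usually pick the best-looking deck, occasionally explore.
def simulate(trials=2000, eps=0.2, seed=0):
    rng = random.Random(seed)
    value = {d: 0.0 for d in "ABCD"}
    count = {d: 0 for d in "ABCD"}
    for _ in range(trials):
        explore = rng.random() < eps
        deck = rng.choice("ABCD") if explore else max(value, key=value.get)
        reward = draw(deck, rng)
        count[deck] += 1
        value[deck] += (reward - value[deck]) / count[deck]  # running mean
    return value

value = simulate()
# After enough draws, the estimates favor the "slow and steady" decks C and D,
# mirroring how healthy players learn to avoid the quick-cash decks.
```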

Historically, those with drug addictions or brain damage perform worse on the Iowa Gambling Task than healthy participants, which suggests that their condition somehow impairs decision-making abilities, as highlighted in a study published in 2014 in the journal Applied Neuropsychology: Child. This pattern held true in Zhang's experiment, which included about 60 healthy participants and 40 who were addicted to nicotine.

The two quantum models made similar predictions to the most accurate among the classical models, the authors noted. "Although the [quantum] models did not overwhelmingly outperform the [classical] ... one should be aware that the [quantum reinforcement learning] framework is still in its infancy and undoubtedly deserves additional studies," they added.

Related: 10 Things You Didn't Know About the Brain

To bolster the value of their study, the team took brain scans of each participant as they completed the Iowa Gambling Task. In doing so, the authors attempted to peek at what was happening inside the brain as participants learned and adjusted their game-play strategy over time. Outputs generated by the quantum model predicted how this learning process would unfold, and thus, the authors theorized that hotspots of brain activity might somehow correlate with the models' predictions.

The scans did reveal a number of active brain areas in the healthy participants during game play, including activation of several large folds within the frontal lobe known to be involved in decision-making. In the smoking group, however, no hotspots of brain activity seemed tied to predictions made by the quantum model. As the model reflects participants' ability to learn from mistakes, the results may illustrate decision-making impairments in the smoking group, the authors noted.

However, "further research is warranted" to determine what these brain activity differences truly reflect in smokers and non-smokers, they added. "The coupling of the quantum-like models with neurophysiological processes in the brain ... is a very complex problem," Haven and Khrennikov said. "This study is of great importance as the first step towards its solution."

Models of classical reinforcement learning have shown "great success" in studies of emotion, psychiatric disorders, social behavior, free will and many other cognitive functions, Zhang said. "We hope that quantum reinforcement learning will also shed light on [these fields], providing unique insights."

In time, perhaps quantum mechanics will help explain pervasive flaws in human logic, as well as how that fallibility manifests at the level of individual neurons.

Originally published on Live Science.

A Tiny Glass Bead Goes as Still as Nature Allows – WIRED

Inside a small metal box on a laboratory table in Vienna, physicist Markus Aspelmeyer and his team have engineered, perhaps, the quietest place on earth.

The area in question is a microscopic spot in the middle of the box. Here, levitating in midair (except there is no air, because the box is in a vacuum) is a tiny glass bead a thousand times smaller than a grain of sand. Aspelmeyer's apparatus uses lasers to render this bead literally motionless. It is as still as it could possibly be, as permitted by the laws of physics: it's reached what physicists call the bead's motional ground state. "The ground state is the limit where you cannot extract any more energy from an object," says Aspelmeyer, who works at the University of Vienna. They can maintain the bead's motionlessness for hours at a time.

This stillness is different from anything you've ever perceived: overlooking that lake in the mountains, sitting in a sound-proofed studio, or even just staring at your laptop as it rests on the table. As calm as that table seems, if you could zoom in on it, you would see its surface "being attacked by air molecules that circulate via your ventilation system," says Aspelmeyer. Look hard enough and you'll see microscopic particles or tiny pieces of lint rolling around. In our day-to-day lives, stillness is an illusion. We're simply too large to notice the chaos.

Kahan Dare and Manuel Reisenbauer, physicists at the University of Vienna, adjust the apparatus where the levitated nanoparticle sits.

But this bead is truly still, regardless of whether you are judging it as a human or a dust mite. And at this level of stillness, our conventional wisdom about motion breaks down, as the bizarre rules of quantum mechanics kick in. For one thing, the bead becomes "delocalized," says Aspelmeyer. "The bead spreads out." It no longer has a definite position; like a ripple in a pond, it stretches over an expanse rather than sitting at a particular location. Instead of maintaining a sharp boundary between bead and vacuum, the bead's outline becomes cloudy and diffuse.

Technically, although the bead is at the limit of its motionlessness, it still moves about a thousandth of its own diameter. Physicists have a cool name for this residual motion: "It's called the vacuum energy of the system," says Aspelmeyer. Put another way, nature does not allow any object to have completely zero motion. There must always be some quantum jiggle.

The bead's stillness comes with another caveat: Aspelmeyer's team has only forced the bead into its motional ground state along one dimension, not all three. But even achieving this level of stillness took them 10 years. One major challenge was simply getting the bead to stay levitated inside the laser beam, says physicist Uroš Delić of the University of Vienna. Delić has worked on the experiment since its nascence: first as an undergraduate student, then a PhD student, and now as a postdoctoral researcher.

New Centers Lead the Way towards a Quantum Future – Energy.gov

The world of quantum is the world of the very, very small. At sizes near those of atoms and smaller, the rules of physics start morphing into something unrecognizable, at least to us in the regular world. While quantum physics seems bizarre, it offers huge opportunities.

Quantum physics may hold the key to vast technological improvements in computing, sensing, and communication. Quantum computing may be able to solve problems in minutes that would take lifetimes on today's computers. Quantum sensors could act as extremely high-powered antennas for the military. Quantum communication systems could be nearly unhackable. But we don't yet have the knowledge or capacity to take advantage of these benefits.

The Department of Energy (DOE) recently announced that it will establish Quantum Information Science Centers to help lay the foundation for these technologies. As Congress put forth in the National Quantum Initiative Act, the DOEs Office of Science will make awards for at least two and up to five centers.

These centers will draw on both quantum physics and information theory to give us a soup-to-nuts understanding of quantum systems. Teams of researchers from universities, DOE national laboratories, and private companies will run them. Their expertise in quantum theory, technology development, and engineering will help each center undertake major, cross-cutting challenges. The centers' work will range from discovery research up to developing prototypes. They'll also address a number of different technical areas. Each center must tackle at least two of these subjects: quantum communication, quantum computing and emulation, quantum devices and sensors, materials and chemistry for quantum systems, and quantum foundries for synthesis, fabrication, and integration.

The impacts won't stop at the centers themselves. Each center will have a plan in place to transfer technologies to industry or other research partners. They'll also work to leverage DOE's existing facilities and collaborate with non-DOE projects.

As the nations largest supporter of basic research in the physical sciences, the Office of Science is thrilled to head this initiative. Although quantum physics depends on the behavior of very small things, the Quantum Information Science Centers will be a very big deal.

The Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, please visit https://www.energy.gov/science.

Scientists cooled a nanoparticle to the quantum limit – Science News

A tiny nanoparticle has been chilled to the max.

Physicists cooled a nanoparticle to the lowest temperature allowed by quantum mechanics. The particle's motion reached what's known as the ground state, or lowest possible energy level.

In a typical material, the amount that its atoms jostle around indicates its temperature. But in the case of the nanoparticle, scientists can define an effective temperature based on the motion of the entire nanoparticle, which is made up of about 100 million atoms. That temperature reached twelve-millionths of a kelvin, scientists report January 30 in Science.

Levitating it with a laser inside of a specially designed cavity, Markus Aspelmeyer of the University of Vienna and colleagues reduced the nanoparticle's motion to the ground state, a minimum level set by the Heisenberg uncertainty principle, which states that there's a limit to how well you can simultaneously know the position and momentum of an object.
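The reported effective temperature is roughly the energy scale set by the optical trap itself. Assuming an illustrative trap frequency of about 250 kilohertz (the experiment's actual value may differ), one quantum of motional energy, ħω, corresponds to a temperature of about 12 microkelvin:

```python
import math

hbar = 1.054571817e-34  # reduced Planck constant, J*s
k_B = 1.380649e-23      # Boltzmann constant, J/K

# Assumed trap frequency for illustration; the experimental value may differ.
freq_hz = 250e3
omega = 2 * math.pi * freq_hz

# Near the ground state, one quantum of motional energy hbar*omega sets the
# effective temperature scale hbar*omega / k_B.
T = hbar * omega / k_B
print(f"{T * 1e6:.0f} microkelvin")  # -> 12 microkelvin
```

This back-of-the-envelope number shows why reaching the ground state of a sub-megahertz mechanical oscillator demands microkelvin-scale effective temperatures.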

While quantum mechanics is unmistakable in tiny atoms and electrons, its effects are harder to observe on larger scales. To better understand the theory, physicists have previously isolated its effects in other solid objects, such as vibrating membranes or beams (SN: 4/25/18). But nanoparticles have the advantage that they can be levitated and precisely controlled with lasers.

Eventually, Aspelmeyer and colleagues aim to use cooled nanoparticles to study how gravity behaves for quantum objects, a poorly understood realm of physics. "This is the really long-term dream," he says.

Have We Solved the Black Hole Information Paradox? – Scientific American

Black holes, some of the most peculiar objects in the universe, pose a paradox for physicists. Two of our best theories give us two different, and seemingly contradictory, pictures of how these objects work. Many scientists, including myself, have been trying to reconcile these visions, not just to understand black holes themselves, but also to answer deeper questions, such as "What is spacetime?" While I and other researchers made some partial progress over the years, the problem persisted. In the past year or so, however, I have developed a framework that I believe elegantly addresses the problem and gives us a glimpse of the mystery of how spacetime emerges at the most fundamental level.

Here is the problem: From the perspective of general relativity, black holes arise if the density of matter becomes too large and gravity collapses the material all the way toward its central point. When this happens, gravity is so strong in this region that nothing, not even light, can escape. The inside of the black hole, therefore, cannot be seen from the outside, even in principle, and the boundary, called the event horizon, acts as a one-way membrane: nothing can go from the interior to the exterior, but there is no problem in falling through it from the exterior to the interior.

But when we consider the effect of quantum mechanics, the theory governing elementary particles, we get another picture. In 1974, Stephen Hawking presented a calculation that made him famous. He discovered that, if we include quantum mechanical effects, a black hole in fact radiates, although very slowly. As a result, it gradually loses its mass and eventually evaporates. This conclusion has been checked by multiple methods now, and its basic validity is beyond doubt. The odd thing, however, is that in Hawking's calculation, the radiation emitted from a black hole does not depend on how the object was created. This means that two black holes created from different initial states can end up with the identical final radiation.

Is this a problem? Yes, it is. Modern physics is built on the assumption that if we have perfect knowledge about a system, then we can predict its future and infer its past by solving the equation of motion. Hawkings result would mean that this basic tenet is incorrect. Many of us thought that this problem was solved in 1997 when Juan Maldacena discovered a new way to view the situation, which seemed to prove no information was lost.

Case closed? Not quite. In 2012, Ahmed Almheiri and collaborators at the University of California, Santa Barbara, presented in their influential paper a strong argument that if information is preserved in the Hawking emission process, then it is inconsistent with the smoothness of the horizon, the notion that an object can pass through the event horizon without being affected. Given that the option of information loss is out of the question, they argued that the black hole horizon is in fact not a one-way membrane but something like an unbreakable wall, which they called a firewall.

This confused theorists tremendously. As much as they disliked information loss, they abhorred firewalls too. Among other things, the firewall idea implies that Einstein's general relativity is completely wrong, at least at the horizon of a black hole. It is also utterly counterintuitive: for a large black hole, gravity at the horizon is actually very weak, because the horizon lies far away from the central point where all the matter is located. A region near the horizon thus looks pretty much like empty space, and yet the firewall argument says that space must abruptly end at the location of the horizon.

The main thrust of my new work is to realize that there are multiple layers of descriptions of a black hole, and the preservation of information and the smoothness of the horizon refer to theories at different layers. At one level, we can describe a black hole as viewed from a distance: the black hole is formed by collapse of matter, which eventually evaporates, leaving the quanta of Hawking radiation in space. From this perspective, Maldacena's insight holds and there is no information loss in the process. That is because, in this picture, an object falling toward the black hole never enters the horizon, not because of a firewall but because of time delay between the clock of the falling object and that of a distant observer. The object seems to be slowly absorbed into the horizon, and its information is later sent back to space in the form of subtle correlations between particles of Hawking radiation.

On the other hand, the picture of the black hole interior emerges when looking at the system from the perspective of someone falling into it. Here we must ignore the fine details of the system that an infalling observer could not see, because they have only a finite time before hitting the singular point at the center of the black hole. This limits the amount of information they can access, even in principle. The world the infalling observer perceives, therefore, is a coarse-grained one. And in this picture, information need not be preserved, because we already threw away some information even to arrive at this perspective. This is how the existence of interior spacetime can be compatible with the preservation of information: they are properties of the descriptions of nature at different levels!

To understand this concept better, the following analogy might help. Imagine water in a tank and consider a theory describing waves on the surface. At a fundamental level, water consists of a bunch of water molecules, which move, vibrate and collide with each other. With perfect knowledge of their properties, we can describe them deterministically without information loss. This description would be complete, and there would be no need to even introduce the concept of waves. On the other hand, we could focus on the waves by overlooking molecular level details and describing the water as a liquid. The atomic-level information, however, is not preserved in this description. For example, a wave can simply disappear, although the truth is that the coherent motion of water molecules that created the wave was transformed into a more random motion of each molecule without anything disappearing.

This framework tells us that the picture of spacetime offered by general relativity is not as fundamental as we might have thought; it is merely a picture that emerges at a higher level in the hierarchical descriptions of nature, at least concerning the interior of a black hole. Similar ideas have been discussed earlier in varying forms, but the new framework allows us to explicitly identify the relevant microscopic degrees of freedom (in other words, nature's fundamental building blocks) participating in the emergence of spacetime, which surprisingly involves elements that we normally think of as located far away from the region of interest.

This new way of thinking about the paradox can also be applied to a recent setup devised by Geoff Penington, Stephen H. Shenker, Douglas Stanford and Zhenbin Yang, in which Maldacena's scenario is applied more rigorously, but in simplified systems. This allows us to identify which features of a realistic black hole are or are not captured by such analyses.

Beginning with the era of Descartes and Galileo, revolutions in physics have often been associated with new understandings of the concept of spacetime, and it seems that we are now in the middle of another such revolution. I strongly suspect that we may soon witness the emergence of a new understanding of nature at a qualitatively different and deeper level.

What Is Quantum Computing and How Does it Work? – Built In

Accustomed to imagining worst-case scenarios, many cryptography experts are more concerned than usual these days: one of the most widely used schemes for safely transmitting data is poised to become obsolete once quantum computing reaches a sufficiently advanced state.

The cryptosystem known as RSA provides the safety structure for a host of privacy and communication protocols, from email to internet retail transactions. Current standards rely on the fact that no one has the computing power to test every possible way to de-scramble your data once encrypted, but a mature quantum computer could try every option within a matter of hours.

It should be stressed that quantum computers haven't yet hit that level of maturity, and won't for some time, but when a large, stable device is built (or if it's built, as a diminishing minority argue), its unprecedented ability to factor large numbers would essentially leave the RSA cryptosystem in tatters. Thankfully, the technology is still a ways away, and the experts are on it.
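To see why factoring is the whole ballgame, here is textbook RSA at toy scale. The primes below are tiny, so even naive trial division breaks the key; real deployments use moduli of 2048 bits or more, which no known classical method can factor in any reasonable time, but which a large fault-tolerant quantum computer running Shor's algorithm could.

```python
# Textbook RSA with toy primes (illustration only; real RSA also uses
# padding schemes and 2048-bit-plus moduli).
p, q = 61, 53
n = p * q                    # public modulus
phi = (p - 1) * (q - 1)
e = 17                       # public exponent
d = pow(e, -1, phi)          # private exponent, derived from the factors

secret = 42
cipher = pow(secret, e, n)   # anyone can encrypt with the public key (n, e)

# An attacker who can factor n rebuilds the private key and reads the message.
def factor(m):
    f = 2
    while m % f:
        f += 1
    return f, m // f

fp, fq = factor(n)
d_recovered = pow(e, -1, (fp - 1) * (fq - 1))
print(pow(cipher, d_recovered, n))  # -> 42, the secret recovered
```

The security of the scheme lives entirely in the difficulty of that `factor` step, which is exactly the step a mature quantum computer would make easy.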

"Don't panic." That's what Mike Brown, CTO and co-founder of quantum-focused cryptography company ISARA Corporation, advises anxious prospective clients. The threat is far from imminent. "What we hear from the academic community and from companies like IBM and Microsoft is that a 2026-to-2030 timeframe is what we typically use from a planning perspective in terms of getting systems ready," he said.

Cryptographers from ISARA are among several contingents currently taking part in the Post-Quantum Cryptography Standardization project, a contest of quantum-resistant encryption schemes. The aim is to standardize algorithms that can resist attacks levied by large-scale quantum computers. The competition was launched in 2016 by the National Institute of Standards and Technology (NIST), a federal agency that helps establish tech and science guidelines, and is now gearing up for its third round.

Indeed, the level of complexity and stability required of a quantum computer to launch the much-discussed RSA attack is very extreme, according to John Donohue, scientific outreach manager at the University of Waterloo's Institute for Quantum Computing. Even granting that timelines in quantum computing, particularly in terms of scalability, are points of contention, "the community is pretty comfortable saying that's not something that's going to happen in the next five to 10 years," he said.

When Google announced that it had achieved "quantum supremacy," meaning it had used a quantum computer to run, in minutes, an operation that would take thousands of years to complete on a classical supercomputer, that machine operated on 54 qubits, the computational bedrocks of quantum computing. While IBM's 53-qubit Q system operates at a similar level, many current prototypes operate on as few as 20 or even five qubits.

But how many qubits would be needed to crack RSA? "Probably on the scale of millions of error-tolerant qubits," Donohue told Built In.

Scott Aaronson, a computer scientist at the University of Texas at Austin, underscored the same last year in his popular blog after presidential candidate Andrew Yang tweeted that "no code is uncrackable" in the wake of Google's proof-of-concept milestone.

That's the good news. The bad news is that, while cryptography experts gain more time to keep our data secure from quantum computers, the technology's numerous potential upsides, ranging from drug discovery to materials science to financial modeling, are also largely forestalled. And that question of error tolerance continues to stand as quantum computing's central, Herculean challenge. But before we wrestle with that, let's get a better elemental sense of the technology.

Quantum computers process information in a fundamentally different way than classical computers. Traditional computers operate on binary bits: information processed in the form of ones or zeroes. But quantum computers transmit information via quantum bits, or qubits, which can exist as one, zero, or both simultaneously. That's a simplification, and we'll explore some nuances below, but that capacity, known as superposition, lies at the heart of quantum's potential for exponentially greater computational power.

Such fundamental complexity both cries out for and resists succinct laymanization. When the New York Times asked 10 experts to explain quantum computing in the length of a tweet, some responses raised more questions than they answered:

Microsoft researcher David Reilly:

A quantum machine is a kind of analog calculator that computes by encoding information in the ephemeral waves that comprise light and matter at the nanoscale.

D-Wave Systems executive vice president Alan Baratz:

If we're honest, everything we currently know about quantum mechanics can't fully describe how a quantum computer works.

Quantum computing also cries out for a digestible metaphor. Quantum physicist Shohini Ghose, of Wilfrid Laurier University, has likened the difference between quantum and classical computing to light bulbs and candles: "The light bulb isn't just a better candle; it's something completely different."

Rebecca Krauthamer, CEO of quantum computing consultancy Quantum Thought, compares quantum computing to a crossroads that allows a traveler to take both paths. "If you're trying to solve a maze, you'd come to your first gate, and you can go either right or left," she said. "We have to choose one, but a quantum computer doesn't have to choose one. It can go right and left at the same time."

It can, in a sense, "look at these different options simultaneously and then instantly find the most optimal path," she said. "That's really powerful."

The most commonly used example of quantum superposition is Schrödinger's cat.

Despite its ubiquity, many in the QC field aren't so taken with Schrödinger's cat. "The more interesting fact about superposition, rather than the two-things-at-once point of focus, is the ability to look at quantum states in multiple ways, and ask it different questions," said Donohue. That is, rather than having to perform tasks sequentially, like a traditional computer, quantum computers can run vast numbers of parallel computations.

Part of Donohue's professional charge is clarifying quantum's nuances, so it's worth quoting him here at length:

"In superposition I can have state A and state B. I can ask my quantum state, 'Are you A or B?' And it will tell me, 'I'm A' or 'I'm B.' But I might have a superposition of A + B, in which case, when I ask it, 'Are you A or B?' it'll tell me A or B randomly.

"But the key of superposition is that I can also ask the question, 'Are you in the superposition state of A + B?' And then in that case, it'll tell me, 'Yes, I am the superposition state A + B.'

"But there's always going to be an opposite superposition. So if it's A + B, the opposite superposition is A - B."

That's about as simplified as we can get before trotting out equations. But the top-line takeaway is that superposition is what lets a quantum computer try all paths at once.
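Donohue's two questions amount to measuring the same state in two different bases, and the arithmetic behind his A/B example is short. A hedged sketch with real-valued amplitudes and invented helper names:

```python
import math

s = 1 / math.sqrt(2)

def prob(state, outcome):
    """Born rule for real amplitudes: |<outcome|state>|^2."""
    inner = outcome[0] * state[0] + outcome[1] * state[1]
    return inner ** 2

A, B = (1.0, 0.0), (0.0, 1.0)       # the "are you A or B?" question
plus, minus = (s, s), (s, -s)       # the "A + B or A - B?" question

state = plus                        # prepare the superposition A + B

# Asked "A or B?", the answer is random:
print(prob(state, A), prob(state, B))         # ~0.5 and ~0.5
# Asked "A + B or A - B?", the answer is definite:
print(prob(state, plus), prob(state, minus))  # ~1.0 and 0.0
```

The same state answers one question randomly and the other with certainty, which is exactly the "ask it different questions" point.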

That's not to say that such unprecedented computational heft will displace classical computers or render them moot. "One thing that we can really agree on in the community is that it won't solve every type of problem that we run into," said Krauthamer.

But quantum computing is particularly well suited for certain kinds of challenges. Those include probability problems, optimization (what is, say, the best possible travel route?) and the incredible challenge of molecular simulation for use cases like drug development and materials discovery.

The cocktail of hype and complexity has a way of fuzzing outsiders' conception of quantum computing, which makes this point worth underlining: quantum computers exist, and they are being used right now.

They are not, however, presently solving climate change, turbocharging financial forecasting or performing other similarly lofty tasks that get bandied about in reference to quantum computing's potential. QC may have commercial applications related to those challenges, which we'll explore further below, but that's well down the road.

Today, we're still in what's known as the NISQ era: Noisy, Intermediate-Scale Quantum. In a nutshell, quantum noise makes such computers incredibly difficult to stabilize. As such, NISQ computers can't be trusted to make decisions of major commercial consequence, which means they're currently used primarily for research and education.

"The technology just isn't quite there yet to provide a computational advantage over what could be done with other methods of computation at the moment," said Donohue. "Most [commercial] interest is from a long-term perspective. [Companies] are getting used to the technology so that when it does catch up (and that timeline is a subject of fierce debate) they're ready for it."

Also, it's fun to sit next to the cool kids. "Let's be frank. It's good PR for them, too," said Donohue.

But NISQ computers' R&D practicality is demonstrable, if decidedly small-scale. Donohue cites the molecular modeling of lithium hydride. That's a small enough molecule that it can also be simulated using a supercomputer, but the quantum simulation provides an important opportunity to check our answers after a classical-computer simulation. NISQs have also delivered some results for problems in high-energy particle physics, Donohue noted.

One breakthrough came in 2017, when researchers at IBM modeled beryllium hydride, the largest molecule simulated on a quantum computer to date. Another key step arrived in 2019, when IonQ researchers used quantum computing to go bigger still, by simulating a water molecule.

"These are generally still small problems that can be checked using classical simulation methods. But it's building toward things that will be difficult to check without actually building a large particle physics experiment, which can get very expensive," Donohue said.

And curious minds can get their hands dirty right now. Users can operate small-scale quantum processors via the cloud through IBM's online Q Experience and its open-source software Qiskit. Late last year, Microsoft and Amazon both announced similar platforms, dubbed Azure Quantum and Braket. "That's one of the cool things about quantum computing today," said Krauthamer. "We can all get on and play with it."

Related: Quantum Computing and the Gaming Industry

Quantum computing may still be in its fussy, uncooperative stage, but that hasn't stopped commercial interests from diving in.

IBM announced at the recent Consumer Electronics Show that its so-called Q Network had expanded to more than 100 companies and organizations. Partners now range from Delta Air Lines to Anthem health to Daimler AG, which owns Mercedes-Benz.

Some of those partnerships hinge on quantum computing's aforementioned promise in terms of molecular simulation. Daimler, for instance, is hoping the technology will one day yield a way to produce better batteries for electric vehicles.

Elsewhere, partnerships between quantum computing startups and leading companies in the pharmaceutical industry, like those established between 1QBit and Biogen, and ProteinQure and AstraZeneca, point to quantum molecular modeling's drug-discovery promise, distant though it remains. (Today, drug development is done through expensive, relatively low-yield trial and error.)

Researchers would need millions of qubits to compute the chemical properties of a novel substance, noted theoretical physicist Sabine Hossenfelder in the Guardian last year. But the conceptual underpinning, at least, is there. "A quantum computer knows quantum mechanics already, so I can essentially program in how another quantum system would work and use that to echo the other one," explained Donohue.

There's also hope that large-scale quantum computers will help accelerate AI, and vice versa, although experts disagree on this point. "The reason there's controversy is, things have to be redesigned in a quantum world," said Krauthamer, who considers herself an AI-quantum optimist. "We can't just translate algorithms from regular computers to quantum computers, because the rules are completely different at the most elemental level."

Some believe quantum computers can help combat climate change by improving carbon capture. Jeremy O'Brien, CEO of Palo Alto-based PsiQuantum, wrote last year that quantum simulation of larger molecules, if achieved, could help build a catalyst for scrubbing carbon dioxide directly from the atmosphere.

Long-term applications tend to dominate headlines, but they also lead us back to quantum computing's defining hurdle, and the reason coverage remains littered with terms like "potential" and "promise": error correction.

Qubits, it turns out, are higher maintenance than even the most meltdown-prone rock star. Any number of simple actions or variables can send error-prone qubits into decoherence, the loss of a quantum state (mainly that all-important superposition). Things that can cause a quantum computer to crash include measuring qubits and running operations; in other words, using it. Even small vibrations and temperature shifts will cause qubits to decohere, too.
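Decoherence has a compact mathematical picture: noise steadily shrinks the off-diagonal terms of a qubit's density matrix, which is precisely the superposition draining away. A toy dephasing model (illustrative only; real devices are characterized with far more machinery):

```python
# Represent a qubit's density matrix as ((r00, r01), (r10, r11)).
# The off-diagonal entries r01/r10 are the "coherence" that makes
# superposition useful; dephasing noise multiplies them toward zero.
def dephase(rho, p):
    """One noise event of strength p (0 = none, 1 = total dephasing)."""
    (r00, r01), (r10, r11) = rho
    return ((r00, (1 - p) * r01), ((1 - p) * r10, r11))

plus_state = ((0.5, 0.5), (0.5, 0.5))  # the superposition (|0> + |1>)/sqrt(2)

rho = plus_state
for _ in range(5):          # repeated small disturbances compound
    rho = dephase(rho, 0.3)

print(rho[0][1])  # ~0.084, down from 0.5: most of the superposition is gone
```

The 0/1 populations survive the noise untouched; what is lost is the interference the computation depends on.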

That's why quantum computers are kept isolated, and the ones that run on superconducting circuits (the most prominent method, favored by Google and IBM) have to be kept at near absolute zero (a cool -460 degrees Fahrenheit).

The challenge is twofold, according to Jonathan Carter, a scientist at Berkeley Quantum. First, individual physical qubits need to have better fidelity, which could conceivably happen through better engineering, optimal circuit layout and the right combination of components. Second, we have to arrange them to form logical qubits.

Estimates range from hundreds to thousands to tens of thousands of physical qubits required to form one fault-tolerant qubit. "I think it's safe to say that none of the technology we have at the moment could scale out to those levels," Carter said.
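The reasoning behind those physical-to-logical ratios can be illustrated with the simplest redundancy scheme, a classical 3-bit repetition code with majority vote. Real quantum error correction needs far more machinery, so treat this purely as an analogy:

```python
# If each physical bit flips independently with probability p, a majority
# vote over three copies fails only when two or three of them flip.
def logical_error_rate(p):
    return 3 * p ** 2 * (1 - p) + p ** 3

for p in (0.1, 0.01, 0.001):
    print(p, "->", logical_error_rate(p))
# For small p the encoded rate scales like 3p^2: redundancy helps only once
# the physical error rate is below a threshold, and driving the logical rate
# low enough demands many physical qubits per logical one.
```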

From there, researchers would also have to build ever-more complex systems to handle the increase in qubit fidelity and numbers. So how long will it take until hardware-makers actually achieve the necessary error correction to make quantum computers commercially viable?

"Some of these other barriers make it hard to say yes to a five- or 10-year timeline," Carter said.

Donohue invokes and rejects the same figure. "Even the optimist wouldn't say it's going to happen in the next five to 10 years," he said. At the same time, some small optimization problems, specifically in terms of random number generation, could happen very soon.

"We've already seen some useful things in that regard," he said.

For people like Michael Biercuk, founder of quantum-engineering software company Q-CTRL, the only technical commercial milestone that matters now is quantum advantage, or, as he uses the term, when a quantum computer provides some time or cost advantage over a classical computer. Count him among the optimists: he foresees a five-to-eight-year time scale for achieving such a goal.

Another open question: Which method of quantum computing will become standard? While superconducting has borne the most fruit so far, researchers are exploring alternative methods involving trapped ions, quantum annealing and so-called topological qubits. In Donohue's view, it's not necessarily a question of which technology is better so much as one of finding the best approach for different applications. For instance, superconducting chips naturally dovetail with the magnetic field technology that underpins neuroimaging.

The challenges that quantum computing faces, however, aren't strictly hardware-related. "The magic of quantum computing resides in algorithmic advances, not speed," Greg Kuperberg, a mathematician at the University of California, Davis, is quick to underscore.

"If you come up with a new algorithm, for a question that it fits, things can be exponentially faster," he said, using "exponentially" literally, not metaphorically. (There are currently 63 algorithms listed and 420 papers cited at Quantum Algorithm Zoo, an online catalog of quantum algorithms compiled by Microsoft quantum researcher Stephen Jordan.)
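"Exponentially faster" is a claim about how the work grows with problem size, which raw query counts make vivid. The arithmetic below is illustrative only: Grover's search algorithm gives a quadratic speedup for unstructured search, while algorithms like Shor's deliver the super-polynomial gains Kuperberg has in mind:

```python
import math

# Unstructured search over N items: ~N classical queries in the worst case,
# but only about (pi/4) * sqrt(N) queries with Grover's algorithm.
for n_bits in (20, 40):
    N = 2 ** n_bits
    grover = math.ceil(math.pi / 4 * math.sqrt(N))
    print(f"N = 2^{n_bits}: {N:,} classical queries vs {grover:,} with Grover")
```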

Another roadblock, according to Krauthamer, is a general lack of expertise. "There's just not enough people working at the software level or at the algorithmic level in the field," she said. Tech entrepreneur Jack Hidary's team set out to count the number of people working in quantum computing and found only about 800 to 850, according to Krauthamer. "That's a bigger problem to focus on, even more than the hardware," she said. "Because the people will bring that innovation."

While the community underscores the importance of outreach, the term "quantum supremacy" has itself come under fire. "In our view, 'supremacy' has overtones of violence, neocolonialism and racism through its association with 'white supremacy,'" 13 researchers wrote in Nature late last year. The letter has kickstarted an ongoing conversation among researchers and academics.

But the field's attempt to attract and expand also comes at a time of uncertainty in terms of broader information-sharing.

Quantum computing research is sometimes framed in the same adversarial terms as conversations about trade and other emerging tech: that is, U.S. versus China. An oft-cited statistic from patent analytics consultancy Patinformatics states that, in 2018, China filed 492 patents related to quantum technology, compared to just 248 in the United States. That same year, the think tank Center for a New American Security published a paper that warned, "China is positioning itself as a powerhouse in quantum science." By the end of 2018, the U.S. had passed and signed into law the National Quantum Initiative Act. Many in the field believe legislators were compelled by China's perceived growing advantage.

The initiative has spurred domestic research (the Department of Energy recently announced up to $625 million in funding to establish up to five quantum information research centers), but the geopolitical tensions give some in the quantum computing community pause, namely for fear of collaboration-chilling regulation. "As quantum technology has become prominent in the media, among other places, there has been a desire suddenly among governments to clamp down," said Biercuk, who has warned of poorly crafted and nationalistic export controls in the past.

"What they don't understand often is that quantum technology, and quantum information in particular, really are deep research activities where open transfer of scientific knowledge is essential," he added.

The National Science Foundation, one of the government agencies given additional funding and directives under the act, generally has a positive track record in terms of avoiding draconian security controls, Kuperberg said. Even still, the antagonistic framing tends to obscure the on-the-ground facts. "The truth behind the scenes is that, yes, China would like to be doing good research in quantum computing, but a lot of what they're doing is just scrambling for any kind of output," he said.

Indeed, the majority of the aforementioned Chinese patents are for quantum tech, but not quantum computing tech, which is where the real promise lies.

The Department of Energy has an internal list of sensitive technologies that it could potentially restrict DOE researchers from sharing with counterparts in China, Russia, Iran and North Korea. It has not yet implemented that curtailment, however, DOE Office of Science director Chris Fall told the House Committee on Science, Space and Technology, and clarified to Science, in January.

Along with such multi-agency government spending, there's been a tsunami of venture capital directed toward commercial quantum-computing interests in recent years. A Nature analysis found that, in 2017 and 2018, private funding in the industry hit at least $450 million.

Still, funding concerns linger in some corners. Even as Google's quantum supremacy proof of concept has helped heighten excitement among enterprise investors, Biercuk has also flagged the beginnings of a contraction in investment in the sector.

Even as exceptional cases dominate headlines (he points to PsiQuantum's recent $230 million venture windfall), there are lesser-reported signs of struggle. "I know of probably four or five smaller shops that started and closed within about 24 months; others were absorbed by larger organizations because they struggled to raise," he said.

At the same time, signs of at least moderate investor agitation and internal turmoil have emerged. The Wall Street Journal reported in January that much-buzzed quantum computing startup Rigetti Computing saw its CTO and COO, among other staff, depart amid concerns that the company's tech wouldn't be commercially viable in a reasonable time frame.

Investor expectations had become inflated in some instances, according to experts. "Some very good teams have faced more investor skepticism than I think has been justified ... This is not six months to mobile application development," Biercuk said.

In Kuperberg's view, part of the problem is that venture capital and quantum computing operate on completely different timelines. "Putting venture capital into this in the hope that some profitable thing would arise quickly, that doesn't seem very natural to me in the first place," he said, adding the caveat that he considers the majority of QC money prestige investment rather than strictly ROI-focused.

But some startups themselves may have had some hand in driving financiers' over-optimism. "I won't name names, but there definitely were some people giving investors outsize expectations, especially when people started coming up with some pieces of hardware, saying that advantages were right around the corner," said Donohue. "That very much rubbed the academic community the wrong way."

Scott Aaronson recently called out two prominent startups for what he described as a sort of calculated equivocation. He wrote of a pattern in which a party will speak of a quantum algorithm's promise without asking "whether there are any indications that your approach will ever be able to exploit interference of amplitudes to outperform the best classical algorithm."

And, mea culpa, some blame for the hype surely lies with tech media. "Trying to crack an area for a lay audience means you inevitably sacrifice some scientific precision," said Biercuk. (Thanks for understanding.)

It's all led to a willingness to serve up a glass of cold water now and again. As Juani Bermejo-Vega, a physicist and researcher at the University of Granada in Spain, recently told Wired, the machine on which Google ran its milestone proof of concept is mostly still a useless quantum computer for practical purposes.

Bermejo-Vega's quote came in a story about the emergence of a Twitter account called Quantum Bullshit Detector, which decrees, @artdecider-like, a "bullshit" or "not bullshit" quote tweet of various quantum claims. The fact that leading quantum researchers are among the account's 9,000-plus followers would seem to indicate that some weariness exists among the ranks.

But even with the various challenges, cautious optimism seems to characterize much of the industry. "For good and ill, I'm vocal about maintaining scientific and technical integrity while also being a true optimist about the field, sharing the excitement that I have and exciting others about what's coming," Biercuk said.

This year could prove to be formative in the quest to use quantum computers to solve real-world problems, said Krauthamer. "Whenever I talk to people about quantum computing, without fail, they come away really excited. Even the biggest skeptics who say, 'Oh no, they're not real. It's not going to happen for a long time.'"

Related: 20 Quantum Computing Companies to Know

Read the original post:

What Is Quantum Computing and How Does it Work? - Built In

Why physicists are determined to prove Galileo and Einstein wrong – Livescience.com

In the 17th century, famed astronomer and physicist Galileo Galilei is said to have climbed to the top of the Tower of Pisa and dropped two different-sized cannonballs. He was trying to demonstrate his theory, which Albert Einstein later updated and folded into his theory of relativity, that objects fall at the same rate regardless of their size.

Now, after spending two years dropping two objects of different mass into a free fall in a satellite, a group of scientists has concluded that Galileo and Einstein were right: The objects fell at a rate that was within two-trillionths of a percent of each other, according to a new study.

This effect has been confirmed time and time again, as has Einstein's theory of relativity, yet scientists still aren't convinced that there isn't some kind of exception somewhere. "Scientists have always had a difficult time actually accepting that nature should behave that way," said senior author Peter Wolf, research director at the French National Center for Scientific Research's Paris Observatory.

Related: 8 Ways You Can See Einstein's Theory of Relativity in Real Life

That's because there are still inconsistencies in scientists' understanding of the universe.

"Quantum mechanics and general relativity, which are the two basic theories all of physics is built on today ... are still not unified," Wolf told Live Science. What's more, although scientific theory says the universe is made up mostly of dark matter and dark energy, experiments have failed to detect these mysterious substances.

"So, if we live in a world where there's dark matter around that we can't see, that might have an influence on the motion of [objects]," Wolf said. That influence would be "a very tiny one," but it would be there nonetheless. So, if scientists see test objects fall at different rates, that "might be an indication that we're actually looking at the effect of dark matter," he added.

Wolf and an international group of researchers (including scientists from France's National Center for Space Studies and the European Space Agency) set out to test Einstein and Galileo's foundational idea that, no matter where you do an experiment, no matter how you orient it and what velocity you're moving at through space, the objects will fall at the same rate.

The researchers placed two cylindrical objects (one made of titanium, the other platinum) one inside the other and loaded them onto a satellite. The orbiting satellite was naturally "falling," since gravity was essentially the only force acting on it, Wolf said. They suspended the cylinders within an electromagnetic field and dropped the objects for 100 to 200 hours at a time.

From the forces the researchers needed to apply to keep the cylinders in place inside the satellite, the team deduced how the cylinders fell and the rate at which they fell, Wolf said.

And, sure enough, the team found that the two objects fell at almost exactly the same rate, within two-trillionths of a percent of each other. That suggested Galileo was correct. What's more, they dropped the objects at different times during the two-year experiment and got the same result, suggesting Einstein's theory of relativity was also correct.
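Results like this are conventionally quoted as an Eötvös ratio, the fractional difference between the two bodies' free-fall accelerations. A hedged back-of-the-envelope sketch ("two-trillionths of a percent" corresponds to about 2 x 10^-14; the numbers below are illustrative, not the team's data):

```python
# Equivalence-principle results are conventionally quoted as an Eotvos ratio:
# the fractional difference between the free-fall accelerations of the two
# test bodies. Everything below is illustrative arithmetic, not mission data.
def eotvos(a1, a2):
    return 2 * abs(a1 - a2) / (a1 + a2)

g_orbit = 7.9   # m/s^2, rough gravitational acceleration at satellite altitude
bound = 2e-14   # "two-trillionths of a percent" = 2e-12 * 1e-2

a_titanium = g_orbit
a_platinum = g_orbit * (1 - bound)  # a difference right at the reported bound
print(eotvos(a_titanium, a_platinum))  # ~2e-14
```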

Their test was an order of magnitude more sensitive than previous tests. Even so, the researchers have published only 10% of the data from the experiment, and they hope to do further analysis of the rest.

Not satisfied with this mind-boggling level of precision, scientists have put together several new proposals to do similar experiments with two orders of magnitude greater sensitivity, Wolf said. Also, some physicists want to conduct similar experiments at the tiniest scale, with individual atoms of different types, such as rubidium and potassium, he added.

The findings were published Dec. 2 in the journal Physical Review Letters.

Originally published on Live Science.

Read more:

Why physicists are determined to prove Galileo and Einstein wrong - Livescience.com

Fibre on steroids: Wits student uses quantum physics for massive improvements – MyBroadband

A research team at Wits University has discovered a way to improve data transmission across fibre networks.

The team comprised a PhD student at the university, as well as several colleagues from Wits and Huazhong University of Science and Technology in Wuhan, China.

This research uses quantum physics to improve data security across fibre networks without the need to replace legacy fibre infrastructure.

"Our team showed that multiple patterns of light are accessible through conventional optical fibre that can only support a single pattern," said Wits PhD student Isaac Nape.

"We achieved this quantum trick by engineering the entanglement of two photons. We sent the polarised photon down the fibre line and accessed many other patterns with the other photon."

Entanglement refers to particles interacting in such a way that the quantum state of each particle cannot be described without reference to the state of the others, even if the particles are separated by large distances.

In this scenario, the researchers manipulated the qualities of the photon on the inside of the fibre line by changing the qualities of its entangled counterpart in free space.

"In essence, the research introduces the concept of communicating across legacy fibre networks with multi-dimensional entangled states, bringing together the benefits of existing quantum communication with polarised photons and that of high-dimensional communication using patterns of light," said team leader Wits Professor Andrew Forbes.

Quantum entanglement has been explored extensively over the past few decades, with the most notable success story being increased communications security through Quantum Key Distribution (QKD).

This method uses qubits (2D quantum states) to transfer a limited amount of information across fibre links, using polarisation as a degree of freedom.
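The polarisation-based scheme referred to here is typically the BB84 protocol, whose key-sifting logic fits in a few lines. This is a toy, noise-free, eavesdropper-free simulation with invented names, not any real QKD implementation:

```python
import random

random.seed(1)  # deterministic toy run
n = 32
# Alice encodes random bits in randomly chosen polarisation bases:
# '+' (rectilinear) or 'x' (diagonal).
alice_bits = [random.randint(0, 1) for _ in range(n)]
alice_bases = [random.choice("+x") for _ in range(n)]
bob_bases = [random.choice("+x") for _ in range(n)]

def measure(bit, prep_basis, meas_basis):
    # Matching bases reproduce the bit; mismatched bases give a random result.
    return bit if prep_basis == meas_basis else random.randint(0, 1)

bob_bits = [measure(b, pa, pb)
            for b, pa, pb in zip(alice_bits, alice_bases, bob_bases)]

# Alice and Bob publicly compare bases (never bits) and keep the matches.
key_a = [b for b, pa, pb in zip(alice_bits, alice_bases, bob_bases) if pa == pb]
key_b = [b for b, pa, pb in zip(bob_bits, alice_bases, bob_bases) if pa == pb]
print(len(key_a), "sifted key bits; keys match:", key_a == key_b)
```

An eavesdropper measuring in random bases would corrupt roughly a quarter of the sifted bits, which is how the protocol reveals interception.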

Another degree of freedom is the spatial pattern of light. While this has the benefit of high-dimensional encoding, it requires a custom optical fibre cable, making it unsuitable for existing networks.

"Our team found a new way to balance these two extremes, by combining polarisation qubits with high-dimensional spatial modes to create multi-dimensional hybrid quantum states," said Nape.

"The trick was to twist the one photon in polarisation and twist the other in pattern, forming 'spirally' light that is entangled in two degrees of freedom," said Forbes.

Since the polarisation-entangled photon has only one pattern, it could be sent down the long-distance single-mode fibre, while the twisted-light photon could be measured without the fibre, accessing multi-dimensional twisted patterns in free space.

These twists carry orbital angular momentum, a promising candidate for encoding information.

Go here to see the original:

Fibre on steroids Wits student uses quantum physics for massive improvements - MyBroadband

Stephen Hawking thought black holes were ‘hairy’. New study suggests he was right. – Big Think

What's it like on the outer edges of a black hole?

This mysterious area, known as the event horizon, is commonly thought of as a point of no return, past which nothing can escape. According to Einstein's theory of general relativity, black holes have smooth, neatly defined event horizons. On the outer side, physical information might be able to escape the black hole's gravitational pull, but once it crosses the event horizon, it's consumed.

"This was scientists' understanding for a long time," Niayesh Afshordi, a physics and astronomy professor at the University of Waterloo, told Daily Galaxy. The American theoretical physicist John Wheeler summed it up by saying: "Black holes have no hair." But then, as Afshordi noted, Stephen Hawking "used quantum mechanics to predict that quantum particles will slowly leak out of black holes, which we now call Hawking radiation."

ESO, ESA/Hubble, M. Kornmesser

In the 1970s, Stephen Hawking famously proposed that black holes aren't truly "black." In simplified terms, the theoretical physicist reasoned that, due to quantum mechanics, black holes actually emit tiny amounts of black-body radiation, and therefore have a non-zero temperature. So, contrary to Einstein's view that black holes are neatly defined and are not surrounded by loose materials, Hawking radiation suggests that black holes are actually surrounded by quantum "fuzz" that consists of particles that escape the gravitational pull.

"If the quantum fuzz responsible for Hawking radiation does exist around black holes, gravitational waves could bounce off of it, which would create smaller gravitational wave signals following the main gravitational collision event, similar to repeating echoes," Afshordi said.

Credit: NASA's Goddard Space Flight Center/Jeremy Schnittman

A new study from Afshordi and co-author Jahed Abedi could provide evidence of these signals, called gravitational wave "echoes." Their analysis examined data collected by the LIGO and Virgo gravitational wave detectors, which made the first direct detection of gravitational waves in 2015; the signal studied here came from the collision of two distant neutron stars. The results, at least according to the researchers' interpretation, showed relatively small "echo" waves following the initial collision event.

"The time delay we expect (and observe) for our echoes ... can only be explained if some quantum structure sits just outside their event horizons," Afshordi told Live Science.

Afshordi et al.

Scientists have long studied black holes in an effort to better understand fundamental physical laws of the universe, especially since the introduction of Hawking radiation. The idea highlighted the extent to which general relativity and quantum mechanics conflict with each other.

Everywhere, even in a vacuum like an event horizon, pairs of so-called "virtual particles" briefly pop in and out of existence. One particle in the pair has positive mass, the other negative. Hawking imagined a scenario in which a pair of particles emerged near the event horizon, and the positive particle had just enough energy to escape the black hole, while the negative one fell in.

Over time, this process would lead black holes to evaporate and vanish, given that the particle absorbed had a negative mass. It would also lead to some interesting paradoxes.

For example, quantum mechanics predicts that particles would be able to escape a black hole. This idea suggests that black holes eventually die, which would theoretically mean that the physical information within a black hole also dies. That violates a key idea in quantum mechanics: physical information can't be destroyed.

The exact nature of black holes remains a mystery. If confirmed, the recent discovery could help scientists better fuse these two models of the universe. Still, some researchers are skeptical of the recent findings.

"It is not the first claim of this nature coming from this group," Maximiliano Isi, an astrophysicist at MIT, told Live Science. "Unfortunately, other groups have been unable to reproduce their results, and not for lack of trying."

Isi noted that other papers examined the same data but failed to find echoes. Afshordi told Daily Galaxy:

"Our results are still tentative because there is a very small chance that what we see is due to random noise in the detectors, but this chance becomes less likely as we find more examples. Now that scientists know what we're looking for, we can look for more examples, and have a much more robust confirmation of these signals. Such a confirmation would be the first direct probe of the quantum structure of space-time."


Original post:

Stephen Hawking thought black holes were 'hairy'. New study suggests he was right. - Big Think

Viewpoint: A New Spin on Thermometers for Extremely Low Temperatures – Physics

January 27, 2020 • Physics 13, 7

The temperature of an ultracold gas of rubidium atoms is measured precisely using internal quantum states of a single cesium atom.

Temperature is one of the most widely measured physical quantities. As a notion, it is as old as civilization itself. Yet, scientifically, the meaning and conceptual generality of temperature only fully emerged after intense efforts were made to precisely measure it starting from the 18th century on [1]. That work culminated in the discovery of the absolute temperature scale and revealed the fundamental status temperature has in thermodynamics. Today, temperature measurements, or thermometry, are pushing this foundation to new extremes by probing smaller energies and smaller length scales, where quantum mechanics plays a dominant role. Advances in such measurements have forced a reassessment of basic thermodynamic quantities [2]. They also hold promise for stimulating novel technologies with so-called quantum-enhanced performance [3]. Now, in a new study, Artur Widera from the Technical University of Kaiserslautern, Germany, and colleagues accurately measure the temperature of an ultracold rubidium (Rb) gas using discrete quantized spin states of a cesium (Cs) atom immersed in the gas [4]. This demonstration of quantum-probe thermometry for an ultracold gas promises more accurate measurements of these hard-to-reach regimes.

Ideally, the temperature of a physical system is measurable without detailed knowledge of the system's inner workings. To achieve that goal, scientists use a probe: another system for which they thoroughly understand the temperature dependence of its physical properties. If the probe is put into contact with the system for a sufficient time, then an energy exchange will occur and cause the probe to equilibrate to the system's temperature. This equilibration allows inference of the system's temperature from the concomitant change in some calibrated property of the probe, such as the column height of a liquid in a capillary tube, the electrical resistance of a conducting element, or the refractive index of a medium.

The frontier of thermometry is thermometer miniaturization with the aim of measuring the difficult-to-access temperatures of very small and cold systems. This goal presents two challenges. First, the probe needs to be much smaller than the system being measured, ensuring that it thermalizes with minimal disturbance to the system. The ultimate minimum size for the probe is a single atom, where information about the system's temperature is mapped onto the atom's quantum state. Second, the characteristic energy scale of the probe needs to be controllable so that it can be tuned to the vicinity of the system's thermal energy, ensuring that the measurement is sensitive. Both these challenges are met by Widera and his co-workers in their new experiment [4].

The team's experimental system consisted of a trapped cloud of just under 10,000 Rb atoms. The atoms were cooled to between 200 and 1000 nK, a regime in which the gas behaves like a classical gas. The temperature of such a gas can be accurately determined by time-of-flight measurements: the trap is switched off, the cloud is left to expand for some period of time, and the velocity distribution of the imaged atoms is fitted. The system thus serves as a verifiable testbed for an ultracold thermometer.
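The width-fitting step can be sketched with the standard free-expansion relation for a classical gas. This is a minimal illustration; the isotope choice and all numbers below are my assumptions, not values from the study.

```python
import numpy as np

K_B = 1.380649e-23                  # Boltzmann constant, J/K
M_RB = 86.909 * 1.66053906660e-27   # mass of Rb-87 (assumed isotope), kg

def temperature_from_tof(sigma_0, sigma_t, t_expand):
    """Infer temperature from the free expansion of a thermal cloud.

    For a classical gas the cloud width grows as
        sigma(t)^2 = sigma_0^2 + (k_B * T / m) * t^2,
    so widths measured before and after expansion give the velocity
    spread and hence the temperature T = m * sigma_v^2 / k_B.
    """
    sigma_v_sq = (sigma_t**2 - sigma_0**2) / t_expand**2
    return M_RB * sigma_v_sq / K_B

# Illustrative numbers: a 10-micron cloud expanding to ~90 microns
# in 20 ms corresponds to a temperature of roughly 200 nK.
T = temperature_from_tof(10e-6, 90e-6, 20e-3)
print(f"T = {T * 1e9:.0f} nK")
```

The same fit applied to the real imaged cloud is what gives the independent temperature reference against which the single-atom probe is checked.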

For the probe, Widera and colleagues turned to the Cs atom, whose internal atomic structure is well characterized (Fig. 1). Cesium possesses seven accessible ground-state hyperfine energy levels for its outer electron, labeled by an angular momentum projection m_Cs = {−3, −2, −1, 0, 1, 2, 3}. Normally these levels all have identical energies. However, applying a weak magnetic field B to the atoms splits the levels into a ladder whose steps have a tuneable energy gap of ΔE/2. The Cs atoms thus behave like effective quantum-mechanical spins. Rubidium atoms also possess three accessible states, labeled m_Rb = {−1, 0, 1}, which turn out to have an energy gap of ΔE in the same B field.

To use the Cs atoms to determine the Rb cloud's temperature, the team exploited so-called spin-exchange collisions, where quanta of angular momentum are transferred between the Cs and Rb atoms. In one type of collision, known as an endoergic collision, the Rb atom is pushed into a higher-energy state and the Cs atom into a lower-energy state (Fig. 2). This process requires ΔE/2 of additional energy, which is provided by the motion of the Rb atoms. The occurrence of these collisions depends on the availability of kinetic energy, and therefore on the temperature, of the Rb cloud. The spread in the distribution of the Cs atoms' spin-state populations induced by these collisions thus encodes information about the gas temperature.
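To see qualitatively why the population spread tracks temperature, consider a toy model (my assumption, not the authors' microscopic rate model) in which collisions drive the Cs spin ladder toward a thermal Boltzmann distribution:

```python
import numpy as np

K_B = 1.380649e-23  # Boltzmann constant, J/K

def ladder_populations(gap, temperature, n_levels=7):
    """Boltzmann populations of a 7-level spin ladder with step `gap`."""
    energies = gap * np.arange(n_levels)
    weights = np.exp(-energies / (K_B * temperature))
    return weights / weights.sum()

def energy_spread(gap, temperature):
    """Standard deviation of the probe energy: the thermometer signal."""
    p = ladder_populations(gap, temperature)
    e = gap * np.arange(len(p))
    mean = (p * e).sum()
    return np.sqrt((p * (e - mean) ** 2).sum())

# A ladder step comparable to the thermal energy of a few-hundred-nK gas
# (an illustrative choice, not the experiment's actual Zeeman splitting):
gap = K_B * 150e-9
# The spread grows monotonically over the 200-1000 nK range studied,
# which is what makes the probe usable as a thermometer.
spreads = [energy_spread(gap, T) for T in (200e-9, 500e-9, 1000e-9)]
assert spreads[0] < spreads[1] < spreads[2]
```

A hotter gas powers more endoergic collisions, pushing the probe's population up the ladder and widening its energy distribution; colder gases leave it concentrated near the bottom.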

The team measured the populations of a handful of Cs atoms after 3 s, by which time the system had reached a steady state. They observed that the steady-state fluctuations in the Cs atoms' energies were linearly related to the temperature of the Rb cloud, as independently determined by time-of-flight measurements. The same relationship was found for different applied magnetic fields, different densities of the Rb cloud, and different initial states. This robust result thus convincingly demonstrates that thermometry can be performed using a single-atom quantum probe without the need for detailed model fitting. However, such long interaction times are not always available. To overcome this problem, Widera's team fitted the experimental data describing the evolution of the Cs probe populations before steady state to a specific microscopic rate model. In this way, they could extract the temperature of the system after just 350 ms of interaction. A theoretical analysis of this approach indicates that only three collisions are needed to obtain a temperature measurement. Furthermore, these measurements are nearly an order of magnitude more sensitive than those performed in the steady state.

This experiment is a fascinating demonstration of a rapid quantum-probe temperature measurement, where the information extracted is maximized and the perturbation to the system is minimized. Future work will undoubtedly exploit the quantumness of the probe beyond spin populations [5] and also utilize universal nonequilibrium properties to avoid the specific model fitting needed here for such measurements [6].

Quantum-probe thermometry has many advantages over conventional time-of-flight measurements, since it is nondestructive, minimally invasive, and spatially localized. The most immediate application is to cold-atom quantum simulations [7], notably strongly interacting fermionic atoms trapped in optical lattices. Such simulation experiments aim to investigate important model systems for which we lack a complete understanding of the physics. This deficiency of knowledge makes it notoriously difficult to directly measure their temperature. Consequently, quantum-probe thermometry will likely be a crucial ingredient for quantum simulations that aim to resolve longstanding questions about these model systems, such as whether they exhibit high-temperature superconductivity [8].

This research is published in Physical Review X.

Stephen R. Clark is a Senior Lecturer in Theoretical Physics at the University of Bristol. He completed his doctoral studies at the University of Oxford in 2007. He has subsequently held research fellowships at the Centre for Quantum Technologies in the National University of Singapore and at Keble College, Oxford, as well as a senior scientist post at the Clarendon Laboratory in the University of Oxford. Before joining Bristol in 2018, he was a Lecturer in physics at the University of Bath. His research focuses on the dynamical properties of driven strongly correlated many-body systems ranging from cold atoms to high-temperature superconductors.


Viewpoint: A New Spin on Thermometers for Extremely Low Temperatures - Physics

The Goop Lab’s ‘energy healing’ is no better than placebo, research proves – CNET

In a recurring scene in The Goop Lab, Paltrow and her chief content officer, Elise Loehnen, sit and talk with people who are well known in the alternative wellness world.

Imagine that you could get a full release of all your pent-up emotions and relief from all your physical aches and pains, courtesy of a 60-minute session with an energy healer who flaps his hands four to six feet above your body in the name of quantum physics.

This is what goes down in the fifth, and perhaps the most outrageous, episode of Gwyneth Paltrow's The Goop Lab on Netflix. The docuseries features alternative wellness trends often covered on Paltrow's goop.com and is available to stream now.

Though designed to "entertain and inform" (as per the disclaimer), the chiropractor turned "somatic energy practitioner" in this episode certainly makes it sound like everyone should give up their primary care provider for an apparent force-field manipulator.

Is there any promise? Is it all quackery? We investigate, but you probably (hopefully) already know the answer.

Energy healer John Amaral waves his hands like magic wands over three Goop employees (and random guest star, dancer Julianne Hough) to whisk away their emotional traumas and physical aches.

Paltrow asks Amaral why he hasn't, until now, shown his practice on-screen. Amaral gives an, uh, interesting response: "It just looks wacky ... I've been hesitant to show it just because it can look strange. But I think it's time for the world to see." The world sees three Goop-ers and Hough all writhe, wiggle and whimper on the tables. It's as if they're actually being prodded and pulled, without ever being touched.

Hough screams and contorts her body into positions that only a professional dancer could accomplish, and Elise Loehnen, Goop's chief content officer, lets out long, monotone moans that left me mildly uncomfortable.

Only one of the Goop-ers -- Brian, a software architect and self-proclaimed skeptic -- remains relatively still throughout the group treatment. This, to me, strengthens the notion that energy healing is all placebo.

After the fact, Loehnen says the experience felt like an exorcism. Even Paltrow gives a subtle nod to the woo-woo effect of all this, asking Loehnen, "Could you get any Goop-ier?"

I would love to know what gets Goop-ier than this.

Energy healing is a type of alternative wellness therapy that involves manipulating the flow of energy in and around your body. One popular form of energy healing, called reiki, aims to remove "blockages" of energy that have built up where physical and emotional pain have occurred.

For example, people who have chronic headaches might have an energy healer work on the supposed energy fields around their head and neck. A runner who's struggled with repetitive stress injuries in the past might have an energy healer focus on the ankles, knees and hips.

Energy healing is (or should be) performed by a trained practitioner. You lie on a table while the practitioner uses their hands to manipulate the energy fields around your body. The practitioner may not touch you at all or may lightly touch certain areas of your body, such as your neck, to feel and reroute energy.

According to Amaral, "If you just change the frequency of vibration of the body itself, it changes the way the cells regrow, it changes the way the sensory system processes." Amaral admits this is just a hypothesis, but the Goop-ers seem to take it as fact nonetheless.

A 2017 review of studies in the Journal of Alternative and Complementary Medicine states that it is currently impossible to conclude whether or not energy healing is effective for any conditions. The current body of research is too limited and much of it is too flawed. A Cochrane review looking specifically at the effects of reiki on anxiety and depression seconds that conclusion.

A 2019 paper in Global Advances in Health and Medicine, however, gives "energy medicine" some credit, saying that while this type of therapy cannot and should not be used singularly, it can offer an additional element of healing for some people and conditions.

The paper notes that "The healing of a patient must include more than the biology and chemistry of their physical body; by necessity, it must include the mental, emotional, and spiritual (energetic) aspects."

I suppose that since chiropractors were once (not too long ago) considered quacks, there is room for open-mindedness. But according to the International Center for Reiki Training, energy healing has been around for at least 100 years -- usually a treatment can be proven or debunked in less time than that, yet many questions still remain about energy healing.

It is worth noting that placebo effects aren't useless: Even Harvard University acknowledges placebos as effective feel-good tools, helping people overcome fatigue, chronic pain and stress.

For example, one study found that a sham version of reiki (performed by an unlicensed person) was just as effective as the real thing in helping chemotherapy patients feel more comfortable. This suggests that energy healing works as a placebo, but even so, it was helpful for these patients.

Still, placebos can't cure you.


According to science writer Dana G. Smith, this episode "is everything that is wrong with Goop," and it looks like other experts agree with her.

Chris Lee, a physicist and writer at Ars Technica, crushes Amaral's allusions to quantum physics and the famed double-slit experiment, saying "Quantum mechanics does not provide you with the mental power to balance energies, find ley lines or cure syphilis. It does, unfortunately, seem to provide buzzwords to those prone to prey on the rich and gullible."

I am far from an expert on quantum physics and the vibrational frequency of body cells (whatever that means), but this episode rubbed me the wrong way, largely because it features a beautiful, successful celebrity partaking in what is currently an utterly unproven therapy.

Julianne Hough is a role model to many women who, after watching Hough writhe and wail on a table, might feel the need to do the same thing. I'm a big fan of Hough, but her part in this episode gave me sleazy celebrity endorsement vibes.

Energy healing, reiki or whatever you want to call it, falls comfortably into the "if this makes you feel better, go ahead" category. Energy healers don't actually touch you, and if they do it's just the graze of a fingertip, so the practice is harmless from a physical standpoint.

Theoretically, there's nothing wrong with seeing an energy healer if you can afford it and it makes you feel good. But the controversy comes from the fact that people who need real, proven psychological or physical treatments might ignore that need in favor of this trendy alternative.

Amaral fails to discuss when conventional medical or psychological treatment is the best option, only putting forth his method as the ultimate healing tactic. Amaral cannot mend a broken bone with his energy, nor can he remedy the neurotransmitter imbalances that cause severe depression.

It can be deadly, even, to ignore conventional treatment and rely on unproven therapies. Research has suggested that cancer patients who reject traditional care are less likely to overcome their illness.

But Amaral can, it seems, produce some level of catharsis: If that's what you need, feel free to lie on the table.

The information contained in this article is for educational and informational purposes only and is not intended as health or medical advice. Always consult a physician or other qualified health provider regarding any questions you may have about a medical condition or health objectives.


Aquinas, Stephen Hawking and the mind of God – Catholic Herald Online

I have spent 17 years as a part-time chaplain to a Catholic secondary school, but on the feast of St Thomas Aquinas, I celebrate a moving leaving Mass. The feast of the Doctor Communis, or universal teacher, seems a fitting day on which to finish. The Dominican order, of which Thomas was an early luminary, has as its simple motto Veritas.

Today the claim to know the truth, still more to teach some things as true, seems an affront to many, a monstrous piece of arrogance. "How dare you claim a monopoly on the truth?" it demands, usually when speaking about some moral judgement. And yet when the late Stephen Hawking wrote that quantum physics was close to unlocking the secrets of the universe and producing a Theory of Everything, allowing Man to know the mind of God, no one seemed at all perturbed.

St Thomas is actually extremely cautious by comparison. His epistemology might best be summed up by paraphrasing the old Automobile Association marketing slogan: "I don't know much about the reality of things, but I know Someone who does." In Thomas's thought, for us to know the mind of God would not be the process of second-guessing with my intellect what God was like or what he was thinking. It would be my intellect participating in the knowledge God has of Himself and has chosen to share with me. I can know truth because it exists, rather than truth exists to the extent I know it. Indeed, St Thomas would go further and say that I am known by the Truth, loved by the Truth, and this is why I can have confidence not in the limitless capacity of human reason to discover the truth, but in the limitless depths of the truth which is discoverable when the intellect is opened to the divine light of wisdom.

This capacity for knowing truth in man is not a Promethean bid for freedom and autonomy. It is the very opposite. It is how God allows man a share in his own providence.

St Thomas says that the rational creature "partakes of a share of providence". Far from leaving man a slave to the gods, God desires him to have a share of the Eternal Reason, "whereby it [the rational creature] has a natural inclination to its proper act and end". Man can therefore not merely speculate about truth as an exercise in intellectual improvement, but live by the truth and so become human, and more, like gods, knowing good from evil.

This faculty of reason is, of course, elevated and perfected by grace. When we use such phrases, it is easy to caricature grace as some kind of intellectual bolt-on adding extra capacity to reason. But again we are speaking about God's own life overflowing into us. Grace does not make me more intellectually acute in terms of the power to compute; it makes me more able to understand truth, because God, in whom all things "live and move and have their being" (Acts 17:28), is sharing his life with me. It allows me in some proportional way to see things as He sees them. It is a participation in the Truth.

St Thomas, of course, was foremost a commentator on Scripture. His philosophy was essential to him because it gave a reasonable grounding to faith, without which man would not be able to know God freely, and he would be familiar with the idea of participation in the Truth from the Gospel which used to conclude every Mass. The Logos, the meaning or truth of all things, pitched his tent among us. To all who did accept him he gave power to become children of God (John 1:12). If man cannot know truth then he cannot know love.


Galileo and Einstein Are Proven Right by Satellite-Based Experiments Confirming the Tower of Pisa Experiment – Science Times

Physics as a science has two famous individuals intrinsic to what physics is. One of them is Galileo Galilei, said to have dropped two cannonballs of different sizes from the Tower of Pisa to test a theory. Generations later, Albert Einstein built the same principle into his theory of relativity (ToR): objects in free fall accelerate at the same rate, regardless of mass.

To find out if both Galileo and Einstein were correct, the experiment was repeated at satellite altitude. It was concluded that both assumptions were correct, no mistake about it. The accelerations of the two objects differed by no more than about two-trillionths of a percent.

Despite the accuracy of the two famed physicists, and although Einstein's ToR is accepted by the scientific community, researchers keep looking into its nooks and crannies for something that is "not" settled about it. It seems the norm for most scientists to pick at accepted notions and models until there are no more questions.

The big reveal behind all the nitpicking and uncertainty is that something is missing in the big picture. Science can be analogous to Swiss cheese, with too many holes that represent the uncertainty of how the cosmos works. Humankind is not so easily satisfied until all answers are complete.

Peter Wolf of the French National Center for Scientific Research's Paris Observatory made relevant remarks that capture what it means not to know how everything works.

Quantum mechanics and general relativity are each well studied, but putting the physics and calculations of the two together sends theorists into a tailspin. Even where general theories exist, they stand on shaky ground most of the time, and attempts to correlate the two are not panning out.

One of the most celebrated and confusing puzzles connected to the theory of relativity (ToR) is the existence of dark matter and dark energy. All efforts to find these exotic components of the universe have so far produced no proof (we have, at least, seen black holes).

Dark matter is supposedly everywhere, yet it cannot be seen; it is invisible even to our most complex tools, which can spot gamma rays popping up everywhere. One possible indicator of dark matter is gravitation, though its effect on ordinary matter is small. What if two objects of different masses did fall at different velocities? Then it might be dark matter influencing them.

To settle the question, scientists locked two cylinders, one titanium and one platinum, inside a satellite with a controlled environment. The test was conducted, and here are the particulars of the experiment:

a. The two cylinders had different masses, mirroring Galileo's experiment.

b. The satellite was in free fall, with no external forces disturbing it.

c. The cylinders were suspended in an electromagnetic field and released in drops lasting 100 to 200 hours.

d. The force needed to hold each cylinder in place in the field was measured to track its fall.

e. The conclusion reached: both fell at the same rate.

f. The experiment was repeated over two years, under the same conditions for each drop.
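The headline figure in such tests is the Eötvös ratio, the normalized difference between the two measured accelerations. Here is a minimal sketch with illustrative numbers only, not the mission's actual data:

```python
def eotvos_ratio(a1, a2):
    """Eotvos ratio: normalized difference of two free-fall accelerations.

    Zero means the two bodies fall identically, as Galileo's experiment
    and Einstein's equivalence principle predict.
    """
    return 2.0 * abs(a1 - a2) / (a1 + a2)

# Illustrative numbers only: two accelerations at satellite altitude
# agreeing to two-trillionths of a percent, i.e. a fractional
# difference of about 2e-14.
g = 7.9  # m/s^2, rough free-fall acceleration in low Earth orbit
eta = eotvos_ratio(g, g * (1.0 + 2e-14))
print(f"eta = {eta:.1e}")  # on the order of 2e-14
```

If dark matter tugged differently on titanium and platinum, it would show up as a nonzero value of this ratio.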


These experiments confirmed Galileo's finding and Einstein's assertion based on the theory of relativity (ToR). They are just some of the experiments trying to dissect the nature of the universe; scientists are attempting more related experiments to test the extremes of our knowledge.


From quarks to quails: can the different sciences be unified? – The Conversation UK

The world around us is populated by a vast variety of things ranging from genes and animals to atoms, particles and fields. While these can all be described by the natural sciences, it seems some can only be understood in terms of biology while others can only be explored using chemistry or physics. And when it comes to human behaviour, disciplines like sociology or psychology are the most useful.

This richness has intrigued philosophers, leading them to think about how the sciences are connected (or disconnected), but also about how things in the world relate to one another. Our new project, called the Metaphysical Unity of Science and funded by the European Research Council, is trying to answer these questions.

In general, philosophy distinguishes between two main questions in this area. First, there is the epistemological question of how specific sciences or theories are connected to one another. For example, how is biology related to physics or psychology to biology? This focuses on the state of our knowledge about the world. It involves looking at the concepts, explanations and methodologies of the various sciences or theories, and examining how they are related.

But there is also a metaphysical question of how things in the world are related to each other. Are they over and above the stuff that is postulated by fundamental physics? That is, are molecules, chairs, genes and dolphins just complex aggregates of subatomic particles and their fundamental physical interactions? If so, is living matter in any way different from inanimate matter?

This is a very difficult question to answer, not least because of the existential weight it carries. If humans, among other things, are just sums of physical parts, then we might wonder how we can make meaningful sense of consciousness, emotions and free will.

We could broadly map the existing philosophical positions within two extremes. On the one side, there is the reductionist stance, which in one form claims that everything is made of and determined by physical building blocks: there are no chairs, dolphins, economic inflation or genes, only particles and fields. This implies that sciences like chemistry and biology are just helpful tools to understand and manipulate the world around us.

In principle, the correct physics would explain everything that happens and exists in the world. It could therefore be, or help build, the basis for a unified theory. On this view, even something as complex as consciousness, which science may not (yet) properly explain, is ultimately down to the physical behaviour of the particles that make up the neurons in the brain.

On the other side, there is the pluralist stance, which argues that everything in the world has an autonomous existence that we can't eliminate. While there might be a sense in which chemical, biological or economic entities are governed by physical laws, these entities are not mere aggregations of physical stuff. Rather, they exist in some sense over and above the physical.

This implies that the special sciences are not just tools that serve specific goals, but are accurate and true descriptions that identify real features of the world. Many pluralists are therefore sceptical about whether consciousness can ever be explained by physics, suspecting that it may in fact be more than the sum of its physical parts.

There is evidence to support both reductionism and pluralism, but there are also objections against both. While many philosophers currently work on addressing these objections, others focus on finding new ways to answer these questions.

This is where the unity of science comes in. The notion originates from the reductionist side, arguing the sciences are unified. But some forms of unity reject reductionism and the strict hierarchies it invokes between the sciences, but nevertheless adhere to the broad thesis that the sciences are somehow interconnected or dependent on each other.

Our team, consisting of philosophers with an expertise in different areas of philosophy and science, is trying to find new ways to think about the unity of science. We want to identify the appropriate criteria that would suffice to convincingly claim that some form of unity holds between the natural sciences. We are also looking at case studies in order to investigate neighbouring sciences and how they depend on each other.

The outcomes of our project could have important implications that go beyond academic curiosity, ultimately helping science to progress. If there was indeed a way to describe how life is related to elementary particles, that would change the game completely.

So far, the project has conducted a number of case studies at the boundaries between biology and chemistry, and chemistry and physics. We are now starting to apply the results from these cases to the metaphysical framework for the unity of science. For example, one of our studies showed that many biological properties of proteins can be explained in terms of their chemical microstructure, rather than their environment. This doesn't prove that reductionism is true, but it does lend support to the view.

Another study investigated similar issues from the perspective of chemistry and quantum mechanics. Both theories assume that an isolated molecule has structure and is stable, but the study argued that you cannot prove this is definitely the case; we describe this as an idealisation. It showed that both chemistry and quantum mechanics rely on making such idealisations and argued that identifying them can improve our metaphysical understanding of molecules.

Ultimately, understanding the interconnections of the natural sciences is a valuable source for understanding not only the world around us, but also ourselves. We are hoping that our investigation of these links can illuminate in new ways how things in the world relate to each other.


Found a scientific explanation for human stupidity – The Times Hub

Scientists have found an explanation for the phenomenon of human stupidity. Experts at the University of Science and Technology of China based their account on the rules of quantum physics.

Psychologists have long been concerned with the question of why people tend to do stupid things even when they are aware of the consequences. Classical theory implies that man is a rational creature capable of simple, competent choices. The scientists' findings suggest that the reason is the uncertainty inherent in quantum physics. As a basis, they considered a problem-solving paradigm known as quantum reinforcement learning, a technique built around rewards or punishments for outcomes.

This implies that when making decisions, people take uncertainty into account, although they may not perceive it directly. For example, an individual may do something at random when the outcome of an event cannot be predicted with precision, even at the quantum level, yet a probability of the best outcome still exists.
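For readers unfamiliar with the reward-and-punishment framing, a minimal classical sketch may help. This is my own illustration of reward-driven learning, not the study's quantum algorithm; in the quantum version, such choice probabilities are replaced by quantum amplitudes.

```python
import random

def update(values, choice, reward, lr=0.1):
    """Nudge the estimated value of the chosen option toward the reward."""
    values[choice] += lr * (reward - values[choice])

random.seed(0)
values = [0.0, 0.0]  # estimated values of two options, A and B
for _ in range(200):
    choice = 0 if random.random() < 0.5 else 1  # explore at random
    reward = 1.0 if choice == 0 else 0.0        # only option A pays off
    update(values, choice, reward)

# After many trials the agent values the rewarded option more highly.
assert values[0] > values[1]
```

The random exploration step is the classical stand-in for the uncertainty the article describes: the agent sometimes acts at random, yet the reward signal still steers it toward the better outcome on average.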

Natasha Kumar is a general assignment reporter at the Times Hub. She has covered sports, entertainment and many other beats in her journalism career, and has lived in Manhattan for more than 8 years. She studied at the University of Calcutta. Natasha has appeared periodically on national television shows and has been published in (among others) Hindustan Times and Times of India.


Why string theory persists despite the knotty physics – Space.com

Paul M. Sutter is an astrophysicist at SUNY Stony Brook and the Flatiron Institute, host of Ask a Spaceman and Space Radio, and author of Your Place in the Universe.

String theory is a hypothetical idea that purports to be a theory of everything, able to explain the fundamental microscopic aspects of all of reality, from the forces of nature to the building blocks of all matter. It's a powerful idea, unfinished and untested, but one that has persisted for decades.

But the theory itself had rather inauspicious beginnings, employed to explain the strong nuclear force. And it wasn't very good at it.

Up until the 1960s, physicists were feeling pretty confident: They had discovered what they thought to be the fundamental constituents of matter (protons, neutrons and electrons). And they had recently accomplished the feat of unifying quantum mechanics and special relativity with what they called quantum electrodynamics (QED), which was a completely quantum description of the electromagnetic force.

But then, they started developing incredibly powerful particle colliders, and suddenly, they didn't really like what they were finding. In these instruments, the physicists found a bunch of broken-up protons and neutrons, revealing that these particles were not fundamental at all. And what's worse, the colliders started spitting out all sorts of new kinds of particles: mesons, pions, kaons, resonances, the works.

And governing them all was an apparently new force of nature: the strong force.

The tools used to develop QED were simply falling apart with this diverse host of particles popping out of the colliders. Physicists were at a loss and willing to try new ideas.

So some theorists started rummaging around in the attic, looking for any mathematical tools that might prove useful. And there they found an interesting set of ideas first proposed by Werner Heisenberg, one of the founders of quantum mechanics.

In the early days of quantum mechanics (the first half of the 20th century), it wasn't exactly clear what would be the best mathematical approach to explain all that weirdness. In the 1930s, Heisenberg suggested a rather extreme idea. The normal classical-physics approach is to 1) write down the starting positions of all the particles involved in an interaction, 2) have a model of that interaction, and 3) follow the evolution of those particles through time, using your model to predict a result.

Instead, he argued, why don't we just skip all that work and develop a machine, called the scattering matrix, or s-matrix, that immediately jumps from the initial state to the final state, which is what we really want to measure. That machine encodes all the interaction in a giant box without actually worrying about the evolution of the system.

It was a cool idea but proved too difficult for anybody to get excited about, and it died on the vine until physicists got desperate in the '60s.

Reviving this approach to the newfound strong nuclear force, theorists extended and developed the s-matrix idea, finding that certain mathematical functions that repeated themselves were especially powerful.

Other theoretical physicists dived in, and couldn't resist the urge to give the framework a traditional interpretation in terms of time and space and following the evolution of particles. And there they found something surprising: in order to describe the strong force, it had to be carried by tiny, vibrating strings.

These strings appeared to be the basic building blocks of the strong force, with their quantum mechanical vibrations determining their properties in the microscopic world; in other words, their vibrations made them look and act like tiny little particles.

In the end, this early version of string theory, known as baryonic string theory for the kinds of particles it tried to explain, didn't quite cut the mustard. It was fiendishly difficult to work with, making predictions nearly impossible. It also required the existence of particles that travel faster than the speed of light, called tachyons. That was a major problem for early string theory, since tachyons don't exist, and if they did they would flagrantly violate the incredibly successful special theory of relativity.

Oh, did I mention that baryonic string theory required 26 dimensions to make sense mathematically? That was a pretty big pill to swallow, considering that the universe has only four dimensions.

Ultimately, baryonic string theory died for two reasons. First, it made predictions that disagreed with experiments. That's a big no-no. And second, an alternative theory of the strong force, involving a new hypothetical particle called the quark and a force carrier called the gluon, was able to be folded into the quantum framework and successfully make predictions. This new theory, called quantum chromodynamics, or QCD, today remains our theory of the strong nuclear force.

And as for string theory, it mostly faded into the background. It would be revived in the 1970s, once theorists realized that it could describe more than the strong force and after they found a way to get rid of the theory's tachyon predictions. The theory still needed extra dimensions, but physicists were able to reduce the number to a more reasonable-sounding 10. And with the realization that those dimensions could be tiny and curled up below the scale at which we could directly observe them, string theory didn't seem so wacky after all.

And today, that string theory also remains, still attempting to explain the strong force and so much more.

Learn more by listening to the episode "Is String Theory Worth It? (Part 2: Tuning the Strings)" on the Ask A Spaceman podcast, available on iTunes and on the Web at http://www.askaspaceman.com. Thanks to John C., Zachary H., @edit_room, Matthew Y., Christopher L., Krizna W., Sayan P., Neha S., Zachary H., Joyce S., Mauricio M., @shrenicshah, Panos T., Dhruv R., Maria A., Ter B., oiSnowy, Evan T., Dan M., Jon T., @twblanchard, Aurie, Christopher M., @unplugged_wire, Giacomo S., Gully F. for the questions that led to this piece! Ask your own question on Twitter using #AskASpaceman or by following Paul @PaulMattSutter and facebook.com/PaulMattSutter.

Follow us on Twitter @Spacedotcom and on Facebook.

Go here to read the rest:

Why string theory persists despite the knotty physics - Space.com

Nothingness Has Friction, And The Fastest Spinning Object Ever Made Could Measure It – ScienceAlert

Scientists have created the fastest spinning object ever made, taking them a big step closer to being able to measure the mysterious quantum forces at play inside 'nothingness'.

The record-breaking object in question is a tiny piece of silica, capable of whipping around billions of times per second - creating sufficient sensitivity that the team think they'll be able to use it to detect unfathomably small amounts of drag caused by the 'friction' within a vacuum.

The science of nothingness is quickly becoming a big deal in physics, as we strive to understand how the Universe operates at its very foundations.

Researchers are now comfortable with the fact that empty space isn't empty at all - it's actually full of quantum fluctuations that we're only just now learning how to detect. But we're still struggling to find tools sensitive enough to measure these tiny forces at play.

Several years ago, researchers from Purdue University in the US took a step forward by developing a method for measuring the torque, or twisting force, acting on a tiny oblong piece of diamond.

By using a laser to suspend the material in a vacuum, physicists had an incredibly finely-tuned device for working out the gentle nudge of surrounding fields.

"A change of the orientation of the nanodiamond caused the polarisation of the laser beam to twist," physicist Tongcang Li explained in 2016.

"Torsion balances have played historic roles in the development of modern physics. Now, an optically levitated ellipsoidal nanodiamond in a vacuum provides a new nanoscale torsion balance that will be many times more sensitive."

Three years later, Li and his team have replaced the diamond with tiny balls of silica just 150 nanometres in diameter, which were held aloft inside a vacuum chamber with a 500 milliwatt laser.

Using polarised pulses from a second laser, the tiny silica blobs could be set spinning.

And spin they did, with the dumbbell-shaped particles reaching an astonishing 300 billion rpm, breaking the limits on previous attempts which barely managed one fifth of that speed.
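Converting the headline figure to revolutions per second makes the scale easier to grasp; this quick check uses only the numbers quoted in the article:

```python
rpm = 300e9                    # reported record: 300 billion revolutions per minute
rev_per_sec = rpm / 60         # one minute is 60 seconds
print(rev_per_sec)             # five billion full rotations every second

# Previous attempts "barely managed one fifth of that speed".
previous_rev_per_sec = rev_per_sec / 5
print(previous_rev_per_sec)    # about one billion rotations per second
```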

Revolutions aside, it was the sensitivity of the rotation's forces that the researchers were aiming to improve upon.

While this experiment relies on modern technology, it has its roots in an experiment that's centuries old.

At the end of the 18th century, British scientist Henry Cavendish set out to put hard figures to Newton's laws on gravity by attempting to measure the force using two pairs of lead weights.

Two relatively light lead spheres balanced on either end of a 1.8-metre-wide beam were hung from a wire near a second pair of heavy masses locked in place. A measure of the torsion on the wire provided the first real measure of a gravitational constant.

This new, nanosized version of Cavendish's experiment could be so sensitive, it could theoretically be used to measure the faint tugging of electromagnetic fields that creates a kind of friction in empty space, formed by the inherent uncertainty of quantum physics.

"A fast-rotating neutral nanoparticle can convert quantum and thermal vacuum fluctuations to radiation emission," the researchers write in their report.

"Because of this, the electromagnetic vacuum behaves like a complex fluid and will exert a frictional torque on a nanorotor."

The twisting force of torsion is measured in units called 'newton metres', where one newton metre is one newton of force applied at a point of leverage one metre away.

An experiment in 2016 developed a method that could measure torque as small as around 3 x 10^-24 newton metres, a process that required temperatures just a fraction of a degree above absolute zero.

Li and his team blitzed this previous record as well, comparing the way the silica blobs spun between laser cycles to come up with torque measurements of just 1.2 x 10^-27 newton metres. At room temperature too, no less.
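Taking the two sensitivity figures at face value, the improvement works out to a factor of roughly 2,500 — more than three orders of magnitude. A quick sanity check on the article's numbers:

```python
previous_limit = 3e-24     # newton metres, the 2016 cryogenic experiment
new_limit = 1.2e-27        # newton metres, Li's room-temperature measurement

improvement = previous_limit / new_limit
print(round(improvement))  # 2500: the new setup is roughly 2,500 times more sensitive
```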

In the future, experiments varying the make-up of the spinning material, as well as environmental factors such as temperature and objects in the vicinity, could be used to finally measure how undisturbed quantum fields bubble away at the lowest energies.

This research was published in Nature Nanotechnology.


Focus: Detecting the Rotation of a Quantum Spin – Physics

January 17, 2020 • Physics 13, 5

Researchers detected the effect of rotating a crystal on the spin of an embedded particle, a result that could lead to ultrasensitive rotation sensors.

A. Wood/Univ. of Melbourne

A new experiment has demonstrated that rotating a quantum object affects its spin in a way that can be detected. Researchers whirled a crystal at 200,000 rpm and detected the effects on a single quantum spin within the crystal. The finding was theoretically expected, but it could lead to new techniques for sensing rotation at the nanometer scale.

Particles such as electrons and protons have fixed values of quantum spin, an intrinsic angular momentum that does not correspond to physical rotation of the particle as it does for classical objects. Place such a particle (a single spin) in a magnetic field, and its spin vector rotates, or precesses, around the direction of the field vector, rather like a gyroscope. The speed of precession depends on the magnetic-field strength. Many influences, such as the fields of neighboring atoms, can affect the field that the spin experiences and thus the precession speed. If the particle has spin 1/2, then in an upward-pointing magnetic field it has a lower-energy (spin-up) state and a higher-energy (spin-down) state. Electromagnetic radiation with the same frequency as the precession can excite transitions between these two states.
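The linear dependence of precession speed on field strength can be sketched numerically. A hedged illustration: the gyromagnetic ratio below is the standard free-electron value (about 28 GHz per tesla, close to that of an NV centre spin), and the 1 millitesla field is an invented example, not a figure from the experiment:

```python
GAMMA_E = 28.02e9  # Hz per tesla: free-electron gyromagnetic ratio (approximate)

def precession_frequency_hz(field_tesla):
    """Larmor precession frequency: proportional to the field strength."""
    return GAMMA_E * field_tesla

# In an example 1 millitesla field, the spin precesses about 28 million
# times per second, so microwave radiation near 28 MHz would drive
# transitions between the spin-up and spin-down states.
f = precession_frequency_hz(1e-3)
print(f / 1e6)  # roughly 28 (MHz)

# Doubling the field doubles the precession frequency.
assert precession_frequency_hz(2e-3) == 2 * f
```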

It has been known for some time that the physical rotation of quantum spins (say, as part of a crystal) can alter the rate at which they precess, and some researchers hope that this fact might lead to a device for ultrasensitive detection of rotation [1]. Previous experiments have shown the effect for a large collection of spins, but doing so with a single spin would provide the ultimate in miniaturization and high spatial resolution. "How do you get a single spin to tell you that it's rotating?" asks Alexander Wood of the University of Melbourne in Australia. The challenge, he says, is to show unambiguously that it is the rotation, and not some other influence such as a stray magnetic field, that has produced the effect on precession. The team overcame this difficulty by developing a complicated experiment that shows the effect of rotation indirectly.

Wood and his colleagues looked at diamond-crystal defects called nitrogen vacancy centers (NVs): places in the lattice where a lone nitrogen atom has replaced a carbon atom immediately adjacent to a vacancy (missing atom). This replacement leaves an unpaired electron, and it interacts with other electrons to create what is effectively (in the appropriate magnetic fields) an isolated spin-1/2 particle. If the NVs are very sparse, each spin can be seen and studied individually.

The team attached a small slab of diamond containing NVs to a motor spinning at 200,000 rpm in an external magnetic field. They looked at one NV and applied a technique called optically detected spin-echo magnetic resonance. For each rotation cycle (lasting a fraction of a millisecond), the team placed the spin in a lower-energy state using a green light pulse and then hit the spin with three carefully-timed microwave pulses. Finally, at the end of each rotation cycle, they measured the fluorescence emitted, which signaled whether the spin had been excited to the higher-energy state.
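The quoted cycle time is easy to verify: at 200,000 rpm, one full rotation takes about 0.3 milliseconds, which is the window each light-and-microwave pulse sequence has to fit inside (arithmetic using only the article's figure):

```python
rpm = 200_000
period_s = 60 / rpm        # seconds per rotation: 60 s / 200,000 revolutions
period_ms = period_s * 1_000
print(period_ms)           # about 0.3 ms per cycle, "a fraction of a millisecond"
```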

The orientations of the microwave field vectors (polarizations) with respect to the spin determined the probability of the spin being excited to the higher-energy level. Using a theory accounting for this effect, the precession of the spin in the applied and microwave fields, and other effects, the team predicted the change in fluorescence as they varied the microwave polarization angle. These predictions agreed with the experiments. It turned out that the effect of varying the microwave orientation provided a signature in the fluorescence that uniquely signaled the NV's rotation and could not be explained by other factors.

The current experiment is a proof-of-principle with limited sensitivity, Wood says, but in the future, similar measurements could be used to detect rotation with high precision. The low sensitivity results from the tiny sensor volume, essentially a single atom. But he says that this probe size could also be an advantage: a nanometer-sized diamond containing a single NV might act as a probe for sensing rotation of living cells or biological fluids.

Pauli Kehayias of Sandia National Laboratory in Albuquerque, New Mexico, says that some researchers are already trying to use diamond NVs as the basis for a gyroscope that detects slow rotation [2-4], for example in navigation. But he says that using individual spins could lead to fast-rotation sensors that work at the atomic scale. In addition, a single spin would avoid complications from the inhomogeneity that might exist in a group of many NVs.

This research is published in Physical Review Letters.

Philip Ball

Philip Ball is a freelance science writer in London. His latest book is How To Grow a Human (University of Chicago Press, 2019).


Are the aliens us? UFOs may be piloted by time-traveling humans, book argues – Space.com

Unidentified flying objects (UFOs) have captured the public's attention over the decades. As exoplanet detection is on the rise, why not consider that star-hopping visitors from afar might be buzzing through our friendly skies by taking an interstellar off-ramp to Earth?

On the other hand, could those piloting UFOs be us, our future progeny that have mastered the landscape of time and space? Perhaps those reports of people coming into contact with strange beings represent our distant human descendants, returning from the future to study us in their own evolutionary past.

The idea of us being them has been advanced before. But a recent book, "Identified Flying Objects: A Multidisciplinary Scientific Approach to the UFO Phenomenon" (Masters Creative LLC, 2019), takes a fresh look at this prospect, offering some thought-provoking proposals.

Related: UFO Watch: 8 Times the Government Looked for Flying Saucers

The book was written by Michael Masters, a professor of biological anthropology at Montana Technological University in Butte. Masters thinks that, given the accelerating pace of change in science, technology and engineering, it is likely that humans of the distant future could develop the knowledge and machinery necessary to return to the past.

The objective of the book, Masters said, is to spur a new and more informed discussion among believers and skeptics alike.

"I took a multidisciplinary approach in order to try and understand the oddities of this phenomenon," Masters told Space.com. "Our job as scientists is to be asking big questions and try to find answers to unknown questions. There's something going on here, and we should be having a conversation about this. We should be at the forefront of trying to find out what it is."

Dubbing these purported visitors "extratempestrials," Masters notes that close-encounter accounts typically describe UFO tenants as bipedal, hairless, human-like beings with large brains, large eyes, small noses and small mouths. Further, the creatures are often said to have the ability to communicate with us in our own languages and possess technology advanced beyond, but clearly built upon, today's technological prowess.

Masters believes that, through a comprehensive analysis of consistent patterns of long-term biocultural change throughout human evolution, as well as recent advances in our understanding of time and time travel, we may begin to consider this future possibility in the context of a currently unexplained phenomenon.

"The book ties together those known aspects of our evolutionary history with what is still an unproven, unverified aspect of UFOs and aliens," he said.

But why not argue that ET is actually a traveler from across the vastness of space, from a distant planet? Wouldn't that be a simpler answer?

"I would argue it's the opposite," Masters responded. "We know we're here. We know humans exist. We know that we've had a long evolutionary history on this planet. And we know our technology is going to be more advanced in the future. I think the simplest explanation, innately, is that it is us. I'm just trying to offer what is likely the most parsimonious explanation."

Related: 5 Bold Claims of Alien Life

As an anthropologist who has worked on and directed numerous archaeological digs in Africa, France and throughout the United States, Masters observes that it is easy to conceptualize just how much more could be learned about our own evolutionary history if we currently possessed the technology to visit past periods of time.

"The alleged abduction accounts are mostly scientific in nature. It's probably future anthropologists, historians, linguists that are coming back to get information in a way that we currently can't without access to that technology," Masters said.

"That said, I do think that some component of it is also tourism," he added. "Undoubtedly in the future, there are those that will pay a lot of money to have the opportunity to go back and observe their favorite period in history. Some of the most popular tourist sites are the pyramids of Giza and Machu Picchu in Peru old and prehistoric sites."

Masters calls his UFO research "an evolving project."

"There's certainly still missing pieces of the puzzle," he said. "There are aspects of time that we don't yet understand. Wanted is a theory of quantum gravity, and we can meld general relativity and quantum mechanics. I'm just trying to put forth the best model I can based on current scientific knowledge. Hopefully, over time, we can continue to build on this."

"Masters postulates that using a multidisciplinary scientific approach to the UFO phenomenon will be what it takes to solve this mystery once and for all, and I couldn't agree more," said Jan Harzan, executive director of the nonprofit Mutual UFO Network (MUFON).

"The premise that UFOs are us from the future is one of many possibilities that MUFON is exploring to explain the UFO phenomenon. All we know for sure is that we are not alone," Harzan added. "Now the question becomes, 'Who are they?' And Masters makes a great case for the time-traveler hypothesis."

But not everybody is on board with the idea, as you might imagine.

"There is nothing in this book to take seriously, as it depends on the belief that 'time travel' is not only possible, but real," said Robert Sheaffer, a noted UFO skeptic.

Supposedly our distant descendants have mastered time travel, Sheaffer said, and have traveled back in time to visit us. "So, according to Masters, you just spin something fast enough and it will begin to warp space, and even send stuff backwards in time. This is a highly dubious claim," he said.

Moreover, Sheaffer said that Masters tries to deduce aliens' evolutionary history from witness descriptions, "suggesting that he takes such accounts far too literally."

Related: 7 Things Most Often Mistaken for UFOs

David Darling is a British astronomer and science writer who has authored books on a sweeping array of topics from gravity, Zen physics and astrobiology to teleportation and extraterrestrial life.

"I've often thought that if some UFOs are 'alien' craft, it's just as reasonable to suppose that they might be time machines from our own future than that they're spacecraft from other stars," Darling told Space.com. "The problem is the 'if.'

Darling said that, while some aerial phenomena have eluded easy identification, one of the least likely explanations, it seems to him, is that they're artificial and not of this world.

"Outside of the popular mythos of flying saucers and archetypal, big-brained aliens, there's precious little credible evidence that they exist," Darling said. "So, my issue with the book is not the ingenuity of its thesis, but the fact that there's really no need for such a thesis in the first place."

Larry Lemke, a retired NASA aerospace engineer with an interest in the UFO phenomenon, finds the prospect of time-travelling visitors from the future intriguing.

"The one thing that has become clear over the decades of sightings, if you believe the reports, is that these objects don't seem to be obeying the usual laws of aerodynamics and Newtonian mechanics," Lemke said, referring to the relationship, in the natural world, between force, mass and motion.

Toss in for good measure Einstein's theory of general relativity and its consequences, like wormholes and black holes, along with other exotic physics ideas such as the Alcubierre warp-drive bubble.

"There's a group of thinkers in the field of UFOs that point out that phenomena reported around some UFOs do, in fact, look exactly like general relativity effects," Lemke said. Missing time is a very common one."

Lemke said that the idea that somebody has figured out how to manipulate space-time, on a local scale with a low-energy approach, would explain a lot of things across the UFO phenomenon, including those baffling Tic-Tac-shaped objects recently reported by jet-fighter pilots and radar operators.

"No matter how much knowledge we have, how much we think we know, there's always some frontier beyond," he said. "And to understand that frontier is getting more and more esoteric."

Leonard David is the author of the recently released book, "Moon Rush: The New Space Race" published by National Geographic in May 2019. A longtime writer for Space.com, David has been reporting on the space industry for more than five decades.


‘How can we compete with Google?’: the battle to train quantum coders – The Guardian

There is a laboratory deep within University College London (UCL) that looks like a cross between a rebel base in Star Wars and a scene imagined by Jules Verne. Hidden within the miles of cables, blinking electronic equipment and screens is a gold-coloured contraption known as a dilution refrigerator. Its job is to chill the highly sensitive equipment needed to build a quantum computer to close to absolute zero, the coldest temperature in the known universe.

Standing around the refrigerator are students from Germany, Spain and China, who are studying to become members of an elite profession that has never existed before: quantum engineering. These scientists take the developments in quantum mechanics over the past century and turn them into revolutionary real-world applications in, for example, artificial intelligence, self-driving vehicles, cryptography and medicine.

The problem is that there is now what analysts call a quantum bottleneck. Owing to the fast growth of the industry, not enough quantum engineers are being trained in the UK or globally to meet expected demand. This skills shortage has been identified as a crucial challenge and will, if unaddressed, threaten Britain's position as one of the world's top centres for quantum technologies.

"The lack of access to a pipeline of talent will pose an existential threat to our company, and others like it," says James Palles-Dimmock, commercial director of London- and Oxford-based startup Quantum Motion. "You are not going to make a quantum computer with 1,000 average people; you need 10 to 100 incredibly good people, and that'll be the case for everybody worldwide, so access to the best talent is going to define which companies succeed and which fail."

This doesn't just matter to niche companies; it affects everyone. "If the UK is to remain at the leading edge of the world economy then it has to compete with the leading technological and scientific developments," warns Professor Paul Warburton, director of the CDT in Delivering Quantum Technologies. "This is the only way we can maintain our standard of living."

This quantum bottleneck is only going to grow more acute. Data is scarce, but according to research by the Quantum Computing Report and the University of Wisconsin-Madison, on one day in June 2016 there were just 35 advertised vacancies at commercial quantum companies worldwide. By December, that figure had leapt to 283.

In the UK, Quantum Motion estimates that the industry will need another 150-200 quantum engineers over the next 18 months. In contrast, Bristol University's centre for doctoral training produces about 10 qualified engineers each year.

In the recent past, quantum engineers would have studied for their PhDs in small groups inside much larger physics departments. Now there are interdisciplinary centres for doctoral training at UCL and Bristol University, where graduates in such subjects as maths, engineering and computer science, as well as physics, work together. As many of the students come with limited experience of quantum technologies, the first year of their four-year course is a compulsory introduction to the subject.

"Rather than work with three or four people inside a large physics department, it's really great to be working with lots of people all on quantum, whether they are computer scientists or engineers. They have a high level of knowledge of the same problems, but a different way of thinking about them because of their different backgrounds," says Bristol student Naomi Solomons.

While Solomons is fortunate to study on an interdisciplinary course, these are few and far between in the UK. "We are still overwhelmingly recruiting physicists," says Paul Warburton. "We really need to massively increase the number of PhD students from outside the physics domain to really transform this sector."

The second problem, according to Warburton, is competition with the US. "Anyone who graduates with a PhD in quantum technologies in this country is well sought after in the USA." The risk of lucrative US companies poaching UK talent is considerable. "How can we compete with Google or D-Wave if it does get into an arms race?" says Palles-Dimmock. "They can chuck $300,000-$400,000 at people to make sure they have the engineers they want."

There are parallels with the fast growth of AI. In 2015, Uber's move to gut Carnegie Mellon University's world-leading robotics lab of nearly all its staff (about 50 in total) to help it build autonomous cars showed what can happen when a shortage of engineers causes a bottleneck.

Worryingly, Doug Finke, managing editor at Quantum Computing Report, has spotted a similar pattern emerging in the quantum industry today. "The large expansion of quantum computing in the commercial space has encouraged a number of academics to leave academia and join a company, and this may create some shortages of professors to teach the next generation of students," he says.

More needs to be done to significantly increase the flow of engineers. One way is through diversity: Bristol has just held its first women in quantum event with a view to increasing its number of female students above the current 20%.

Another option is to create different levels of quantum engineers. "A master's degree or a four-year dedicated undergraduate degree could be the way to mass-produce engineers because industry players often don't need a PhD-trained individual," says Turner. "But I think you would be training more a kind of foot soldier than an industry leader."

One potential roadblock could be growing threats to the free movement of ideas and people. "Nations seem to be starting to get a bit protective about what they're doing," says Prof John Morton, founding director of Quantum Motion. "[They] are often using concocted reasons of national security to justify retaining a commercial advantage for their own companies."

Warburton says he has especially seen this in the US. This reinforces the need for the UK to train its own quantum engineers. "We can't rely on getting our technology from other nations. We need to have our own quantum technology capability."
