Partnership between the University of Chicago and Argonne National Laboratory Leads to New Methods of Quantum Communication – The Chicago Maroon

The University of Chicago, working with scientists from Argonne National Laboratory, has developed a new fiber-optic quantum loop to expand quantum communication experiments. The 52-mile-long loop, consisting of two 26-mile cables that link Argonne to the Illinois Tollway near suburban Bolingbrook, is one of the longest ground-based channels for quantum communication in the country.

This loop network gives researchers a platform for repeatable experiments on using quantum entanglement to send effectively unhackable information over long distances, according to UChicago News. Researchers at both Argonne and UChicago plan to use the loop to examine and harness the properties of quantum entanglement. This phenomenon links two (or more) particles in a shared state, so that whatever happens to one immediately affects the other, no matter how far apart they have traveled.
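
For reference, the simplest entangled state of two quantum bits A and B can be written in standard textbook notation (this is general background, not something taken from the article):

\[
|\Phi^{+}\rangle \;=\; \frac{1}{\sqrt{2}}\bigl(|0\rangle_{A}|0\rangle_{B} + |1\rangle_{A}|1\rangle_{B}\bigr),
\]

in which neither qubit has a definite value on its own, yet measurements of A and B always return the same result, however far apart the two are carried.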

David Awschalom, principal investigator and professor of molecular engineering at the University of Chicago, believes that establishing the loop will help both the city of Chicago and the nation build similar networks for securely transmitting information and data over long distances.

"The loop will enable us to identify and address challenges in operating a quantum network and can be scaled to test and demonstrate communication across even greater distances to help lay the foundation for a quantum internet," he said.

According to Argonne National Laboratory, researchers have performed a series of experiments aimed at transmitting signals through the loop using photon emission from ensembles of ions. These ions can serve as a form of memory for the loop. By creating this kind of functional quantum memory, researchers can optimize quantum communication as a step toward a quantum internet, a highly secure network of quantum computers and other quantum devices.

The research performed by the University of Chicago and Argonne National Laboratory will lead to optimization of data collection and the internet, according to Paul Kearns, director of Argonne National Laboratory.

Along with the UChicago quantum loop, Argonne National Laboratory is working with Fermi National Accelerator Laboratory to plan and develop a similar two-way quantum link network. When the two projects are connected, it will form one of the longest networks that can be used to send secure information using quantum physics, according to Argonne National Laboratory.

Both Argonne National Laboratory and the University of Chicago are members of the Chicago Quantum Exchange, a community hub of researchers aimed at advancing academic and industrial efforts to understand quantum information. Funding for the quantum loop was provided by the U.S. Department of Energy.

Has Physics Lost Its Way? – The New York Times

Of course, Lindley reminds us, what constitutes a good scientific theory depends on the scientific context of its time. Surely not, you might think; don't proper scientific theories have to satisfy timeless criteria such as explaining all the phenomena the theories they displace are able to, being able to make testable predictions, being repeatable, and so on? Well, yes, but here is where we get to Lindley's central thesis: Contemporary theoretical physicists seem to have reverted to the idealized philosophy of Platonism. As he puts it, "The spirit of Plato is abroad in the world again." Is this true? Plato's stance was that it was enough to think about the universe. Surely, we can do better than that today, with our far more powerful mathematical tools and an abundance of empirical data to test our theories against. No physicist I know would say that to understand the laws of nature it is sufficient to think about them.

While it's clear that nature obeys mathematical rules, a happy middle ground between Plato and Aristotle would seem to be preferable: to make the math our servant, not our master. After all, mathematics alone cannot entirely explain reality. Without a narrative to superimpose on the math, the equations and formulas lack a connection with physical reality. Lindley makes this point forcefully: "I find it essentially impossible to think of physical theories and laws only in mathematical terms. I need the help of a physical picture to make sense of the math." About this, I am in total agreement. The mathematics can be as pretty and aesthetically pleasing as you like, but without a physical correlative, that is all it is: pretty math.

According to Lindley, something happened in 20th-century theoretical physics that caused some in the field to reach back to the ancient justifications for mathematical elegance as a criterion for knowledge, even truth. In 1963, the great English quantum physicist Paul Dirac famously wrote, "It is more important to have beauty in one's equations than to have them fit an experiment." To be fair, Dirac was a rather special individual, since many of his mathematical predictions turned out to be correct, such as the existence of antimatter, which was discovered a few years after his equation predicted it. But other physicists took this view to an extreme. The German mathematician Hermann Weyl went as far as to say, "My work always tried to unite the truth with the beautiful, and when I had to choose one or the other, I usually chose the beautiful." Lindley argues that this attitude is prevalent among many researchers working at the forefront of fundamental physics today and asks whether these physicists are even still doing science if their theories do not make testable predictions. After all, if we can never confirm the existence of parallel universes, then isn't it just metaphysics, however aesthetically pleasing it might be?

But Lindley goes further by declaring that much fundamental research, whether in particle physics, cosmology or the quest to unify gravity with quantum mechanics, is based purely on mathematics and should not be regarded as science at all, but, rather, philosophy. And this is where I think he goes too far. Physics has always been an empirical science; just because we don't know how to test our latest fanciful ideas today does not mean we never will.

The three-body problem shows us why we can’t accurately calculate the past – Universe Today

Our universe is driven by cause and effect. What happens now leads directly to what happens later. Because of this, many things in the universe are predictable. We can predict when a solar eclipse will occur, or how to launch a rocket that will take a spacecraft to Mars. This also works in reverse. By looking at events now, we can work backward to understand what happened before. We can, for example, look at the motion of galaxies today and know that the cosmos was once in the hot dense state we call the big bang.

This is possible thanks to a property of physics known as time symmetry. The laws of physics work the same way regardless of the direction of time. If you watch an animation of an orbiting planet, you have no way to know whether it is running forward or backward. The causality of physics allows for causes to be effects and effects to be causes. There is no preferred direction for time.
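
A minimal way to see this formally (a textbook observation, not something from the article): Newton's second law for a particle in a potential V reads

\[
m\,\ddot{x}(t) = -\frac{dV}{dx}\bigl(x(t)\bigr),
\]

and because the acceleration involves a second derivative in time, the time-reversed trajectory y(t) = x(-t) satisfies exactly the same equation; the two sign changes from substituting t with -t cancel. Running the orbit backward is just as lawful as running it forward.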

But hold on, you might say, what about thermodynamics and entropy? My coffee always cools down while sitting on my desk, and if I drop my mug on the floor, I can't unshatter it. Doesn't that give time a unique direction? Not quite.

Thermodynamics is statistical in nature. Entropy will indeed tend to increase over time, but that's because there are many more possible disordered states than ordered ones. That's a bit of an oversimplification, but it's good enough for everyday life. If I toss a handful of sand in the air, the grains will almost certainly land on the ground in some random pattern. However, there is a vanishingly small chance that they land in a perfect circle. The odds are so tiny we will never see it happen, but it isn't impossible. Chaotic systems are nearly impossible to predict, but we could (in principle) predict them with enough information. Because of time symmetry, we could also work back to the initial state of a chaotic system.
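
A quick, hedged illustration of why disordered outcomes dominate, using coin flips as a stand-in for the sand grains (the numbers and code are illustrative, not from the article):

```python
from math import comb

# Toy model: N coin flips stand in for the sand grains. An "ordered" outcome
# (all heads or all tails) corresponds to only 2 of the 2**N equally likely
# sequences, while a "typical" disordered outcome (about half heads) is backed
# by an astronomically larger number of sequences. That imbalance is all the
# second law needs.
N = 100
total = 2 ** N
ordered = 2                       # all heads, all tails
half_and_half = comb(N, N // 2)   # sequences with exactly 50 heads

print(f"total sequences:               {total:.3e}")
print(f"ordered sequences:             {ordered}")
print(f"sequences with 50 heads:       {half_and_half:.3e}")
print(f"probability of an ordered one: {ordered / total:.1e}")
```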

This is known as retrodiction. It is the ability to predict past events from current ones, and it lies at the heart of fundamental physics. One thing we've learned about quantum physics, classical physics, and thermodynamics is that they all come down to information. The state of any system contains all the information you need to predict what will happen next. This means that information is a conserved quantity, and, like energy, can't be created or destroyed.

At least that's what we think. One of the big unanswered questions is whether or not conservation of information applies to black holes. If I toss my personal diary into a black hole, it is lost to me. Once it crosses the event horizon, the diary can never escape. Does that mean my deepest secrets are forever safe? This information paradox has huge implications for quantum gravity, but that's a story for another time.

But could retrodiction fail even without invoking event horizons or quantum physics? Since classical physics is deterministic, retrodiction should always be possible. But a new study argues against that idea.

In this work, the team ran computer models of three massive black holes in a gravitational dance. With each simulation, they shifted the initial positions of the black holes to see how similar or different their motions were over time. This kind of three-body problem is a classic example of a chaotic system. There is no general exact solution for three-body problems, so it's a great way to study how predictable a system might be.

As you might expect, when you vary the initial conditions you can get very different results. Small differences lead to large ones over time. We've known this about chaotic systems for decades. But the team found that the tiniest shifts can lead to big variations. When they made the shifts as small as a Planck length, most of the simulations remained consistent, but about 5% of them still varied widely.
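
The sketch below is a minimal illustration of that kind of numerical experiment, not the authors' actual code: integrate three gravitating bodies twice, with one starting coordinate nudged by a tiny amount, and compare where they end up. The units, masses, time step, and size of the nudge are all arbitrary choices here.

```python
import numpy as np

G = 1.0          # gravitational constant in arbitrary code units
SOFTENING = 1e-6 # crude guard against divergent forces during close encounters

def accelerations(pos, mass):
    """Newtonian gravitational acceleration on each body from all the others."""
    acc = np.zeros_like(pos)
    for i in range(len(mass)):
        for j in range(len(mass)):
            if i != j:
                r = pos[j] - pos[i]
                d2 = np.dot(r, r) + SOFTENING ** 2
                acc[i] += G * mass[j] * r / d2 ** 1.5
    return acc

def integrate(pos, vel, mass, dt=1e-3, steps=20_000):
    """Leapfrog (kick-drift-kick) integration; returns the final positions."""
    pos, vel = pos.copy(), vel.copy()
    acc = accelerations(pos, mass)
    for _ in range(steps):
        vel += 0.5 * dt * acc
        pos += dt * vel
        acc = accelerations(pos, mass)
        vel += 0.5 * dt * acc
    return pos

# Three equal masses released from rest: a classic chaotic configuration.
mass = np.array([1.0, 1.0, 1.0])
pos0 = np.array([[0.0, 0.0], [1.0, 0.0], [0.4, 0.9]])
vel0 = np.zeros_like(pos0)

# Second run: shift one coordinate by a tiny epsilon (standing in for the
# Planck-length-scale displacements used in the study) and re-integrate.
eps = 1e-12
pos0_shifted = pos0.copy()
pos0_shifted[0, 0] += eps

final_a = integrate(pos0, vel0, mass)
final_b = integrate(pos0_shifted, vel0, mass)
print("final separation between the two runs:",
      np.linalg.norm(final_a - final_b))
```

For chaotic initial conditions the printed separation grows far beyond the original nudge; for regular ones it stays comparable to it, which is the distinction the study quantifies at far higher precision.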

This is interesting because a Planck length (about 1.6 x 10^-35 meters) is roughly the limit of scale for quantum systems. Smaller than that and known laws of physics break down. Since the bodies in the model are large black holes, this isn't some effect of quantum uncertainty. It also isn't some uncertainty in their simulation. The unpredictability of this three-body system seems to be intrinsic.

So we can't always predict the future. What else is new? But since the laws of gravity are time-reversible, this also means that for some systems we can't know their origin. Not even in principle.

Before you think this throws all of science out the window, keep in mind that this is about the limit of what can be known. We can still understand the history of the universe by what we see today. But this could mean that information isn't always conserved, even in a simple classical system. If that's true, it could change the way we understand the most fundamental essence of physics.

Reference: Boekholt, T. C. N., S. F. Portegies Zwart, and Mauri Valtonen. Gargantuan chaotic gravitational three-body systems and their irreversibility to the Planck length.

Researchers move closer to faster-charging Li-ion batteries; real-time tracking of Li ions in LTO – Green Car Congress

A team of scientists led by the US Department of Energy's (DOE) Brookhaven National Laboratory and Lawrence Berkeley National Laboratory has captured in real time how lithium ions move in lithium titanate (LTO), a fast-charging battery electrode material made of lithium, titanium, and oxygen. They discovered that distorted arrangements of lithium and surrounding atoms in LTO intermediates (structures of LTO with a lithium concentration in between that of its initial and end states) provide an express lane for the transport of lithium ions.

Their discovery, reported in Science, could provide insights into designing improved battery materials for the rapid charging of electric vehicles and portable consumer electronics such as cell phones and laptops.

"Consider that it only takes a few minutes to fill up the gas tank of a car but a few hours to charge the battery of an electric vehicle. Figuring out how to make lithium ions move faster in electrode materials is a big deal, as it may help us build better batteries with greatly reduced charging time."

co-corresponding author Feng Wang, a materials scientist in Brookhaven Lab's Interdisciplinary Sciences Department

Graphite is commonly employed as the anode in state-of-the-art lithium-ion batteries, but for fast-charging applications, LTO is an appealing alternative. LTO can accommodate lithium ions rapidly, without suffering from lithium plating (the deposition of lithium on the electrode surface instead of internally).

As LTO accommodates lithium, it transforms from its original phase (Li4Ti5O12) to an end phase (Li7Ti5O12), both of which have poor lithium conductivity. Thus, scientists have been puzzled as to how LTO can be a fast-charging electrode.

Reconciling this seeming paradox requires knowledge of how lithium ions diffuse in intermediate structures of LTO (those with a lithium concentration in between that of Li4Ti5O12 and Li7Ti5O12), rather than a static picture derived solely from the initial and end phases. However, performing such characterization is a non-trivial task. Lithium ions are light, making them elusive to traditional electron- or x-ray-based probing techniques, especially when the ions are shuffling rapidly within active materials, such as LTO nanoparticles in an operating battery electrode.

In this study, the scientists were able to track the migration of lithium ions in LTO nanoparticles in real time by designing an electrochemical cell to operate inside a transmission electron microscope (TEM). This electrochemical cell enabled the team to conduct electron energy-loss spectroscopy (EELS) during battery charge and discharge.

In EELS, the change in energy of electrons after they have interacted with a sample is measured to reveal information about the sample's local chemical states. In addition to being highly sensitive to lithium ions, EELS, when carried out inside a TEM, provides the high resolution in both space and time needed to capture ion transport in nanoparticles.

A schematic of the mini electrochemical cell that the scientists built to chase lithium ions (orange) moving in the lattice of LTO (blue).

"The team tackled a multifold challenge in developing the electrochemically functional cell: making the cell cycle like a regular battery while ensuring it was small enough to fit into the millimeter-sized sample space of the TEM column. To measure the EELS signals from the lithium, a very thin sample is needed, beyond what is normally required for the transparency of probing electrons in TEMs."

co-author and senior scientist Yimei Zhu, who leads the Electron Microscopy and Nanostructure Group in Brookhaven's Condensed Matter Physics and Materials Science (CMPMS) Division

The resulting EELS spectra contained information about the occupancy and local environment of lithium at various states of LTO as charge and discharge progressed.

To decipher the information, scientists from the Computational and Experimental Design of Emerging Materials Research (CEDER) group at Berkeley and the Center for Functional Nanomaterials (CFN) at Brookhaven simulated the spectra. On the basis of these simulations, they determined the arrangements of atoms from among thousands of possibilities. To determine the impact of the local structure on ion transport, the CEDER group calculated the energy barriers of lithium-ion migration in LTO, using methods based on quantum mechanics.

"Computational modeling was very important to understand how lithium can move so fast through this material. As the material takes up lithium, the atomic arrangement becomes very complex and difficult to conceptualize with simple transport ideas. Computations were able to confirm that the crowding of lithium ions together makes them highly mobile."

co-corresponding author and CEDER group leader Gerbrand Ceder, Chancellor's Professor in the Department of Materials Science and Engineering at UC Berkeley and a senior faculty scientist in the Materials Science Division at Berkeley Lab

The team's analysis revealed that LTO has metastable intermediate configurations in which the atoms are locally not in their usual arrangement. These local polyhedral distortions lower the energy barriers, providing a pathway through which lithium ions can quickly travel.

A movie of lithium ions quickly moving along easy pathways in intermediate configurations of LTO. Imagine the LTO lattice as a racecar obstacle course that the lithium ions have to navigate. In its original phase (Li4Ti5O12) and the end phase it transforms into to accommodate lithium ions (Li7Ti5O12), LTO has atomic configurations in which many obstacles are in the way. Thus, lithium ions must travel slowly through the obstacle course. But in intermediate configurations of LTO (such as the Li5+xTi5O12 shown in the movie), local distortions in the arrangement of atoms surrounding lithium occur along the boundary between these two phases. These distortions slightly shove the obstacles out of the way, giving rise to a fast lane for lithium ions to speed through.

"Unlike gas freely flowing into your car's gas tank, which is essentially an empty container, lithium needs to fight its way into LTO, which is not a completely open structure. To get lithium in, LTO transforms from one structure to another. Typically, such a two-phase transformation takes time, limiting the fast-charging capability. However, in this case, lithium is accommodated more quickly than expected because local distortions in the atomic structure of LTO create more open space through which lithium can easily pass. These highly conductive pathways happen at the abundant boundaries existing between the two phases."

Feng Wang

Next, the scientists will explore the limitations of LTO, such as heat generation and capacity loss associated with cycling at high rates, for real applications. By examining how LTO behaves after repeatedly absorbing and releasing lithium at varying cycling rates, they hope to find remedies for these issues. This knowledge will inform the development of practically viable electrode materials for fast-charging batteries.

"We look forward to examining transport behaviors in fast-charging electrodes more closely by fitting our newly developed electrochemical cell to the powerful electron and x-ray microscopes at Brookhaven's CFN and National Synchrotron Light Source II (NSLS-II). By leveraging these state-of-the-art tools, we will be able to gain a complete view of lithium transport in the local and bulk structures of the samples during cycling in real time and under real-world reaction conditions."

Feng Wang

The scientists carried out the experimental work using TEM facilities in Brookhaven's CMPMS Division and the theoretical work using computational resources of Brookhaven's Scientific Data and Computing Center (part of the Computational Science Initiative). Both CFN and NSLS-II are DOE Office of Science User Facilities. The research was supported by the Laboratory Directed Research and Development program at Brookhaven; DOE's Office of Energy Efficiency and Renewable Energy (EERE), Vehicle Technologies Office (VTO), under the Advanced Battery Materials Research program; DOE's Offices of Basic Energy Sciences and EERE VTO; and the National Science Foundation.

Resources

Wei Zhang, Dong-Hwa Seo, Tina Chen, Lijun Wu, Mehmet Topsakal, Yimei Zhu, Deyu Lu, Gerbrand Ceder, Feng Wang (2020) "Kinetic pathways of ionic transport in fast-charging lithium titanate." Science, Vol. 367, Issue 6481, pp. 1030-1034. doi: 10.1126/science.aax3520

How Can Spontaneous Spin Polarization be Observed in Different Nanomaterials? – AZoNano

There are several ways spin polarization can be observed in nanomaterials, though until recent years spontaneous spin polarization in such systems was widely considered impossible. Spin polarization remains essential to understanding and applying spintronics, but scientists are regularly surprised by some of the seemingly random effects that occur during experiments with different nanomaterials.

How and Why?

"Where does this spin polarization come from? The electrons are interacting with one another, and molybdenum disulfide also exhibits a very weak spin orbit coupling. These two factors presumably have a massive influence on the system.

Roch, J. via Phys.org 2019

A 1966 theorem, which assumed the absence of spin-orbit interaction, held that spontaneous spin polarization should not occur in such systems, making its discovery an interesting puzzle for many in the scientific community.

Before explaining how it can be observed in different nanomaterials, it is important to understand the benefits of spontaneous spin polarization and why many are excited about it.

There are two significant benefits.

In 2016, scientists presented an experimental study of narrow-line dysprosium magneto-optical traps (MOTs). The study revealed that, by combining radiation pressure and gravitational force, spontaneous polarization of the particles' electronic spin occurs.

The spin was measured using a Stern-Gerlach separation of spin levels. This revealed that the gas becomes almost entirely spin-polarized at large laser frequency detunings. These detunings correspond to the optimal operation of the MOT, with samples of typically 3 x 10^8 atoms. The spin polarization reduced the complexity of the description of radiative cooling, allowing a simple model with which scientists could take measurements.

It would be hard to talk about spontaneous spin polarization without mentioning graphene, one of the most widely used and sought-after nanomaterials. The zigzag atomic structure at the edges of graphene produces a flat energy band. Because electrons in a flat band have an effectively infinite mass, they localize at the edges, where their density is highest. There, spontaneous polarization can be observed because of the mutual Coulomb interaction, even though the material consists only of carbon atoms with sp2 bonds.
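
The link between a flat band and an effectively infinite electron mass follows from the textbook definition of the band effective mass (a standard relation, not something taken from the cited studies):

\[
\frac{1}{m^{*}} \;=\; \frac{1}{\hbar^{2}}\,\frac{d^{2}E(k)}{dk^{2}},
\]

so for a flat band, where E(k) is essentially constant and its curvature goes to zero, m* diverges: the electrons cannot propagate and instead pile up in the edge states, where interactions dominate.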

Physicists from the University of Basel discovered one of the most interesting examples of spontaneous spin polarization in their studies of two-dimensional nanomaterials. Despite the 1966 theorem mentioned above, which implies that spontaneous spin polarization cannot occur in two-dimensional materials, the Basel physicists were able to demonstrate spin alignment of free electrons in a 2D nanomaterial.

Doctoral students Jonas Roch and Nadine Leisgang made the discovery using a single layer of molybdenum disulfide (MoS2). Filling the MoS2 layer with free electrons, they then exposed it to a weak magnetic field. This caused the spins of all the free electrons to align in the same direction. They also discovered that the spins could be switched to the other direction by merely reversing the magnetic field.

These and other observations of spontaneous spin polarization demonstrate exciting possibilities for our future understanding and application of nanomaterials.

Spontaneous spin polarization in two-dimensional material. Phys.org, 2019. Available at: https://phys.org/news/2019-03-spontaneous-polarization-two-dimensional-material.html

Spontaneous spin polarization demonstrated in a two-dimensional material. Nanowerk, 2019. Available at: https://www.nanowerk.com/nanotechnology-news2/newsid=52334.php

Spontaneous spin polarization and spin pumping effect on edges of graphene antidot lattices. Physica Status Solidi, 2012. Available at: https://onlinelibrary.wiley.com/doi/abs/10.1002/pssb.201200042

Spontaneous Spin Polarization and Electronic States in Platinum Nano-Particle. New Trends in Superconductivity, 2002. Available at: https://www.researchgate.net/publication/300824288_Spontaneous_Spin_Polarization_and_Electronic_States_in_Platinum_Nano-Particle

Spin polarization and quantum spins in Au nanoparticles. International Journal of Molecular Sciences, 2013. Available at: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3794745/

Scientists Measured An Exact Moment In Quantum Time – And It Was Fuzzy – Forbes

The nature of time is one of the most intriguing questions in modern physics and philosophy. One of the reasons why is that the world we observe (often referred to as classical) differs from what really exists on very small scales - the weird, surprising world of quantum physics. Recently, a team of researchers from Sweden, Spain, and Germany designed a clever experiment to watch how quantum systems become classical systems - and in the process, found that time is not as precise as we thought it was.

At its most fundamental level, time is fuzzy.

Let's pretend that your car is acting like an electron. When you look down at your speedometer, instead of having one needle pointing to 40 miles per hour, you would have a highlighted bar letting you know your car was going somewhere between 20 and 60 miles per hour. When you look up, you wouldn't even know what lane you were in.

That would all change when a police officer points her radar gun at you and measures you to be going precisely 40 miles an hour in the right-hand lane.

This is how reality works on very very small scales. Particles like electrons are not in any one position with any one energy. Instead, they are simultaneously in many positions with many energies at once - something called a superposition. This is the case until something - or someone - observes that tiny electron. At that moment, the electron picks a state to be in, illustrating that the observer is a fundamental part of this universe.
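
In standard notation (textbook material, not drawn from the Forbes piece), a two-outcome superposition is written

\[
|\psi\rangle \;=\; \alpha\,|A\rangle + \beta\,|B\rangle,
\qquad |\alpha|^{2} + |\beta|^{2} = 1,
\]

and a measurement finds the system in state A with probability |alpha|^2 or in state B with probability |beta|^2; before the measurement, neither outcome is yet the case.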

The authors of the paper, appearing in Physical Review Letters, wanted to see an electron in the act of making up its mind. They took small snapshots of a strontium ion held in an electric field. The electrons within the ion were at first still in their quantum state - their reality a smudging of probabilities across various orbital states.

Electrons in orbit around an atom aren't in any one state until they are measured.

The scientists then made their observation by poking the atom with a laser, forcing the electrons to decide what orbit they would occupy.

The scientists found it took the electrons some time to make up their minds.

By taking photographs to see what happens in that one-millionth of a second, scientists saw that the decision, a process referred to as wave function collapse, took some time to happen. It's like when the police officer points her radar gun at you - your car is first going somewhere between 20 and 60 miles an hour, then between 35 and 50, then either 40, 42, or 45, then finally deciding it is going 40 miles an hour.

The intriguing results show that quantum collapse is not instantaneous. They also show us how time operates on the quantum level - and that time itself may be a blurry, abstract concept. They suggest that our concept of "now" may not really exist, and that our reality is a very weird place indeed.

"Quantum Death": Human Cells Carry Quantum Information That Exists as a Soul (Weekend Feature) – The Daily Galaxy – Great Discoveries Channel

"The physical universe that we live in is only our perception and once our physical bodies die, there is an infinite beyond. Some believe that consciousness travels to parallel universes after death. The beyond is an infinite reality that is much bigger, in which this world is rooted. In this way, our lives in this plane of existence are encompassed, surrounded, by the afterworld already. The body dies but the spiritual quantum field continues. In this way, I am immortal," suggest researchers from the Max Planck Institute for Physics in Munich.

The Max Planck physicists are in agreement with British physicist Sir Roger Penrose, who argues that if a person temporarily dies, this quantum information is released from the microtubules and into the universe. However, if they are resuscitated the quantum information is channeled back into the microtubules and that is what sparks a near death experience. If they're not revived, and the patient dies, it's possible that this quantum information can exist outside the body, perhaps indefinitely, as a soul.

Steve Paulson, writing for Nautil.us, describes the 88-year-old Penrose's theory as an audacious, and quite possibly crackpot, theory about the quantum origins of consciousness. He believes we must go beyond neuroscience and into the mysterious world of quantum mechanics to explain our rich mental life. No one quite knows what to make of this theory, developed with the American anesthesiologist Stuart Hameroff, but conventional wisdom goes something like this: Their theory is almost certainly wrong, but since Penrose is so brilliant ("One of the very few people I've met in my life who, without reservation, I call a genius," physicist Lee Smolin has said), we'd be foolish to dismiss their theory out of hand.

While scientists are still in heated debates about what exactly consciousness is, the University of Arizona's Hameroff and Penrose conclude that it is information stored at a quantum level. Penrose agrees he and his team have found evidence that protein-based microtubules, a structural component of human cells, carry quantum information: information stored at a sub-atomic level.

It was Hameroff's idea, writes Paulson, that quantum coherence happens in microtubules, protein structures inside the brain's neurons. And what are microtubules, you ask? They are tubular structures inside eukaryotic cells (part of the cytoskeleton) that play a role in determining the cell's shape, as well as its movements, which includes cell division (the separation of chromosomes during mitosis). Hameroff suggests that microtubules are the quantum device that Penrose had been looking for in his theory. In neurons, microtubules help control the strength of synaptic connections, and their tube-like shape might protect them from the surrounding noise of the larger neuron. The microtubules' symmetry and lattice structure are of particular interest to Penrose. He believes this reeks of something quantum mechanical.

"Somehow, our consciousness is the reason the universe is here," Penrose told Paulson during an interview. There's intelligent life, or consciousness, somewhere else in the cosmos, Penrose added, but it may be extremely rare. But if consciousness is the point of this whole shebang, wouldn't you expect to find some evidence of it beyond Earth, Paulson asked? "Well, I'm not so sure our own universe is that favorably disposed toward consciousness," Penrose replied.

In Beyond Biocentrism: Rethinking Time, Space, Consciousness, and the Illusion of Death, Robert Lanza asks: does the soul exist? The new scientific theory he propounds says we're immortal and exist outside of time. Biocentrism postulates that space and time are not the hard objects we think they are. Death does not exist in a timeless, spaceless world. His new scientific theory suggests that death is not the terminal event we think it is.

There are an infinite number of universes, and everything that could possibly happen occurs in some universe. Death does not exist in any real sense in these scenarios. All possible universes exist simultaneously, regardless of what happens in any of them. Although individual bodies are destined to self-destruct, the alive feeling, the "Who am I?", is just a 20-watt fountain of energy operating in the brain. But this energy doesn't go away at death. One of the surest axioms of science is that energy never dies; it can neither be created nor destroyed. But does this energy transcend from one world to the other?

The Daily Galaxy, Max Goldberg, via Nautil.us, Robert Lanza and Sunday Guardian Live

The Mother of Invention (March 9, 2005) – Anderson Valley

"I don't know if we're more religious today," says Ken Bingman, who has taught biology in Kansas City public schools for 42 years, "but I see more and more students who want a link to God."

While religion certainly looks to be on the upswing in the United States, there's a lot more to the resurgence of creationism than a rising tide of religious fervor. Received wisdom counsels little more than continued resistance against the Bible thumpers at the gates. Daniel Dennett, author of Darwin's Dangerous Idea, is too busy excoriating creationists and scientific fellow-travelers to notice that the dominant biological theory of the day is inadvertently encouraging the creationist revival. The chief threat to Darwinian evolution is none other than neo-Darwinian evolution. As conceived by the German theorist August Weismann in the 1890s, neo-Darwinism shares fundamental features with creationism, not the least of which is reliance on blind faith rather than empirical fact. The creationist tide may never be stemmed until biology abandons Weismannian reductionism and returns to a more traditional Darwinian outlook.

Given the cultural atmosphere of his upbringing, Darwin could hardly have helped but absorb the lesson that an all-knowing, masculine deity commands the cosmos. At a time when science was still joined at the hip with religion, the modern prophets were Kepler, Galileo, Descartes, and Newton, whose mastery of mathematics gave them a communion of sorts with the Almighty, allowing them to receive the eternal equations supervising the operations of the universe. Newton's laws of motion were no less than God's thoughts.

As above, so below. After establishing the heavens, the cosmic Mechanic fashioned each species of life according to a design of His choosing. According to theologian William Paley, whose treatise Natural Theology had young Darwin temporarily hypnotized, an organism is no different in principle than a watch. Just as a watch cannot come into being without the painstaking efforts of a craftsman, organisms are mechanisms constructed and wound up by God and left to play out their allotted time on Earth.

But Darwin was a true naturalist. Guided by his intuitive sense of nature, he gradually outgrew Paley's notion of divine authority over obedient matter. The naturalistic materialism of his mature years represented a total repudiation of theological mechanism, substituting divine creation with the creativity inherent in nature. His new understanding was prefigured in part by the deist teachings of his grandfather, Erasmus, who alleged that after devising the cosmic machine, the deity left the mundane affairs of terrestrial existence to their own devices.

Erasmus had a streak of the pagan in him. Though outwardly a scientific rationalist, he found religion in nature if not the Bible. Exploring a cave, he didn't just find a bunch of rocks but glimpsed the Goddess of Minerals naked, as she lay in her inmost bower. The earth wasn't just a passive depository for God's will but Mother Earth, whose womb gave life and whose wrath in the form of floods, eruptions, and quakes could just as easily snuff it out. In contrast to the paternal principle of the heavens, the earth followed its own, darker, maternal principle.

Materialism is not so much a sophisticated modern philosophy as an ancient mythos that locates within the earth itself the source of life and its myriad forms. Etymologically, mother and matter are the same word, having evolved from the same Indo-European root. The materialist metaphysics signified by Mother Nature is not the reductionistic form we're accustomed to today, in which particles are mere playthings of eternal laws of physics, but an expansive materialism in which matter is endowed with its own creative and destructive powers.

In response to the 1859 debut of Darwin's theory of natural selection, Adam Sedgwick, an old-school geologist, accused the author of trying to render humanity independent of a Creator by breaking the chains that link all secondary causes to God's ultimate cause. Though Darwin's declaration of independence was a prerequisite to the scientific study of life, he was understandably anxious about turning his back on the Father. Caught in the pull of two opposing worldviews, he conceded that his theological opinions were hopelessly muddled. The same could be said of his views on physics and life.

The starting-point for the theory of descent with modification is not the equations of Kepler, Galileo, and Newton but the fecundity of living nature and the resulting struggle for existence in the face of finite resources. Though Darwin invoked the authority of natural law so as to eliminate the role of divine intervention in the creation of species, at the core of evolution is novelty, and by definition novelty is not pre-determined, either by God or physics. While pledging allegiance to Newton as final arbiter of everything under the sun, he set out on a course that would ultimately undermine physical determinism in biology.

"Throw up a handful of feathers," he says in The Origin of Species, "and all fall to the ground according to definite laws; but how simple is the problem where each shall fall compared to that of the action and reaction of the innumerable plants and animals which have determined, in the course of centuries, the proportional numbers and kinds of trees now growing on [Native American] ruins!" The mathematical abstractions of physics had little to offer when it came to either ecology or the internal dynamics of organisms. Dissenting from T. H. Huxley's notion of animal automatism, Darwin stressed the importance of individual will in shaping behavior and maintained that a complex system of cells, tissues, and organs can't function properly without a coordinating power that brings the parts into harmony with each other. Such talk has no place in a purely mechanistic program.

While today evolution is generally thought to result from the purely mechanical interplay of natural selection and genetic mutation, Darwin explicitly rejected this view, assigning only a marginal role to the spontaneous variations (mutations) arising from the germ-plasm (genome). The variations subject to natural selection did not emerge from the germ-plasm buried deep within the organism's cells but from its day-to-day struggle to survive in the face of competition and limited resources. Darwinian evolution is a model of clarity, elegance, and common sense: the adaptations made by organisms are transmitted to their progeny, and these adaptations become more ingrained and more pronounced with each passing generation until a new species emerges from the old.

Ordinarily written off as Lamarckian, this view is incidental to Lamarck's theory, according to which evolution, from the beginning, was divinely guided toward the emergence of Homo sapiens. As for the capacity of plants and animals to inherit traits taken up by their ancestors during their life-struggles, Darwin concurred: "I think there can be no doubt that use in our domestic animals has strengthened and enlarged certain parts, and disuse diminished them; and that such modifications are inherited." He cited examples of animals that clearly inherited traits from their ancestors, such as young shepherd dogs that know, without training, to avoid running at sheep. He explained that domesticated chickens have no fear of cats or dogs because their ancestors became accustomed to common pets and lost their fear of them. Ostriches can't fly because they inherited weak wing muscles and strong legs from their ancestors who learned to kick their enemies instead of taking flight. A similar effect in ducks "may be safely attributed to the domestic duck flying much less, and walking more, than its wild parents."

Darwin was skeptical of the notion that examples such as these, and there are literally countless more, could all result from genetic mutation. Why attribute a given trait to a mysterious and random process taking place in the depths of the body when there's a perfectly obvious explanation involving the life-circumstances of ancestors? "Everyone knows that hard work thickens the epidermis on the hands; and when we hear that with infants, long before birth, the epidermis is thicker on the palms and the soles of the feet than on any other part of the body, we are naturally inclined to attribute this to the inherited effects of long-continued use or pressure."

The meaning of evolution is that species are not created so much as self-created in the act of living and adapting. Regarding the origin of sea mammals, Darwin writes, "A strictly terrestrial animal, by occasionally hunting for food in shallow water, then in streams or lakes, might at last be converted into an animal so thoroughly aquatic as to brave the open ocean." Due to the variability of bone structure in youth, newly acquired behaviors can gradually result in structural modifications, such as flat-fish that pushed their eye sockets a little further up their skulls with each passing generation. "The tendency to distortion would no doubt be increased through the principle of inheritance."

The key is that offspring inherit adaptations at the same age or younger than the age at which their parents originally made the adaptation. The alternative, that such changes result only from random genetic mutations, fails to explain the changes but merely surrenders the issue to chance. Rather than account for the fact that camels, which often have to kneel on sandy terrain, begin developing padded tissue on their knees while still in the womb, we simply say that it happened by chance, and this explanation repeats for every species on Earth in regard to any trait that might otherwise be attributed to the living adaptations of organisms in their struggle to survive.

Finding this prospect intolerable, Darwin insisted on the centrality of the inheritance of adaptations, emphasizing that the young play a central role in this process. "For if each part [of the body] is liable to individual variations at all ages, and the variations tend to be inherited at a corresponding or earlier age, propositions which cannot be disputed, then the instincts and structure of the young could be slowly modified as surely as those of the adult; and both cases must stand or fall together with the whole theory of natural selection." The primary source of variations to be selected or rejected is the will of the organism to survive and reproduce.

But what if Darwin was wrong? He certainly stumbled with his fanciful theory of pangenesis, whereby each cell sloughs off tiny gemmules that reflect changes occurring in the body and transmit those changes to the reproductive organs. Pangenesis was intended to provide a mechanism enabling adaptations to be passed along to the next generation. According to Neal Gillespie, Darwin's theory assured him that a capricious deity could be excluded from the process of heredity as well as from speciation. Unfortunately, another capricious deity, DNA, eventually took its place.

August Weismann was absolutely correct when he concluded that organisms cannot affect the determinants (genes) in their reproductive cells. If genes are the sole vehicle of hereditary information, as Weismann assumed, then acquired characteristics cannot be inherited, and Darwinian evolution, with its typically English sentimentalism, must yield to a more precise, mechanistic form.

But Weismann was very clear that his theory was not based on evidence and could never be tested. Cutting off the tails of mice and finding that their offspring still had tails proved nothing, as Weismann himself readily admitted. Though he claimed his argument was ironclad, he offered nothing to support it beyond the fact that he simply couldn't imagine how hereditary information could be transferred by any means other than the passage of genes from parents to offspring. "We accept it, not because we are able to demonstrate the process in detail but simply because we must, because it is the only possible explanation that we can conceive." As neo-Darwinist Richard Dawkins likes to point out, the inability of creationists to imagine how the species of life could have emerged without God's help does not make creationism a scientific theory. What he fails to realize is that his argument applies with equal force to his own favored view.

Darwinian evolution can be expressed as a form of local creationism. Rather than products of a universal creator, species are shaped by their pragmatic adjustments to local environments. Thus, by emphasizing that evolution boils down to the purely mechanical interaction of genes and environment, neo-Darwinism reverses Darwin's innovation and restores the creation of species to universal causes. Whether theological or mathematical, mechanistic determinism is universal creationism.

As Darwin observed on the Pacific islands, it's no accident that frogs, which can't survive seawater, are found only on the islands where they evolved, whereas birds, which can fly from one island to the next, are found everywhere. When confronted with this fact, a creationist might say, "It pleased the Creator to place those frogs on some islands and not others." Of course, this fails to explain the situation but merely restates the facts. Similarly, the neo-Darwinian reliance on genetic mutation as the source of heritable variations merely restates the fact that a transformation has taken place and that it has become biologically ingrained within the species.

Neo-Darwinism shares many features with creationism. First, it is faith-based and untestable. It simply must be true. Second, it is universalist: the source of species is not local conditions and creative adaptations but transcendent principles that merely manifest locally. Third, like the exhortation that "God did it," neo-Darwinism makes use of a generic, all-purpose explanation instead of tailoring its account to particular situations faced by particular organisms. Fourth, it is anthropomorphic. In place of a human-like God, a human-like language or code inscribed in DNA is responsible for shaping organisms. Fifth, it is mechanistic: we are machines assembled according to a blueprint or design. Whether this design is a soul crafted by God or a genome forged beneath the blind forces of mutation and natural selection, the body is a mechanism constructed from specifications of one sort or another. Finally, as with creationism, the power of speciation is appropriated from the species themselves and refashioned as an external, mechanical process.

The shift from Darwinism to neo-Darwinism is pure atavism, a reversion to the transcendent determinism previously found only in creationist dogma. The law-giver may have been airbrushed out, but the law remains. Trouble is, with its cosmic Mechanic, creationism is clearly the strong form of mechanism, while neo-Darwinism though obviously much closer to the truth must remain the weak form. In the struggle between intelligent design and blind design, is it any wonder that creationism has proved so resilient?

Since the 1972 publication of Jacques Monod's Chance and Necessity, the mechanistic theory of life has been known as reductionism. But what, precisely, is life being reduced to? Though often mistaken for the monistic doctrine of materialism, reductionism is a dualistic theory that reduces life not to matter but to physics. We have, on the one hand, the passive material constituents of the organism; on the other, the laws of physics that provide order and necessity to the otherwise chance motions of atoms and molecules.

According to Stephen Rothman, a professor at UC San Francisco and an experimental biologist for 40 years, reductionistic bias has severely impaired the ability of researchers to accurately assess the operations of cells and bodies. Rothman offers the vesicle theory of protein transport as an example of the reductionistic approach at work. The vesicle theory is stupendously unwieldy and implausible, requiring 15 to 30 mechanisms to move proteins a few microns. None of the experiments cited in support of the theory can prove that these mechanisms actually exist but only what they would look like if they did. Proponents have never put their theory to the test, never saying, "If the theory is true, then such and such should happen." Yet they remain implacably confident in themselves. Why? Because their supposition is the only way to account for the movement of protein on the view that cellular activities are completely lost without the guidance of physical and chemical principles.

Since preparation of cell samples for viewing in electron microscopes inevitably distorts the final image, some proteins appear where they're supposed to be, while others are phantoms. The resulting confusion allows reductionist researchers to interpret all experimental results in their favor. Thus, if a protein appears where the vesicle theory predicts, it's assumed to be in the correct place, and if not, it's simply written off as a contaminant. As to predicted proteins that don't show up at all, these are assumed to have been lost in the sample preparation process.

Much like the automobile, a soothingly familiar mechanism in our daily lives, a vesicle is supposed to open up to allow proteins to enter it, then shut tight during transport and re-open upon reaching its destination. In the 1960s, when Rothman demonstrated that proteins can freely enter and exit a vesicle even when it's shut, most of his colleagues assumed his finding was flawed due to errors in sample preparation. In the 1980s, when the brand-new x-ray microscope proved him right, Rothman figured the vesicle proponents would admit their mistake. He's still waiting. It seems that no amount of evidence, no matter how compelling, can falsify the vesicle theory.

A self-proclaimed biological skeptic, Rothman is not the first to call into question the final authority of physics over biology. Ernst Mayr noted that the property of individuality, which is utterly foreign to atomic physics and chemistry, places biology beyond the grasp of physical analysis. Though the late Mayr helped bring neo-Darwinian theory to fruition in the 1930s and '40s with the modern synthesis of natural selection and Mendelian genetics, he was dismissive of efforts at physical reductionism. "Attempts to reduce biological systems to the level of simple physico-chemical processes have failed because during the reduction the systems lost their specifically biological properties."

According to Niels Bohr, the first of the quantum generation to investigate the potential for a physics of life, a rigorous analysis of a cell would require knowing the initial values and positions of its constituent particles. Since measuring these particles disturbs them by breaking or dislocating bonds between them, it's impossible to measure precisely the parts of a cell without altering it. Bohr compared this conundrum to his prior discovery that the momentum of an electron cannot be established once its position has been determined, and vice versa. Bohr called this complementarity, a principle he generalized to encompass all sufficiently complex systems, including cells and organisms. The more precisely we describe the parts, the cloudier the system as a whole becomes. Just as the quantum realm requires its own set of principles apart from classical physics, life, he concluded, is a primary phenomenon not subject to prior forms of analysis.

In 1944, the same year DNA was identified as the carrier of genes, Erwin Schrödinger published a short book called What Is Life? Taking a somewhat rosier view than his Danish colleague, Schrödinger proclaimed that the inability of current physics to account for life is no reason to doubt the eventual success of the project. The only catch is that a successful resolution will depend on "other laws of physics hitherto unknown." We have no idea what these laws might be or how to find them. All we know for sure, said Schrödinger, is that the ordering of living matter is entirely different from the physical processes described by statistical mechanics. Despite imploring the reader not to accuse him of calling genes cogs of the organic machine, Schrödinger is commonly cited to this day as a physicist who lent support to reductionistic biology.

To date, the most sustained, in-depth examination of biology by a physicist was carried out by Walter Elsasser, another pioneer in quantum mechanics who later turned to geophysics and proposed, against great opposition, what eventually became the definitive theory of the earth's magnetic field. Intrigued by the challenge of explaining organisms from a physical standpoint, Elsasser approached the issue in terms of precise, point-to-point predictability of every step in a reaction chain that is both necessary and sufficient for a particular biological outcome. Yet this method, he discovered, has no applicability to organisms.

Quantum mechanics, the foundation of modern physics and the most thoroughly tested and successful theory of all time, is a statistical science, explaining the behavior of particles en masse rather than one quark at a time. What makes quantum mechanics a viable undertaking is that every particle of a given class is identical to every other particle of that class. As long as every proton is identical to every other proton, and every electron is identical to every other electron, etc., the averages obtained for a given class apply equally to every member within it.

By contrast, life is characterized by individuality, or radical heterogeneity, in Elsasser's phrase. Macromolecules, organelles, cells, tissues, organs and organisms are never identical to other members of their class (not even in the case of identical twins). We are individualized right down to the chemistry of our blood and saliva. As a result, when it comes to living matter, averages don't apply equally to all members of a given class. Individuality short-circuits the statistical methods of quantum physics, rendering inoperative the differential equations that determine ordinary physical processes. Physics is simply not equipped to bridge the gap between the homogeneous safety of atoms and the heterogeneous stew of organisms.

As we learn from Ludwig Boltzmann and the science of thermodynamics, physics can predict the motions of a cloud of gas taken as a whole but not the particles comprising it. So too, the interior of a cell consists primarily of free particles not subject to deterministic equations. The orderly processes that take place within cells are set against a backdrop of atomic and molecular randomness. With a trillion atoms per cell, many of them multi-bonding carbon, the number of possible molecular states compatible with the shapes and functions of a cell is far too great to yield to the yoke of mathematical physics. Though the patterned regularities of cells can be described in great detail, the ultimate origins of these processes are buried in unfathomable complexity. Elsasser declared biology a non-reductionistic science, fundamentally and qualitatively different from physical science.

Even if life really is reducible to physical principles, biological reductionism can be neither verified nor falsified and is thus not a theory in the scientific sense. Perhaps life emerged when God exhaled onto a lump of clay, but this too can never be proven or disproven.

Rather than accept that physicalist biology has no scientific meaning, reductionists settled on a jerry-rigged substitute theory based around genes. That life is a product of physics is taken on faith while the multi-level ordering of the organism is attributed to DNA, which is charged with the dual task of storing morphological information and coordinating (via RNA and protein) development from egg to adult. In place of true physical reductionism, we have a stop-gap genetic reductionism. Yet even the watered-down physics of life is untenable.

By utilizing the mathematics of combinatorics, UC Berkeley biologist Harry Rubin has demonstrated that the precise combination of genes required for the mold Aspergillus to produce penicillin is transcalculational, or beyond the computational capacity of any conceivable computer in a finite amount of time. With 1000 genes influencing penicillin production, and each gene having, at the very least, alternate wild and mutant states, the minimum number of possible gene combinations is 2 to the 1000th power, or roughly 10 to the 301st power. The magnitude of this number can be appreciated when we consider that there are only about 10 to the 80th particles in the observable universe. Yet the production of penicillin is a model of simplicity compared to the generation of the eye in the fruit fly Drosophila, which involves 10,000 genes. With two copies of each gene and multiple types of mutation for each, the number of possible combinations grows beyond our imaginative capacity. If organic structures really are built mechanically from genetic instructions, then genes must possess a magical power of computation.
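The arithmetic is easy to check for yourself. The short Python sketch below verifies the orders of magnitude quoted above using arbitrary-precision integers; it is an illustrative aside, not part of Rubin's analysis.

```python
# Quick sanity check of the combinatorics quoted above, using Python's
# arbitrary-precision integers (illustrative only, not Rubin's actual calculation).
combinations = 2 ** 1000          # two states (wild/mutant) for each of 1000 genes
particles_in_universe = 10 ** 80  # commonly cited order-of-magnitude estimate

print(len(str(combinations)))     # 302 digits, i.e. roughly 10 to the 301st power
print(combinations > particles_in_universe ** 3)  # True: it dwarfs even (10^80) cubed
```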

The Boltzmann theorem, which limits deterministic equations to statistical aggregates of molecular events, poses an insurmountable problem for genetic reductionism. Whether in a gas cloud or a living cell, a free molecule's behavior is always unique and nonrecurrent. Between the genes in the nucleus and the tissues and organs they allegedly determine lies an ocean of chaos called the cytoplasm. Deterministic processes, such as enzyme-driven reactions, are like rafts tossed about on giant waves in the vast cytoplasmic outback, every causal chain bound by a terminal point beyond which nothing can be predicted. Even if genes could miraculously express their inner blueprint, this information would quickly be swamped by the molecular pandemonium. In contrast to computers, which are designed so as to maintain an acceptable signal-to-noise ratio, organisms have no means of insulating against noise, particularly inside cells.

Oddly enough, instead of compounding the underlying error of physical reductionism, the error of genetic reductionism seems to cancel it out. Under the spell of DNA and its four nucleotide letters, we can't see that the ground has dropped out from beneath our feet, leaving neither reduction of organism to genome nor reduction of cell to physics. The endless stream of wordlets formed from the combinations of a, c, g, and t is a kind of incantation that keeps the mind frozen in reverential awe at the keepers of the keys and their magic code. The Human Genome Project, intended to explain the mystery of life, merely completed the catechism.

This is not to deny the numerous effects that genes have on organisms. But the fact that genes distinguish one individual from another means only that they influence development, not that they necessarily program and determine it every step of the way. That the gene-protein complex is necessary for the formation of organs and tissues doesn't mean it's sufficient. As embryologist Paul Weiss observed, it's a long way from determining eye color to actually building a pair of eyes. If genes determine multicellular structures, then why, asked Weiss, does embryogenesis begin indeterminately, differing from case to case, as if each embryo must improvise as it goes along? And why does organic form emerge top-down? Only when the body as a whole begins taking shape do the outlines of its organs emerge, and only then do cells begin conforming to characteristic types, exactly the opposite of what we would expect from a process driven from within the dark recesses of our cells. As to DNA replication and other mechanical operations within organisms, Weiss contended that rather than controlling the living system, organic mechanisms are tools utilized by the system in its quest to maintain large-scale order in the face of small-scale disorder.

This is what Darwin was getting at with his "coordinating power." The organism operates holistically, much like a magnetic field. It also adapts holistically. As Rothman points out, adaptive qualities belong to organisms, not genes. It's the organism as a whole that struggles to survive in the jungle or savanna, not genes tucked away in their cozy nuclear compartments. The question is not whether creatures pass on their living adaptations but how.

Toward the end of The Origin of Species, Darwin takes Leibniz to task for alleging that Newton introduced "occult qualities and miracles into philosophy" with his theory of gravity. As with Faraday's undulatory theory of light, which Darwin cites as a fine example of scientific detective-work, Newton's theory of gravity suggests that matter possesses unexpected properties that do not conform to our standard notion of matter, i.e., contact mechanics. We've known since Einstein that electromagnetism and gravity both allow matter to act at a distance without material mediation. Elsasser suggests that an unforeseen property of matter enables organisms to receive hereditary information from their ancestors at a distance over time. He calls this holistic memory, as opposed to the artificial, information-storage memory in computers.

The physics to which biology reduces itself is not the modern discipline of Einstein and quantum mechanics but the discredited variety that saw contact mechanics as the fundamental reality. Biologists today resemble theorists of the 19th century who still believed in a luminiferous aether that mediated the propagation of electromagnetic waves through space. As physicist James Croll averred in 1867, "No principle will ever be generally received that stands in opposition to the old adage, 'A thing cannot act where it is not,' any more than it would were it to stand in opposition to that other adage, 'A thing can not act before it is or when it is not.'" Having recognized that matter does indeed act where it is not, Elsasser began to wonder if it could also act when it is not.

Apart from allowing the transmission of acquired characteristics, holistic memory disposes of the need for a blueprint. Instead of following a pre-planned design, the embryo merely mimics the developmental steps of its predecessors. If all they must do is combine as they always have in a given situation, genes have no need for magical powers of computation. But this does not mean the behavior of organisms is reducible to a new kind of physical determinism based on holistic memory in place of contact mechanics. Between the randomness of molecular events and the necessity of physical law lies a probabilistic gray area in which an organism may choose to follow its memory or, if environmental conditions have changed sufficiently, to select a new course of action. By contrast, if every creature is deterministically bound to its species' memory, all the genetic mutations in the world cannot give rise to evolution. Elsasser's organismic selection is the logical counterpart to Darwin's natural selection.

Which option should cause us greater skepticism: that a human being is a robot constructed through blind forces of nature and operated by remote control from the nuclei of its cells, or that, once again, matter turns out to be more versatile than we'd previously imagined? Which is more plausible: that the memory of how to grow from an egg into an inconceivably complex living system is somehow encoded in our genes, or that nature has its own form of memory?

Darwin's theory of evolution is true to life precisely because it shifts the focus from the timeless abstractions of physics to the irreducible powers of creativity and destruction that play out day by day in the natural world. As he wrote in the famous final passage of Origin, "There is grandeur in this view of life, with its several powers," such as growth, reproduction, variability, the will to live, and natural selection. Though he (tentatively) believed in a Creator who set it all in motion according to fixed, universal laws, in order to comprehend the ever-changing face of life, Darwin turned to Mother Nature. Instead of attaching biology to physics and thereby subsuming it to the Father's mathematical idealism, he brought biology to life by animating it with a materialistic theory all its own.

As he observed in a letter to his friend, geologist Charles Lyell, it is absolutely necessary to "go the whole vast length, or stick to the creation of each separate species." It's about time the Darwinian revolution was completed. Contrary to Weismann, not only can we conceive of alternatives to reductionism, but we have no choice, as the ghost of mechanism past will continue to haunt us until we reject mechanistic biology in all its forms.

View original post here:

The Mother of Invention (March 9, 2005) - Anderson Valley

Researchers Develop a Machine Capable of Solving Complex Problems in Theoretical Physics – SciTechDaily

Over the last few decades, machine learning has revolutionized many sectors of society, with machines learning to drive cars, identify tumors and play chess, often surpassing their human counterparts.

Now, a team of scientists based at the Okinawa Institute of Science and Technology Graduate University (OIST), the University of Munich and the CNRS at the University of Bordeaux have shown that machines can also beat theoretical physicists at their own game, solving complex problems just as accurately as scientists, but considerably faster.

In the study, published in Physical Review B, a machine learned to identify unusual magnetic phases in a model of pyrochlore, a naturally occurring mineral with a tetrahedral lattice structure. Remarkably, when using the machine, solving the problem took only a few weeks, whereas previously the OIST scientists needed six years.

The pyrochlore crystal structure contains magnetic atoms, which are arranged to form a lattice of tetrahedral shapes, joined at each corner. Credit: Theory of Quantum Matter Unit, OIST

"This feels like a really significant step," said Professor Nic Shannon, who leads the Theory of Quantum Matter (TQM) Unit at OIST. "Computers are now able to carry out science in a very meaningful way and tackle problems that have long frustrated scientists."

The Source of Frustration

In all magnets, every atom is associated with a tiny magnetic moment, also known as spin. In conventional magnets, like the ones that stick to fridges, all the spins are ordered so that they point in the same direction, resulting in a strong magnetic field. This order is like the way atoms order in a solid material.

But just as matter can exist in different phases (solid, liquid and gas), so too can magnetic substances. The TQM unit is interested in more unusual magnetic phases called spin liquids, which could have uses in quantum computation. In spin liquids, there are competing, or frustrated, interactions between the spins, so instead of ordering, the spins continuously fluctuate in direction, similar to the disorder seen in liquid phases of matter.

Previously, the TQM unit set out to establish which different types of spin liquid could exist in frustrated pyrochlore magnets. They constructed a phase diagram, which showed how different phases could occur when the spins interacted in different ways as the temperature changed, with their findings published in Physical Review X in 2017.

The phase diagram produced by the Theory of Quantum Matter unit at OIST, showing all the different magnetic phases that exist in the simplest model on a pyrochlore lattice. Phases III, VI and V are spin liquids. Credit: Image reproduced with permission of the American Physical Society from Phys. Rev. X, 2017, 7, 041057

But piecing together the phase diagram and identifying the rules governing the interactions between spins in each phase was an arduous process.

"These magnets are quite literally frustrating," joked Prof. Shannon. "Even the simplest model on a pyrochlore lattice took our team years to solve."

Enter the machines

With increasing advances in machine learning, the TQM unit were curious as to whether machines could solve such a complex problem.

"To be honest, I was fairly sure that the machine would fail," said Prof. Shannon. "This is the first time I've been shocked by a result. I've been surprised, I've been happy, but never shocked."

The OIST scientists teamed up with machine learning experts from the University of Munich, led by Professor Lode Pollet, who had developed a tensorial kernel, a way of representing spin configurations in a computer. The scientists used the tensorial kernel to equip a support vector machine, which is able to categorize complex data into different groups.

"The advantage of this type of machine is that, unlike other support vector machines, it doesn't require any prior training and it isn't a black box; the results can be interpreted. The data are not only classified into groups; you can also interrogate the machine to see how it made its final decision and learn about the distinct properties of each group," said Dr. Ludovic Jaubert, a CNRS researcher at the University of Bordeaux.
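The tensorial-kernel machine described here is a specialized, interpretable variant that works without pre-assigned phase labels. As a rough illustration of the underlying idea only, here is a generic scikit-learn support vector machine classifying labeled toy "spin configurations" and exposing its decision function; the synthetic data and the polynomial kernel are placeholders, not the team's actual method or data.

```python
# Generic SVM sketch (scikit-learn): classify toy spin configurations into phases
# and inspect the decision function. Placeholder data and kernel; the actual study
# used a tensorial kernel and did not rely on pre-assigned phase labels.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_spins = 16
# Toy "configurations": phase 0 = random spins, phase 1 = nearly aligned spins
disordered = rng.choice([-1.0, 1.0], size=(200, n_spins))
ordered = np.ones((200, n_spins)) + 0.1 * rng.normal(size=(200, n_spins))
X = np.vstack([disordered, ordered])
y = np.array([0] * 200 + [1] * 200)

clf = SVC(kernel="poly", degree=2)  # quadratic kernel, loosely analogous to rank-2 tensorial features
clf.fit(X, y)

# The decision function can be interrogated, which is the interpretability point
# Jaubert makes above (here it is simply one number per sample).
print(clf.decision_function(X[:3]))
print(clf.score(X, y))
```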

The phase diagram reproduced by the machine. For comparison, the phase boundaries previously determined by the scientists without the machine have been drawn over the top. Credit: Image reproduced with permission of the American Physical Society from Phys. Rev. B, 2019, 100, 174408

The Munich scientists fed the machine a quarter of a million spin configurations generated by the OIST supercomputer simulations of the pyrochlore model. Without any information about which phases were present, the machine successfully managed to reproduce an identical version of the phase diagram.

Importantly, when the scientists deciphered the decision function which the machine had constructed to classify different types of spin liquid, they found that the computer had also independently figured out the exact mathematical equations that exemplified each phase, with the whole process taking a matter of weeks.

"Most of this time was human time, so further speed-ups are still possible," said Prof. Pollet. "Based on what we now know, the machine could solve the problem in a day."

"We are thrilled by the success of the machine, which could have huge implications for theoretical physics," added Prof. Shannon. "The next step will be to give the machine an even more difficult problem that humans haven't managed to solve yet, and see whether the machine can do better."

References:

"Identification of emergent constraints and hidden order in frustrated magnets using tensorial kernel methods of machine learning" by Jonas Greitemann, Ke Liu, Ludovic D. C. Jaubert, Han Yan, Nic Shannon and Lode Pollet, 5 November 2019, Physical Review B. DOI: 10.1103/PhysRevB.100.174408

"Competing Spin Liquids and Hidden Spin-Nematic Order in Spin Ice with Frustrated Transverse Exchange" by Mathieu Taillefumier, Owen Benton, Han Yan, L.D.C. Jaubert and Nic Shannon, 6 December 2017, Physical Review X. DOI: 10.1103/PhysRevX.7.041057

See the original post here:

Researchers Develop a Machine Capable of Solving Complex Problems in Theoretical Physics - SciTechDaily

Parallel University: The Big Ten Tournament – The Only Colors

In the world of quantum physics, there is a theory that all possible outcomes of any measurement or event actually take place in at least one universe. Every time a coin is flipped, in one universe it comes up heads, and in another, newly created universe, it comes up tails. As a result, if this interpretation of quantum mechanics is true, an infinite number of universes exist where all possible outcomes actually take place.

In our particular universe, life kinda sucks right now. A deadly pandemic virus is sweeping the globe, and as a result, one of our most beloved sports traditions, the NCAA Men's Basketball Tournament, has been cancelled. For me, this feels like Christmas got cancelled, and we were forced to take all of our presents into the front yard and set them on fire.

But, what if there is a universe out there where there is no COVID-19 virus? What if there is a universe where we are on the eve of Selection Sunday? Wouldn't it be fun just to take a peek into that universe to see how things are going? Well, maybe we can.

Throughout this season, I have given a running tally of the number of expected wins and the odds to win the Big Ten regular season title. I accomplished this, in part, by taking Kenpom adjusted efficiencies, converting them into a point spread for each game, converting those spreads into win probabilities, and then using those probabilities to run 120,000 Monte Carlo simulations of the remaining Big Ten season. I used a similar methodology just the other day to project the results of the now cancelled Big Ten Tournament.
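For the curious, here is a minimal Python sketch of that pipeline. The 11-point standard deviation used to turn a spread into a win probability is a common rule of thumb and an assumption here, as is the example spread; the actual KenPom-derived numbers are the ones described above.

```python
# Minimal Monte Carlo sketch of the spread-to-probability-to-simulation pipeline.
# The 11-point standard deviation and the example spread are illustrative
# assumptions, not the article's actual KenPom-derived figures.
import math
import random

def win_probability(point_spread, sigma=11.0):
    """Probability the favorite wins, given its projected margin (point spread)."""
    return 0.5 * (1.0 + math.erf(point_spread / (sigma * math.sqrt(2.0))))

def simulate_game(point_spread, n_sims=120_000):
    p = win_probability(point_spread)
    wins = sum(1 for _ in range(n_sims) if random.random() < p)
    return wins / n_sims

if __name__ == "__main__":
    spread = 4.5  # hypothetical projected margin for the favorite
    print(f"Win probability: {win_probability(spread):.3f}")
    print(f"Simulated win share over 120,000 runs: {simulate_game(spread):.3f}")
```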

If the many-worlds interpretation of quantum mechanics is correct, then each of those simulations represents what actually happened in some parallel universe out there. So, why not use the same method to see what one possible, but mathematically consistent, Big Ten Tournament would have looked like? Well, as it turns out, I did just that, and I am happy to share the results with you now.

No. 9-Michigan defeats No. 8-Rutgers: 76-61

No. 12-Minnesota defeats No. 5-Iowa: 74-64

No. 7-Ohio State defeats No. 10-Purdue: 70-61

No. 11-Indiana defeats No. 6-Penn State: 76-64

Michigan was favored over Rutgers, so that result is not a surprise, but Iowa and Penn State stumbled out of the blocks. This all but certainly guarantees Indiana a parallel universe bid to the Big Dance, but will it hurt Penn State's and Iowa's seeds? Also, Ohio State survived a late charge from Purdue and now faces MSU for the second time in under a week.

No. 9-Michigan defeats No. 1-Wisconsin: 70-64

No. 4-Illinois defeats No. 12-Minnesota: 77-57

No. 2-Michigan State defeats No. 7-Ohio State: 70-66

No. 3-Maryland defeats No. 11-Indiana: 77-59

Vegas had Michigan and Wisconsin as a pick'em, and the Badgers were due for a tough game. At the end of the day, their luck ran out. The same can be said for Minnesota and Indiana, who ran out of gas and got boat-raced. Ohio State showed more fight against MSU in the rematch, but in the end, Cash and Tillman were just too much. MSU survives and advances to the semis to face Maryland, who cruised against the Hoosiers.

No. 9-Michigan defeats No. 4- Illinois: 77-59

No. 2-Michigan State defeats No. 3-Maryland: 73-64

While Michigan was the lower seed, they were a two-point favorite over the Illini and managed to crush Illinois due to hot shooting from the outside. Meanwhile, MSU led the Terrapins wire-to-wire, and were buoyed by a 15-point outing from Gabe Brown.

This sets up a rematch of the 2019 and 2014 finals, both of which were won by the Spartans. Can MSU make it three in a row? (I have a very good feeling about it.) Either way, we will have to wait until just before the brackets are announced on Sunday evening in the parallel universe to find out.

Meanwhile, elsewhere in the parallel universe, other conference tournaments were going on as well. The parallel universe selection committee was watching these games very closely. I will now give a quick summary of how those came out:

(Note: I did not actually simulate the results of this final batch of tournaments. For seeding purposes, I will lean heavily on the aggregated results from the bracket matrix project. Even I have limits...)

In the parallel universe, Selection Sunday is still on schedule. Stay tuned on Sunday night to see where MSU winds up, and how mad at the committee we should or should not be.

Read more:

Parallel University: The Big Ten Tournament - The Only Colors

Does The Quantum Realm Really Exist? – World Atlas

In physics, the quantum realm is the scale at which quantum-mechanical effects become crucial when we study a system in isolation. It seems complicated, right? And it is: quantum physics can be extremely complicated. It is also hard to explain in more simple terms, but we can try to answer this question in a way that is as simple as possible.

The term quantum realm gained enormous popularity among people that are generally not interested in physics once it appeared in movies from the popular Marvel comic book franchise. Everyone started talking about the quantum realm and wondering if it was real. And the answer is not simple.

The answer to the question of whether the quantum realm exists can be both yes and no. It depends on what we consider the quantum realm to be. A realm that exists separately from our reality, which we would call the quantum realm, does not exist. At least not according to everything physics has learned about the universe so far.

However, our reality is based on quantum mechanics, which is a unique set of rules that are applied on a microscopic level. When physicists say quantum realm, they are talking about the situations where these rules are more apparent.

Quantum physics is often considered the branch of physics that deals with the understanding of our universe at the microscopic level. However, this is also a misunderstanding of sorts. Quantum physics can be most accurately described as the foundation of all physics. It is a theory of knowledge itself; it allows physicists to ask questions that reach far beyond what they were able to ask before.

It allows us to see the true form of our reality and leads to some exciting advances in science. It can also give us insight into things we did not think were possible before, like quantum entanglement or particles being in two places at the same time.

Studying quantum physics allows us to understand how particles behave. We can learn about the way quantum particles communicate with each other over vast distances. Things like this happen at a microscopic level and fall into the quantum realm. However, even if they mostly occur on that microscopic, nanometer scale, they can also operate on a broader level.

Examples of quantum physics on a regular-sized scale are the double-slit experiment and electron tunneling. One other well-known example is the Schrödinger's cat thought experiment, which is a paradox that operates under the rules of the quantum realm.

So while yes, the quantum realm does exist, it probably is not what you imagined it to be. It is not some magical space that exists separate from our reality and where magical things can happen. It exists here, in our world, and it is a complex subject whose new mysteries are continually being discovered by physicists.

It deals with events that happen on an extremely small scale but can help us better understand the secrets of our entire universe. It is a fascinating subject that may be hard to understand, but is incredibly significant and may hold the key for a large number of future discoveries.

Originally posted here:

Does The Quantum Realm Really Exist? - World Atlas

What is quantum cognition? Physics theory could predict human behavior. – Livescience.com

The same fundamental platform that allows Schrödinger's cat to be both alive and dead, and also means two particles can "speak to each other" even across a galaxy's distance, could help to explain perhaps the most mysterious phenomenon: human behavior.

Quantum physics and human psychology may seem completely unrelated, but some scientists think the two fields overlap in interesting ways. Both disciplines attempt to predict how unruly systems might behave in the future. The difference is that one field aims to understand the fundamental nature of physical particles, while the other attempts to explain human nature along with its inherent fallacies.

"Cognitive scientists found that there are many 'irrational' human behaviors," Xiaochu Zhang, a biophysicist and neuroscientist at the University of Science and Technology of China in Hefei, told Live Science in an email. Classical theories of decision-making attempt to predict what choice a person will make given certain parameters, but fallible humans don't always behave as expected. Recent research suggests that these lapses in logic "can be well explained by quantum probability theory," Zhang said.

Zhang stands among the proponents of so-called quantum cognition. In a new study published Jan. 20 in the journal Nature Human Behaviour, he and his colleagues investigated how concepts borrowed from quantum mechanics can help psychologists better predict human decision-making. While recording what decisions people made on a well-known psychology task, the team also monitored the participants' brain activity. The scans highlighted specific brain regions that may be involved in quantum-like thought processes.

The study is "the first to support the idea of quantum cognition at the neural level," Zhang said.

Cool. Now what does that really mean?

Quantum mechanics describes the behavior of the tiny particles that make up all matter in the universe, namely atoms and their subatomic components. One central tenet of the theory suggests a great deal of uncertainty in this world of the very small, something not seen at larger scales. For instance, in the big world, one can know where a train is on its route and how fast it's traveling, and given this data, one could predict when that train should arrive at the next station.

Now, swap out the train for an electron, and your predictive power disappears: you can't know the exact location and momentum of a given electron, but you could calculate the probability that the particle may appear in a certain spot, traveling at a particular rate. In this way, you could gain a hazy idea of what the electron might be up to.
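To put a rough number on that trade-off, here is a small worked example using the Heisenberg relation (the product of the position and momentum uncertainties is at least hbar over two) for an electron confined to about a nanometer; the confinement length is an illustrative assumption, not a value from the study discussed here.

```python
# Worked example of the position-momentum uncertainty relation for an electron.
# The 1-nanometer confinement length is an illustrative assumption.
hbar = 1.054571817e-34   # reduced Planck constant, J*s
m_e = 9.1093837015e-31   # electron mass, kg

delta_x = 1e-9                   # position uncertainty: about one nanometer
delta_p = hbar / (2 * delta_x)   # minimum momentum uncertainty, kg*m/s
delta_v = delta_p / m_e          # corresponding velocity uncertainty, m/s

print(f"Minimum momentum uncertainty: {delta_p:.2e} kg*m/s")
print(f"Velocity uncertainty: {delta_v:.2e} m/s")  # roughly 6e4 m/s
```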

Just as uncertainty pervades the subatomic world, it also seeps into our decision-making process, whether we're debating which new series to binge-watch or casting our vote in a presidential election. Here's where quantum mechanics comes in. Unlike classical theories of decision-making, the quantum world makes room for a certain degree of uncertainty.

Classical psychology theories rest on the idea that people make decisions in order to maximize "rewards" and minimize "punishments"; in other words, to ensure their actions result in more positive outcomes than negative consequences. This logic, known as "reinforcement learning," falls in line with Pavlovian conditioning, wherein people learn to predict the consequences of their actions based on past experiences, according to a 2009 report in the Journal of Mathematical Psychology.

If truly constrained by this framework, humans would consistently weigh the objective values of two options before choosing between them. But in reality, people don't always work that way; their subjective feelings about a situation undermine their ability to make objective decisions.

Consider an example:

Imagine you're placing bets on whether a tossed coin will land on heads or tails. Heads gets you $200, tails costs you $100, and you can choose to toss the coin twice. When placed in this scenario, most people choose to take the bet twice regardless of whether the initial throw results in a win or a loss, according to a study published in 1992 in the journal Cognitive Psychology. Presumably, winners bet a second time because they stand to gain money no matter what, while losers bet in an attempt to recover their losses, and then some. However, if players aren't allowed to know the result of the first coin flip, they rarely make the second gamble.

When known, the first flip does not sway the choice that follows, but when unknown, it makes all the difference. This paradox does not fit within the framework of classical reinforcement learning, which predicts that the objective choice should always be the same. In contrast, quantum mechanics takes uncertainty into account and actually predicts this odd outcome.
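The classical expectation here is simple to spell out; a short sketch of the arithmetic, which is all the "objective" reinforcement-learning view needs:

```python
# Expected value of the second gamble under the classical, expected-value view.
# Heads wins $200, tails loses $100, each with probability 0.5.
p_heads = 0.5
ev_single_toss = p_heads * 200 + (1 - p_heads) * (-100)   # +$50

# Whether the first toss was a win (+200), a loss (-100), or simply unknown,
# the incremental value of taking the second bet is the same +$50, so a purely
# expectation-maximizing agent should always accept it.
for first_outcome in (200, -100, None):
    print(first_outcome, "->", ev_single_toss)
```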

"One could say that the 'quantum-based' model of decision-making refers essentially to the use of quantum probability in the area of cognition," Emmanuel Haven and Andrei Khrennikov, co-authors of the textbook "Quantum Social Science" (Cambridge University Press, 2013), told Live Science in an email.

Just as a particular electron might be here or there at a given moment, quantum mechanics assumes that the first coin toss resulted in both a win and a loss, simultaneously. (In other words, in the famous thought experiment, Schrödinger's cat is both alive and dead.) While caught in this ambiguous state, known as "superposition," an individual's final choice is unknown and unpredictable. Quantum mechanics also acknowledges that people's beliefs about the outcome of a given decision, whether it will be good or bad, often reflect what their final choice ends up being. In this way, people's beliefs interact, or become "entangled," with their eventual action.

Subatomic particles can likewise become entangled and influence each other's behavior even when separated by great distances. For instance, measuring the behavior of a particle located in Japan would alter the behavior of its entangled partner in the United States. In psychology, a similar analogy can be drawn between beliefs and behaviors. "It is precisely this interaction," or state of entanglement, "which influences the measurement outcome," Haven and Khrennikov said. The measurement outcome, in this case, refers to the final choice an individual makes. "This can be precisely formulated with the aid of quantum probability."

Scientists can mathematically model this entangled state of superposition, in which two particles affect each other even if they're separated by a large distance, as demonstrated in a 2007 report published by the Association for the Advancement of Artificial Intelligence. And remarkably, the final formula accurately predicts the paradoxical outcome of the coin toss paradigm. "The lapse in logic can be better explained by using the quantum-based approach," Haven and Khrennikov noted.

In their new study, Zhang and his colleagues pitted two quantum-based models of decision-making against 12 classical psychology models to see which best predicted human behavior during a psychological task. The experiment, known as the Iowa Gambling Task, is designed to evaluate people's ability to learn from mistakes and adjust their decision-making strategy over time.

In the task, participants draw from four decks of cards. Each card either earns the player money or costs them money, and the object of the game is to earn as much money as possible. The catch lies in how each deck of cards is stacked. Drawing from one deck may earn a player large sums of money in the short term, but it will cost them far more cash by the end of the game. Other decks deliver smaller sums of money in the short-term, but fewer penalties overall. Through game play, winners learn to mostly draw from the "slow and steady" decks, while losers draw from the decks that earn them quick cash and steep penalties.
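For concreteness, here is a toy simulation of that deck structure. The payoff numbers approximate a commonly published version of the task and are illustrative only; they are not the exact values used in Zhang's experiment.

```python
# Toy Iowa Gambling Task payoffs. Decks A/B pay more per draw but carry larger
# penalties (negative long-run value); decks C/D pay less but come out ahead.
# Numbers approximate a commonly published scheme and are illustrative only.
import random

DECKS = {
    "A": {"gain": 100, "loss": -250, "p_loss": 0.5},   # net about -25 per draw
    "B": {"gain": 100, "loss": -1250, "p_loss": 0.1},  # net about -25 per draw
    "C": {"gain": 50, "loss": -50, "p_loss": 0.5},     # net about +25 per draw
    "D": {"gain": 50, "loss": -250, "p_loss": 0.1},    # net about +25 per draw
}

def draw(deck_name):
    deck = DECKS[deck_name]
    penalty = deck["loss"] if random.random() < deck["p_loss"] else 0
    return deck["gain"] + penalty

def expected_value(deck_name, n=100_000):
    return sum(draw(deck_name) for _ in range(n)) / n

for name in DECKS:
    print(name, round(expected_value(name), 1))
```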

Historically, those with drug addictions or brain damage perform worse on the Iowa Gambling Task than healthy participants, which suggests that their condition somehow impairs decision-making abilities, as highlighted in a study published in 2014 in the journal Applied Neuropsychology: Child. This pattern held true in Zhang's experiment, which included about 60 healthy participants and 40 who were addicted to nicotine.

The two quantum models made similar predictions to the most accurate among the classical models, the authors noted. "Although the [quantum] models did not overwhelmingly outperform the [classical] ... one should be aware that the [quantum reinforcement learning] framework is still in its infancy and undoubtedly deserves additional studies," they added.

To bolster the value of their study, the team took brain scans of each participant as they completed the Iowa Gambling Task. In doing so, the authors attempted to peek at what was happening inside the brain as participants learned and adjusted their game-play strategy over time. Outputs generated by the quantum model predicted how this learning process would unfold, and thus, the authors theorized that hotspots of brain activity might somehow correlate with the models' predictions.

The scans did reveal a number of active brain areas in the healthy participants during game play, including activation of several large folds within the frontal lobe known to be involved in decision-making. In the smoking group, however, no hotspots of brain activity seemed tied to predictions made by the quantum model. As the model reflects participants' ability to learn from mistakes, the results may illustrate decision-making impairments in the smoking group, the authors noted.

However, "further research is warranted" to determine what these brain activity differences truly reflect in smokers and non-smokers, they added. "The coupling of the quantum-like models with neurophysiological processes in the brain ... is a very complex problem," Haven and Khrennikov said. "This study is of great importance as the first step towards its solution."

Models of classical reinforcement learning have shown "great success" in studies of emotion, psychiatric disorders, social behavior, free will and many other cognitive functions, Zhang said. "We hope that quantum reinforcement learning will also shed light on [these fields], providing unique insights."

In time, perhaps quantum mechanics will help explain pervasive flaws in human logic, as well as how that fallibility manifests at the level of individual neurons.

Originally published on Live Science.

View original post here:

What is quantum cognition? Physics theory could predict human behavior. - Livescience.com

New Centers Lead the Way towards a Quantum Future – Energy.gov

The world of quantum is the world of the very, very small. At sizes near those of atoms and smaller, the rules of physics start morphing into something unrecognizable, at least to us in the regular world. While quantum physics seems bizarre, it offers huge opportunities.

Quantum physics may hold the key to vast technological improvements in computing, sensing, and communication. Quantum computing may be able to solve problems in minutes that would take lifetimes on today's computers. Quantum sensors could act as extremely high-powered antennas for the military. Quantum communication systems could be nearly unhackable. But we don't have the knowledge or capacity to take advantage of these benefits yet.

The Department of Energy (DOE) recently announced that it will establish Quantum Information Science Centers to help lay the foundation for these technologies. As Congress put forth in the National Quantum Initiative Act, the DOE's Office of Science will make awards for at least two and up to five centers.

These centers will draw on both quantum physics and information theory to give us a soup-to-nuts understanding of quantum systems. Teams of researchers from universities, DOE national laboratories, and private companies will run them. Their expertise in quantum theory, technology development, and engineering will help each center undertake major, cross-cutting challenges. The centers' work will range from discovery research up to developing prototypes. They'll also address a number of different technical areas. Each center must tackle at least two of these subjects: quantum communication, quantum computing and emulation, quantum devices and sensors, materials and chemistry for quantum systems, and quantum foundries for synthesis, fabrication, and integration.

The impacts won't stop at the centers themselves. Each center will have a plan in place to transfer technologies to industry or other research partners. They'll also work to leverage DOE's existing facilities and collaborate with non-DOE projects.

As the nation's largest supporter of basic research in the physical sciences, the Office of Science is thrilled to head this initiative. Although quantum physics depends on the behavior of very small things, the Quantum Information Science Centers will be a very big deal.

The Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, please visit https://www.energy.gov/science.

Read the original:

New Centers Lead the Way towards a Quantum Future - Energy.gov

A Tiny Glass Bead Goes as Still as Nature Allows – WIRED

Inside a small metal box on a laboratory table in Vienna, physicist Markus Aspelmeyer and his team have engineered, perhaps, the quietest place on earth.

The area in question is a microscopic spot in the middle of the box. Here, levitating in midair (except there is no air, because the box is in a vacuum) is a tiny glass bead a thousand times smaller than a grain of sand. Aspelmeyer's apparatus uses lasers to render this bead literally motionless. It is as still as it could possibly be, as permitted by the laws of physics: it's reached what physicists call the bead's motional ground state. "The ground state is the limit where you cannot extract any more energy from an object," says Aspelmeyer, who works at the University of Vienna. They can maintain the bead's motionlessness for hours at a time.

This stillness is different from anything you've ever perceived: overlooking that lake in the mountains, sitting in a sound-proofed studio, or even just staring at your laptop as it rests on the table. As calm as that table seems, if you could zoom in on it, you would see its surface being attacked by air molecules that circulate via your ventilation system, says Aspelmeyer. Look hard enough and you'll see microscopic particles or tiny pieces of lint rolling around. In our day-to-day lives, stillness is an illusion. We're simply too large to notice the chaos.

Kahan Dare and Manuel Reisenbauer, physicists at the University of Vienna, adjust the apparatus where the levitated nanoparticle sits.

But this bead is truly still, regardless of whether you are judging it as a human or a dust mite. And at this level of stillness, our conventional wisdom about motion breaks down, as the bizarre rules of quantum mechanics kick in. For one thing, the bead becomes "delocalized," says Aspelmeyer. The bead spreads out. It no longer has a definite position, like a ripple in a pond, which stretches over an expanse of water rather than being at a particular location. Instead of maintaining a sharp boundary between bead and vacuum, the bead's outline becomes cloudy and diffuse.

Technically, although the bead is at the limit of its motionlessness, it still moves about a thousandth of its own diameter. Physicists have a cool name for it. "It's called the vacuum energy of the system," says Aspelmeyer. Put another way, nature does not allow any object to have completely zero motion. There must always be some quantum jiggle.

The bead's stillness comes with another caveat: Aspelmeyer's team has only forced the bead into its motional ground state along one dimension, not all three. But even achieving this level of stillness took them 10 years. One major challenge was simply getting the bead to stay levitated inside the laser beam, says physicist Uroš Delić of the University of Vienna. Delić has worked on the experiment since its nascence, first as an undergraduate student, then as a PhD student, and now as a postdoctoral researcher.

Read more here:

A Tiny Glass Bead Goes as Still as Nature Allows - WIRED

Scientists cooled a nanoparticle to the quantum limit – Science News

A tiny nanoparticle has been chilled to the max.

Physicists cooled a nanoparticle to the lowest temperature allowed by quantum mechanics. The particle's motion reached what's known as the ground state, or lowest possible energy level.

In a typical material, the amount that its atoms jostle around indicates its temperature. But in the case of the nanoparticle, scientists can define an effective temperature based on the motion of the entire nanoparticle, which is made up of about 100 million atoms. That temperature reached twelve-millionths of a kelvin, scientists report January 30 in Science.

Levitating it with a laser inside of a specially designed cavity, Markus Aspelmeyer of the University of Vienna and colleagues reduced the nanoparticle's motion to the ground state, a minimum level set by the Heisenberg uncertainty principle, which states that there's a limit to how well you can simultaneously know the position and momentum of an object.
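One way to connect that effective temperature to the ground state is through the mean number of motional quanta, given by the Bose-Einstein formula. The trap frequency assumed below (about 300 kilohertz, typical for this kind of optical trap) is an illustrative value, not a figure taken from the paper.

```python
# Mean phonon (motional quantum) number for a levitated particle at the reported
# effective temperature. The ~300 kHz trap frequency is an assumed, illustrative
# value for this kind of optical trap, not a number quoted in the article.
import math

hbar = 1.054571817e-34   # J*s
k_B = 1.380649e-23       # J/K

T = 12e-6                       # twelve-millionths of a kelvin
omega = 2 * math.pi * 300e3     # assumed trap frequency, rad/s

n_bar = 1.0 / (math.exp(hbar * omega / (k_B * T)) - 1.0)
print(f"Mean occupation number: {n_bar:.2f}")  # well below 1, i.e. near the ground state
```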

While quantum mechanics is unmistakable in tiny atoms and electrons, its effects are harder to observe on larger scales. To better understand the theory, physicists have previously isolated its effects in other solid objects, such as vibrating membranes or beams (SN: 4/25/18). But nanoparticles have the advantage that they can be levitated and precisely controlled with lasers.

Eventually, Aspelmeyer and colleagues aim to use cooled nanoparticles to study how gravity behaves for quantum objects, a poorly understood realm of physics. "This is the really long-term dream," he says.

The rest is here:

Scientists cooled a nanoparticle to the quantum limit - Science News

Have We Solved the Black Hole Information Paradox? – Scientific American

Black holes, some of the most peculiar objects in the universe, pose a paradox for physicists. Two of our best theories give us two different, and seemingly contradictory, pictures of how these objects work. Many scientists, including myself, have been trying to reconcile these visions, not just to understand black holes themselves, but also to answer deeper questions, such as: What is spacetime? While I and other researchers made some partial progress over the years, the problem persisted. In the past year or so, however, I have developed a framework that I believe elegantly addresses the problem and gives us a glimpse of the mystery of how spacetime emerges at the most fundamental level.

Here is the problem: From the perspective of general relativity, black holes arise if the density of matter becomes too large and gravity collapses the material all the way toward its central point. When this happens, gravity is so strong in this region that nothing, not even light, can escape. The inside of the black hole, therefore, cannot be seen from the outside, even in principle, and the boundary, called the event horizon, acts as a one-way membrane: nothing can go from the interior to the exterior, but there is no problem in falling through it from the exterior to the interior.

But when we consider the effect of quantum mechanics, the theory governing elementary particles, we get another picture. In 1974, Stephen Hawking presented a calculation that made him famous. He discovered that, if we include quantum mechanical effects, a black hole in fact radiates, although very slowly. As a result, it gradually loses its mass and eventually evaporates. This conclusion has been checked by multiple methods now, and its basic validity is beyond doubt. The odd thing, however, is that in Hawking's calculation, the radiation emitted from a black hole does not depend on how the object was created. This means that two black holes created from different initial states can end up with the identical final radiation.

Is this a problem? Yes, it is. Modern physics is built on the assumption that if we have perfect knowledge about a system, then we can predict its future and infer its past by solving the equation of motion. Hawking's result would mean that this basic tenet is incorrect. Many of us thought that this problem was solved in 1997 when Juan Maldacena discovered a new way to view the situation, which seemed to prove no information was lost.

Case closed? Not quite. In 2012, Ahmed Almheiri and collaborators at the University of California, Santa Barbara, presented in their influential paper a strong argument that if the information is preserved in the Hawking emission process, then it is inconsistent with the smoothness of the horizon, the notion that an object can pass through the event horizon without being affected. Given that the option of information loss is out of the question, they argued that the black hole horizon is in fact not a one-way membrane but something like an unbreakable wall, which they called a firewall.

This confused theorists tremendously. As much as they disliked information loss, they abhorred firewalls too. Among other things, the firewall idea implies that Einstein's general relativity is completely wrong, at least at the horizon of a black hole. In fact, this is utterly counterintuitive. For a large black hole, gravity at the horizon is actually very weak because it lies far away from the central point, where all the matter is located. A region near the horizon thus looks pretty much like empty space, and yet the firewall argument says that space must abruptly end at the location of the horizon.

The main thrust of my new work is to realize that there are multiple layers of descriptions of a black hole, and the preservation of information and the smoothness of the horizon refer to theories at different layers. At one level, we can describe a black hole as viewed from a distance: the black hole is formed by collapse of matter, which eventually evaporates leaving the quanta of Hawking radiation in space. From this perspective, Maldacena's insight holds and there is no information loss in the process. That is because, in this picture, an object falling toward the black hole never enters the horizon, not because of a firewall but because of time delay between the clock of the falling object and that of a distant observer. The object seems to be slowly absorbed into the horizon, and its information is later sent back to space in the form of subtle correlations between particles of Hawking radiation.

On the other hand, the picture of the black hole interior emerges when looking at the system from the perspective of someone falling into it. Here we must ignore the fine details of the system that an infalling observer could not see, because he or she has only finite time before hitting the singular point at the center of the black hole. This limits the amount of information they can access, even in principle. The world the infalling observer perceives, therefore, is the coarse-grained one. And in this picture, information need not be preserved because we already threw away some information even to arrive at this perspective. This is the way the existence of interior spacetime can be compatible with the preservation of information: they are the properties of the descriptions of nature at different levels!

To understand this concept better, the following analogy might help. Imagine water in a tank and consider a theory describing waves on the surface. At a fundamental level, water consists of a bunch of water molecules, which move, vibrate and collide with each other. With perfect knowledge of their properties, we can describe them deterministically without information loss. This description would be complete, and there would be no need to even introduce the concept of waves. On the other hand, we could focus on the waves by overlooking molecular level details and describing the water as a liquid. The atomic-level information, however, is not preserved in this description. For example, a wave can simply disappear, although the truth is that the coherent motion of water molecules that created the wave was transformed into a more random motion of each molecule without anything disappearing.

This framework tells us that the picture of spacetime offered by general relativity is not as fundamental as we might have thought; it is merely a picture that emerges at a higher level in the hierarchical descriptions of nature, at least concerning the interior of a black hole. Similar ideas have been discussed earlier in varying forms, but the new framework allows us to explicitly identify the relevant microscopic degrees of freedom (in other words, nature's fundamental building blocks) participating in the emergence of spacetime, which surprisingly involves elements that we normally think to be located far away from the region of interest.

This new way of thinking about the paradox can also be applied to a recent setup devised by Geoff Penington, Stephen H. Shenker, Douglas Stanford and Zhenbin Yang in which Maldacena's scenario is applied more rigorously but in simplified systems. This allows us to identify which features of a realistic black hole are or are not captured by such analyses.

Beginning with the era of Descartes and Galilei, revolutions in physics have often been associated with new understandings of the concept of spacetime, and it seems that we are now in the middle of another such revolution. I strongly suspect that we may soon witness the emergence of a new understanding of nature at a qualitatively different and deeper level.

Read more:

Have We Solved the Black Hole Information Paradox? - Scientific American

What Is Quantum Computing and How Does it Work? – Built In

Accustomed to imagining worst-case scenarios, many cryptography experts are more concerned than usual these days: one of the most widely used schemes for safely transmitting data is poised to become obsolete once quantum computing reaches a sufficiently advanced state.

The cryptosystem known as RSA provides the safety structure for a host of privacy and communication protocols, from email to internet retail transactions. Current standards rely on the fact that no one has the computing power to test every possible way to de-scramble your data once encrypted, but a mature quantum computer could try every option within a matter of hours.
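A toy example makes the dependence on factoring concrete. The tiny textbook-style key below is for illustration only (real RSA moduli run to thousands of bits); the point is that anyone who can factor the public modulus can immediately recompute the private key.

```python
# Toy RSA illustration: security rests entirely on the difficulty of factoring n.
# These tiny numbers are for illustration only; real keys use moduli of 2048+ bits.
p, q = 61, 53
n = p * q                      # public modulus
phi = (p - 1) * (q - 1)
e = 17                         # public exponent
d = pow(e, -1, phi)            # private exponent (modular inverse; Python 3.8+)

message = 65
ciphertext = pow(message, e, n)
print(pow(ciphertext, d, n) == message)    # True: legitimate decryption

# An attacker who factors n recovers p and q, and with them the private key:
recovered_d = pow(e, -1, (p - 1) * (q - 1))
print(recovered_d == d)                    # True: factoring breaks the scheme
```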

It should be stressed that quantum computers haven't yet hit that level of maturity, and won't for some time, but when a large, stable device is built (or if it's built, as an increasingly diminishing minority argue), its unprecedented ability to factor large numbers would essentially leave the RSA cryptosystem in tatters. Thankfully, the technology is still a ways away and the experts are on it.

"Don't panic." That's what Mike Brown, CTO and co-founder of quantum-focused cryptography company ISARA Corporation, advises anxious prospective clients. The threat is far from imminent. "What we hear from the academic community and from companies like IBM and Microsoft is that a 2026-to-2030 timeframe is what we typically use from a planning perspective in terms of getting systems ready," he said.

Cryptographers from ISARA are among several contingents currently taking part in the Post-Quantum Cryptography Standardization project, a contest of quantum-resistant encryption schemes. The aim is to standardize algorithms that can resist attacks levied by large-scale quantum computers. The competition was launched in 2016 by the National Institute of Standards and Technology (NIST), a federal agency that helps establish tech and science guidelines, and is now gearing up for its third round.

Indeed, the level of complexity and stability required of a quantum computer to launch the much-discussed RSA attack is very extreme, according to John Donohue, scientific outreach manager at the University of Waterloo's Institute for Quantum Computing. Even granting that timelines in quantum computing, particularly in terms of scalability, are points of contention, the community is "pretty comfortable saying that's not something that's going to happen in the next five to 10 years," he said.

When Google announced that it had achieved quantum supremacy (or that it used a quantum computer to run, in minutes, an operation that would take thousands of years to complete on a classical supercomputer), that machine operated on 54 qubits, the computational bedrocks of quantum computing. While IBM's 53-qubit Q system operates at a similar level, many current prototypes operate on as few as 20 or even five qubits.

But how many qubits would be needed to crack RSA? "Probably on the scale of millions of error-tolerant qubits," Donohue told Built In.

Scott Aaronson, a computer scientist at the University of Texas at Austin, underscored the same last year in his popular blog after presidential candidate Andrew Yang tweeted that "no code is uncrackable" in the wake of Google's proof-of-concept milestone.

That's the good news. The bad news is that, while cryptography experts gain more time to keep our data secure from quantum computers, the technology's numerous potential upsides, ranging from drug discovery to materials science to financial modeling, are also largely forestalled. And that question of error tolerance continues to stand as quantum computing's central, Herculean challenge. But before we wrestle with that, let's get a better elemental sense of the technology.

Quantum computers process information in a fundamentally different way than classical computers. Traditional computers operate on binary bits: information processed in the form of ones or zeroes. But quantum computers transmit information via quantum bits, or qubits, which can exist either as one or zero or both simultaneously. That's a simplification, and we'll explore some nuances below, but that capacity, known as superposition, lies at the heart of quantum's potential for exponentially greater computational power.

Such fundamental complexity both cries out for and resists succinct laymanization. When the New York Times asked 10 experts to explain quantum computing in the length of a tweet, some responses raised more questions than they answered:

Microsoft researcher David Reilly:

A quantum machine is a kind of analog calculator that computes by encoding information in the ephemeral waves that comprise light and matter at the nanoscale.

D-Wave Systems executive vice president Alan Baratz:

If were honest, everything we currently know about quantum mechanics cant fully describe how a quantum computer works.

Quantum computing also cries out for a digestible metaphor. Quantum physicist Shohini Ghose, of Wilfrid Laurier University, has likened the difference between quantum and classical computing to light bulbs and candles: "The light bulb isn't just a better candle; it's something completely different."

Rebecca Krauthamer, CEO of quantum computing consultancy Quantum Thought, compares quantum computing to a crossroads that allows a traveler to take both paths. "If you're trying to solve a maze, you'd come to your first gate, and you can go either right or left," she said. "We have to choose one, but a quantum computer doesn't have to choose one. It can go right and left at the same time."

"It can, in a sense, look at these different options simultaneously and then instantly find the most optimal path," she said. "That's really powerful."

The most commonly used example of quantum superposition is Schrödinger's cat.

Despite its ubiquity, many in the QC field aren't so taken with Schrödinger's cat. "The more interesting fact about superposition, rather than the two-things-at-once point of focus, is the ability to look at quantum states in multiple ways, and ask it different questions," said Donohue. That is, rather than having to perform tasks sequentially, like a traditional computer, quantum computers can run vast numbers of parallel computations.

Part of Donohue's professional charge is clarifying quantum's nuances, so it's worth quoting him here at length:

In superposition I can have state A and state B. I can ask my quantum state, "Are you A or B?" And it will tell me, "I'm A" or "I'm B." But I might have a superposition of A + B, in which case, when I ask it, "Are you A or B?" it'll tell me A or B randomly.

But the key of superposition is that I can also ask the question, "Are you in the superposition state of A + B?" And then in that case, they'll tell me, "Yes, I am the superposition state A + B."

But there's always going to be an opposite superposition. So if it's A + B, the opposite superposition is A - B.

That's about as simplified as we can get before trotting out equations. But the top-line takeaway is that superposition is what lets a quantum computer try all paths at once.
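Donohue's A-or-B versus A-plus-B distinction can be written out directly with two-component state vectors. Here is a small NumPy sketch of that standard single-qubit math; it is generic textbook material, not code from any of the researchers quoted here.

```python
# Measurement probabilities for a single qubit in the computational (A/B) basis
# versus the superposition (A+B / A-B) basis. Standard single-qubit linear algebra.
import numpy as np

A = np.array([1.0, 0.0])                 # "state A"
B = np.array([0.0, 1.0])                 # "state B"
plus = (A + B) / np.sqrt(2)              # superposition A + B
minus = (A - B) / np.sqrt(2)             # opposite superposition A - B

def probability(state, outcome):
    """Born rule: probability of finding `state` in `outcome`."""
    return float(np.abs(np.dot(outcome, state)) ** 2)

# Asked "are you A or B?", the superposition answers randomly, 50/50:
print(probability(plus, A), probability(plus, B))        # 0.5 0.5
# Asked "are you A+B or A-B?", it answers deterministically:
print(probability(plus, plus), probability(plus, minus)) # 1.0 0.0 (up to rounding)
```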

That's not to say that such unprecedented computational heft will displace or render moot classical computers. "One thing that we can really agree on in the community is that it won't solve every type of problem that we run into," said Krauthamer.

But quantum computing is particularly well suited for certain kinds of challenges. Those include probability problems, optimization (what is, say, the best possible travel route?) and the incredible challenge of molecular simulation for use cases like drug development and materials discovery.

The cocktail of hype and complexity has a way of fuzzing outsiders' conception of quantum computing, which makes this point worth underlining: quantum computers exist, and they are being used right now.

They are not, however, presently solving climate change, turbocharging financial forecasting probabilities or performing other similarly lofty tasks that get bandied about in reference to quantum computing's potential. QC may have commercial applications related to those challenges, which we'll explore further below, but that's well down the road.

Today, we're still in what's known as the NISQ era: Noisy, Intermediate-Scale Quantum. In a nutshell, quantum noise makes such computers incredibly difficult to stabilize. As such, NISQ computers can't be trusted to make decisions of major commercial consequence, which means they're currently used primarily for research and education.

"The technology just isn't quite there yet to provide a computational advantage over what could be done with other methods of computation at the moment," said Donohue. "Most [commercial] interest is from a long-term perspective. [Companies] are getting used to the technology so that when it does catch up (and that timeline is a subject of fierce debate) they're ready for it."

Also, it's fun to sit next to the cool kids. "Let's be frank. It's good PR for them, too," said Donohue.

But NISQ computers' R&D practicality is demonstrable, if decidedly small-scale. Donohue cites the molecular modeling of lithium hydride. That's a small enough molecule that it can also be simulated using a supercomputer, but the quantum simulation provides an important opportunity to check answers against a classical-computer simulation. NISQs have also delivered some results for problems in high-energy particle physics, Donohue noted.

One breakthrough came in 2017, when researchers at IBM modeled beryllium hydride, the largest molecule simulated on a quantum computer at that time. Another key step arrived in 2019, when IonQ researchers used quantum computing to go bigger still, by simulating a water molecule.

These are generally still small problems that can be checked using classical simulation methods. "But it's building toward things that will be difficult to check without actually building a large particle physics experiment, which can get very expensive," Donohue said.

And curious minds can get their hands dirty right now. Users can operate small-scale quantum processors via the cloud through IBM's online Q Experience and its open-source software Qiskit. Late last year, Microsoft and Amazon both announced similar platforms, dubbed Azure Quantum and Braket. "That's one of the cool things about quantum computing today," said Krauthamer. "We can all get on and play with it."
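
For readers who want to try it, here is a small sketch of the kind of circuit you can run on those cloud platforms. It uses the Qiskit API roughly as it looked around the time of this article (Aer and execute); newer Qiskit releases have reorganized these imports, so treat the exact calls as an assumption rather than the definitive usage.

```python
from qiskit import QuantumCircuit, Aer, execute

# Build a two-qubit Bell-state circuit: a Hadamard gate creates a
# superposition, and a CNOT entangles the two qubits.
qc = QuantumCircuit(2, 2)
qc.h(0)
qc.cx(0, 1)
qc.measure([0, 1], [0, 1])

# Run it on the local simulator; swapping in an IBM Q cloud backend
# would send the same circuit to real hardware.
backend = Aer.get_backend("qasm_simulator")
counts = execute(qc, backend, shots=1024).result().get_counts()
print(counts)  # roughly half '00' and half '11', never '01' or '10'
```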

Quantum computing may still be in its fussy, uncooperative stage, but that hasn't stopped commercial interests from diving in.

IBM announced at the recent Consumer Electronics Show that its so-called Q Network had expanded to more than 100 companies and organizations. Partners now range from Delta Air Lines to health insurer Anthem to Daimler AG, which owns Mercedes-Benz.

Some of those partnerships hinge on quantum computing's aforementioned promise in terms of molecular simulation. Daimler, for instance, is hoping the technology will one day yield a way to produce better batteries for electric vehicles.

Elsewhere, partnerships between quantum computing startups and leading pharmaceutical companies (like those established between 1QBit and Biogen, and between ProteinQure and AstraZeneca) point to quantum molecular modeling's drug-discovery promise, distant though it remains. (Today, drug development is done through expensive, relatively low-yield trial and error.)

Researchers would need millions of qubits to compute the chemical properties of a novel substance, noted theoretical physicist Sabine Hossenfelder in the Guardian last year. But the conceptual underpinning, at least, is there. "A quantum computer knows quantum mechanics already, so I can essentially program in how another quantum system would work and use that to echo the other one," explained Donohue.
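
A back-of-the-envelope calculation shows why simulating quantum systems is such a natural fit. Classically, storing the full state of n qubits (or n interacting quantum particles) takes 2^n complex amplitudes, so the cost doubles with every particle added; the numbers below are illustrative arithmetic only.

```python
# Memory needed just to store a full n-qubit state vector on a classical
# machine, at 16 bytes per complex amplitude. Illustrative arithmetic only.
for n in (20, 40, 60):
    amplitudes = 2.0 ** n
    print(f"{n} qubits: {amplitudes:.1e} amplitudes, ~{amplitudes * 16 / 1e9:.1e} GB")
```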

There's also hope that large-scale quantum computers will help accelerate AI, and vice versa, although experts disagree on this point. "The reason there's controversy is, things have to be redesigned in a quantum world," said Krauthamer, who considers herself an AI-quantum optimist. "We can't just translate algorithms from regular computers to quantum computers, because the rules are completely different at the most elemental level."

Some believe quantum computers can help combat climate change by improving carbon capture. Jeremy O'Brien, CEO of Palo Alto-based PsiQuantum, wrote last year that quantum simulation of larger molecules, if achieved, could help build a catalyst for scrubbing carbon dioxide directly from the atmosphere.

Long-term applications tend to dominate headlines, but they also lead us back to quantum computing's defining hurdle, and the reason coverage remains littered with terms like "potential" and "promise": error correction.

Qubits, it turns out, are higher maintenance than even the most meltdown-prone rock star. Any number of simple actions or variables can send error-prone qubits falling into decoherence, or the loss of a quantum state (mainly that all-important superposition). Things that can cause a quantum computer to crash include measuring qubits and running operations; in other words, using it. Even small vibrations and temperature shifts will cause qubits to decohere, too.

That's why quantum computers are kept isolated, and the ones that run on superconducting circuits (the most prominent method, favored by Google and IBM) have to be kept at near-absolute zero (a cool -460 degrees Fahrenheit).
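
To make "loss of a quantum state" slightly more concrete, here is a toy numpy model (not any vendor's noise model) of pure dephasing: the off-diagonal terms of a qubit's density matrix, which encode the superposition, decay away on a hypothetical coherence timescale T2.

```python
import numpy as np

T2 = 100.0                      # assumed coherence time, in microseconds
rho = 0.5 * np.array([[1, 1],
                      [1, 1]], dtype=complex)   # qubit in (|0> + |1>)/sqrt(2)

for t in (0, 50, 100, 200):     # elapsed time in microseconds
    coherence = rho[0, 1] * np.exp(-t / T2)     # off-diagonal term decays
    print(f"t = {t:>3} us: coherence = {coherence.real:.3f}")
# As the coherence term shrinks toward zero, the qubit behaves less like a
# superposition and more like a classical coin that is simply 0 or 1.
```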

The challenge is twofold, according to Jonathan Carter, a scientist at Berkeley Quantum. First, individual physical qubits need to have better fidelity. That would conceivably happen through better engineering, better circuit layouts and better combinations of components. Second, the physical qubits have to be arranged to form logical qubits.

Estimates of how many physical qubits are required to form one fault-tolerant logical qubit range from hundreds to tens of thousands. "I think it's safe to say that none of the technology we have at the moment could scale out to those levels," Carter said.
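
The arithmetic behind those estimates can be sketched with a much simpler, purely classical stand-in for quantum error correction: a repetition code with majority voting. Real fault-tolerant schemes such as the surface code are far more involved, but the sketch below, with an assumed per-qubit error rate, shows why redundancy drives the overhead.

```python
from math import comb

def logical_error_rate(p, n):
    """Probability that a majority of n redundant copies are corrupted,
    given an independent error probability p per copy."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n // 2 + 1, n + 1))

p = 0.01  # assumed physical error rate
for n in (1, 3, 7, 15, 31):
    print(f"{n:>2} copies -> logical error rate {logical_error_rate(p, n):.2e}")
# Each jump in redundancy buys a sharp drop in the effective error rate,
# which is why one dependable logical qubit can consume many physical ones.
```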

From there, researchers would also have to build ever-more complex systems to handle the increase in qubit fidelity and numbers. So how long will it take until hardware-makers actually achieve the necessary error correction to make quantum computers commercially viable?

"Some of these other barriers make it hard to say yes to a five- or 10-year timeline," Carter said.

Donohue invokes and rejects the same figure. "Even the optimist wouldn't say it's going to happen in the next five to 10 years," he said. At the same time, useful results on some small optimization problems, specifically in terms of random number generation, could come very soon.

"We've already seen some useful things in that regard," he said.

For people like Michael Biercuk, founder of quantum-engineering software company Q-CTRL, the only technical commercial milestone that matters now is quantum advantage, or, as he uses the term, the point when a quantum computer provides some time or cost advantage over a classical computer. Count him among the optimists: he foresees a five-to-eight-year time scale to achieve such a goal.

Another open question: Which method of quantum computing will become standard? While superconducting has borne the most fruit so far, researchers are exploring alternative methods that involve trapped ions, quantum annealing or so-called topological qubits. In Donohue's view, it's not necessarily a question of which technology is better so much as one of finding the best approach for different applications. For instance, superconducting chips naturally dovetail with the magnetic-field technology that underpins neuroimaging.

The challenges that quantum computing faces, however, aren't strictly hardware-related. "The magic of quantum computing resides in algorithmic advances, not speed," Greg Kuperberg, a mathematician at the University of California at Davis, is quick to underscore.

"If you come up with a new algorithm, for a question that it fits, things can be exponentially faster," he said, using "exponential" literally, not metaphorically. (There are currently 63 algorithms listed and 420 papers cited at the Quantum Algorithm Zoo, an online catalog of quantum algorithms compiled by Microsoft quantum researcher Stephen Jordan.)
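
A classic, self-contained example of that literal exponential gap is the Deutsch-Jozsa problem: deciding whether a hidden function is constant or balanced takes a single quantum query, but up to 2^(n-1) + 1 queries for a deterministic classical algorithm. The numpy sketch below simulates the quantum algorithm directly; no quantum hardware or special library is involved.

```python
import numpy as np

n = 3
H1 = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def hadamard_n(n):
    """Tensor n single-qubit Hadamards into one 2**n x 2**n matrix."""
    M = np.array([[1.0]])
    for _ in range(n):
        M = np.kron(M, H1)
    return M

def deutsch_jozsa(f_values):
    """One oracle query decides whether f is constant or balanced."""
    oracle = np.diag([(-1.0) ** v for v in f_values])  # |x> -> (-1)^f(x) |x>
    Hn = hadamard_n(n)
    state = np.zeros(2 ** n)
    state[0] = 1.0                     # start in |00...0>
    state = Hn @ (oracle @ (Hn @ state))
    return abs(state[0]) ** 2          # 1.0 if constant, 0.0 if balanced

constant_f = [0] * 2 ** n
balanced_f = [0, 1] * 2 ** (n - 1)     # half zeros, half ones
print(deutsch_jozsa(constant_f))       # -> 1.0
print(deutsch_jozsa(balanced_f))       # -> 0.0 (within floating-point error)
```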

Another roadblock, according to Krauthamer, is a general lack of expertise. "There's just not enough people working at the software level or at the algorithmic level in the field," she said. Tech entrepreneur Jack Hidary's team set out to count the number of people working in quantum computing and found only about 800 to 850 people, according to Krauthamer. "That's a bigger problem to focus on, even more than the hardware," she said. "Because the people will bring that innovation."

While the community underscores the importance of outreach, the term "quantum supremacy" has itself come under fire. "In our view, supremacy has overtones of violence, neocolonialism and racism through its association with white supremacy," 13 researchers wrote in Nature late last year. The letter has kickstarted an ongoing conversation among researchers and academics.

But the field's attempt to attract and expand also comes at a time of uncertainty around broader information-sharing.

Quantum computing research is sometimes framed in the same adversarial terms as conversations about trade and other emerging tech; that is, U.S. versus China. An oft-cited statistic from patent analytics consultancy Patinformatics states that, in 2018, China filed 492 patents related to quantum technology, compared to just 248 filed in the United States. That same year, the think tank Center for a New American Security published a paper that warned, "China is positioning itself as a powerhouse in quantum science." By the end of 2018, the U.S. had passed and signed into law the National Quantum Initiative Act. Many in the field believe legislators were compelled by China's perceived growing advantage.

The initiative has spurred domestic research (the Department of Energy recently announced up to $625 million in funding to establish up to five quantum information research centers), but the geopolitical tensions give some in the quantum computing community pause, namely for fear of collaboration-chilling regulation. "As quantum technology has become prominent in the media, among other places, there has been a desire suddenly among governments to clamp down," said Biercuk, who has warned of poorly crafted and nationalistic export controls in the past.

"What they often don't understand is that quantum technology, and quantum information in particular, really are deep research activities where open transfer of scientific knowledge is essential," he added.

The National Science Foundation, one of the government agencies given additional funding and directives under the act, generally has a positive track record in terms of avoiding draconian security controls, Kuperberg said. Even so, the antagonistic framing tends to obscure the on-the-ground facts. "The truth behind the scenes is that, yes, China would like to be doing good research in quantum computing, but a lot of what they're doing is just scrambling for any kind of output," he said.

Indeed, the majority of the aforementioned Chinese patents cover quantum technology broadly, but not quantum computing technology, which is where the real promise lies.

The Department of Energy has an internal list of sensitive technologies that it could potentially restrict DOE researchers from sharing with counterparts in China, Russia, Iran and North Korea. It has not yet implemented that curtailment, however, DOE Office of Science director Chris Fall told the House Committee on Science, Space and Technology, and clarified to Science, in January.

Along with such multi-agency-focused government spending, there's been a tsunami of venture capital directed toward commercial quantum-computing interests in recent years. A Nature analysis found that, in 2017 and 2018, private funding in the industry hit at least $450 million.

Still, funding concerns linger in some corners. Even as Google's quantum supremacy proof of concept has helped heighten excitement among enterprise investors, Biercuk has also flagged the beginnings of a contraction in investment in the sector.

Even as exceptional cases dominate headlines (he points to PsiQuantum's recent $230 million venture windfall), there are lesser-reported signs of struggle. "I know of probably four or five smaller shops that started and closed within about 24 months; others were absorbed by larger organizations because they struggled to raise," he said.

At the same time, signs of at least moderate investor agitation and internal turmoil have emerged. The Wall Street Journal reported in January that much-buzzed quantum computing startup Rigetti Computing saw its CTO and COO, among other staff, depart amid concerns that the company's tech wouldn't be commercially viable in a reasonable time frame.

Investor expectations had become inflated in some instances, according to experts. "Some very good teams have faced more investor skepticism than I think has been justified ... This is not six months to mobile application development," Biercuk said.

In Kuperberg's view, part of the problem is that venture capital and quantum computing operate on completely different timelines. "Putting venture capital into this in the hope that some profitable thing would arise quickly, that doesn't seem very natural to me in the first place," he said, adding the caveat that he considers the majority of QC money prestige investment rather than strictly ROI-focused.

But some startups may have had a hand in driving financiers' over-optimism. "I won't name names, but there definitely were some people giving investors outsize expectations, especially when people started coming up with some pieces of hardware, saying that advantages were right around the corner," said Donohue. "That very much rubbed the academic community the wrong way."

Scott Aaronson recently called out two prominent startups for what he described as a sort of calculated equivocation. He wrote of a pattern in which a party will speak of a quantum algorithm's promise "without asking whether there are any indications that your approach will ever be able to exploit interference of amplitudes to outperform the best classical algorithm."

And, mea culpa, some blame for the hype surely lies with tech media. "Trying to crack an area for a lay audience means you inevitably sacrifice some scientific precision," said Biercuk. (Thanks for understanding.)

It's all led to a willingness to serve up a glass of cold water now and again. As Juani Bermejo-Vega, a physicist and researcher at the University of Granada in Spain, recently told Wired, the machine on which Google ran its milestone proof of concept is mostly still a useless quantum computer for practical purposes.

Bermejo-Vega's quote came in a story about the emergence of a Twitter account called Quantum Bullshit Detector, which decrees, @artdecider-like, "bullshit" or "not bullshit" on various quantum claims via quote tweet. The fact that leading quantum researchers are among the account's 9,000-plus followers would seem to indicate that some weariness exists among the ranks.

But even with the various challenges, cautious optimism seems to characterize much of the industry. "For good and ill, I'm vocal about maintaining scientific and technical integrity while also being a true optimist about the field and sharing the excitement that I have and to excite others about what's coming," Biercuk said.

This year could prove to be formative in the quest to use quantum computers to solve real-world problems, said Krauthamer. "Whenever I talk to people about quantum computing, without fail, they come away really excited. Even the biggest skeptics who say, 'Oh no, they're not real. It's not going to happen for a long time.'"

Read the original post:

What Is Quantum Computing and How Does it Work? - Built In

Why physicists are determined to prove Galileo and Einstein wrong – Livescience.com

In the 17th century, famed astronomer and physicist Galileo Galilei is said to have climbed to the top of the Tower of Pisa and dropped two different-sized cannonballs. He was trying to demonstrate his theory, which Albert Einstein later updated and added to his theory of relativity, that objects fall at the same rate regardless of their size.

Now, after spending two years dropping two objects of different mass into a free fall in a satellite, a group of scientists has concluded that Galileo and Einstein were right: The objects fell at a rate that was within two-trillionths of a percent of each other, according to a new study.

This effect has been confirmed time and time again, as has Einstein's theory of relativity, yet scientists still aren't convinced that there isn't some kind of exception somewhere. "Scientists have always had a difficult time actually accepting that nature should behave that way," said senior author Peter Wolf, research director at the French National Center for Scientific Research's Paris Observatory.

That's because there are still inconsistencies in scientists' understanding of the universe.

"Quantum mechanics and general relativity, which are the two basic theories all of physics is built on today ...are still not unified," Wolf told Live Science. What's more, although scientific theory says the universe is made up mostly of dark matter and dark energy, experiments have failed to detect these mysterious substances.

"So, if we live in a world where there's dark matter around that we can't see, that might have an influence on the motion of [objects]," Wolf said. That influence would be "a very tiny one," but it would be there nonetheless. So, if scientists see test objects fall at different rates, that "might be an indication that we're actually looking at the effect of dark matter," he added.

Wolf and an international group of researchers, including scientists from France's National Center for Space Studies and the European Space Agency, set out to test Einstein and Galileo's foundational idea that no matter where you do an experiment, no matter how you orient it and what velocity you're moving at through space, the objects will fall at the same rate.

The researchers put two cylindrical objects, one made of titanium and the other of platinum, one inside the other, and loaded them onto a satellite. The orbiting satellite was naturally "falling" because there were no forces acting on it, Wolf said. They suspended the cylinders within an electromagnetic field and dropped the objects for 100 to 200 hours at a time.

From the forces the researchers needed to apply to keep the cylinders in place inside the satellite, the team deduced how the cylinders fell and the rate at which they fell, Wolf said.

And, sure enough, the team found that the two objects fell at almost exactly the same rate, within two-trillionths of a percent of each other. That suggested Galileo was correct. What's more, they dropped the objects at different times during the two-year experiment and got the same result, suggesting Einstein's theory of relativity was also correct.
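
To unpack "within two-trillionths of a percent": the standard figure of merit for such tests is the Eötvös parameter, the normalized difference between the two measured accelerations, and two-trillionths of a percent corresponds to roughly 2 x 10^-14. The accelerations below are made-up placeholders used only to show the arithmetic.

```python
# Eotvos parameter: eta = 2 * |a1 - a2| / (a1 + a2). Placeholder values only.
a_titanium = 7.9                       # m/s^2 (hypothetical orbital value)
a_platinum = a_titanium * (1 + 2e-14)  # differs by two-trillionths of a percent

eta = 2 * abs(a_titanium - a_platinum) / (a_titanium + a_platinum)
print(f"eta = {eta:.1e}")              # ~2e-14
```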

Their test was an order of magnitude more sensitive than previous tests. Even so, the researchers have published only 10% of the data from the experiment, and they hope to do further analysis of the rest.

Not satisfied with this mind-boggling level of precision, scientists have put together several new proposals to do similar experiments with two orders of magnitude greater sensitivity, Wolf said. Also, some physicists want to conduct similar experiments at the tiniest scale, with individual atoms of different types, such as rubidium and potassium, he added.

The findings were published Dec. 2 in the journal Physical Review Letters.

Originally published on Live Science.

Read more:

Why physicists are determined to prove Galileo and Einstein wrong - Livescience.com

Fibre on steroids – Wits student uses quantum physics for massive improvements – MyBroadband

A research team at Wits University has discovered a way to improve data transmission across fibre networks.

The team comprises a PhD student at the university as well as several colleagues from Wits and Huazhong University of Science and Technology in Wuhan, China.

This research uses quantum physics to improve data security across fibre networks without the need to replace legacy fibre infrastructure.

"Our team showed that multiple patterns of light are accessible through conventional optical fibre that can only support a single pattern," said Wits PhD student Isaac Nape.

"We achieved this quantum trick by engineering the entanglement of two photons. We sent the polarised photon down the fibre line and accessed many other patterns with the other photon."

Entanglement refers to particles interacting in such a way that the quantum state of each particle cannot be described without reference to the state of the others, even if the particles are separated by large distances.

In this scenario, the researchers manipulated the qualities of the photon on the inside of the fibre line by changing the qualities of its entangled counterpart in free space.

"In essence, the research introduces the concept of communicating across legacy fibre networks with multi-dimensional entangled states, bringing together the benefits of existing quantum communication with polarised photons with that of high-dimensional communication using patterns of light," said team leader and Wits professor Andrew Forbes.

Quantum entanglement has been explored extensively over the past few decades, with the most notable success story being increased communications security through Quantum Key Distribution (QKD).

This method uses qubits (2D quantum states) to transfer a limited amount of information across fibre links, using polarisation as a degree of freedom.

Another degree of freedom is the spatial pattern of light, but while this has the benefit of high-dimensional encoding, it requires custom optical fibre, making it unsuitable for existing networks.

"Our team found a new way to balance these two extremes, by combining polarisation qubits with high-dimensional spatial modes to create multi-dimensional hybrid quantum states," said Nape.

"The trick was to twist the one photon in polarisation and twist the other in pattern, forming spirally light that is entangled in two degrees of freedom," said Forbes.

Since the polarisation-entangled photon has only one pattern, it could be sent down the long-distance single-mode fibre, while the twisted-light photon could be measured without the fibre, accessing multi-dimensional twisted patterns in free space.

These twists carry orbital angular momentum, a property distinct from polarisation's spin angular momentum and a promising candidate for encoding information.
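
As a rough illustration of what a "hybrid" entangled state means here, the toy numpy sketch below (an assumption-laden cartoon, not the Wits team's actual state) pairs one photon's two-dimensional polarisation with a truncated four-dimensional set of twisted (OAM) patterns on the other photon, then uses a Schmidt decomposition to confirm the two degrees of freedom are entangled.

```python
import numpy as np

# Rows: polarisation of photon 1 (H, V). Columns: four hypothetical OAM
# "twist" values carried by photon 2 (e.g. l = -2, -1, +1, +2).
psi = np.zeros((2, 4))
psi[0, 2] = 1 / np.sqrt(2)   # |H> paired with twist l = +1
psi[1, 1] = 1 / np.sqrt(2)   # |V> paired with twist l = -1

# Schmidt (singular value) decomposition: more than one non-zero singular
# value means polarisation and spatial pattern are genuinely entangled.
schmidt_coeffs = np.linalg.svd(psi, compute_uv=False)
print(np.round(schmidt_coeffs, 3))   # [0.707 0.707] -> two entangled terms
```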

Go here to see the original:

Fibre on steroids Wits student uses quantum physics for massive improvements - MyBroadband

Stephen Hawking thought black holes were ‘hairy’. New study suggests he was right. – Big Think

What's it like on the outer edges of a black hole?

This mysterious area, known as the event horizon, is commonly thought of as a point of no return, past which nothing can escape. According to Einstein's theory of general relativity, black holes have smooth, neatly defined event horizons. On the outer side, physical information might be able to escape the black hole's gravitational pull, but once it crosses the event horizon, it's consumed.

"This was scientists' understanding for a long time," Niayesh Afshordi, a physics and astronomy professor at the University of Waterloo, told Daily Galaxy. The American theoretical physicist John Wheeler summed it up by saying: "Black holes have no hair." But then, as Afshordi noted, Stephen Hawking "used quantum mechanics to predict that quantum particles will slowly leak out of black holes, which we now call Hawking radiation."

In the 1970s, Stephen Hawking famously proposed that black holes aren't truly "black." In simplified terms, the theoretical physicist reasoned that, due to quantum mechanics, black holes actually emit tiny amounts of black-body radiation, and therefore have a non-zero temperature. So, contrary to Einstein's view that black holes are neatly defined and are not surrounded by loose materials, Hawking radiation suggests that black holes are actually surrounded by quantum "fuzz" that consists of particles that escape the gravitational pull.

"If the quantum fuzz responsible for Hawking radiation does exist around black holes, gravitational waves could bounce off of it, which would create smaller gravitational wave signals following the main gravitational collision event, similar to repeating echoes," Afshordi said.

A new study from Afshordi and co-author Jahed Abedi could provide evidence of these signals, called gravitational wave "echoes." Their analysis examined data collected by the LIGO and Virgo gravitational wave detectors, which made the first direct observation of gravitational waves in 2015 and, in 2017, detected waves from the collision of two distant neutron stars. The results, at least according to the researchers' interpretation, showed relatively small "echo" waves following the initial collision event.

"The time delay we expect (and observe) for our echoes ... can only be explained if some quantum structure sits just outside their event horizons," Afshordi told Live Science.

Scientists have long studied black holes in an effort to better understand fundamental physical laws of the universe, especially since the introduction of Hawking radiation. The idea highlighted the extent to which general relativity and quantum mechanics conflict with each other.

Everywhere, even in a vacuum such as the region around an event horizon, pairs of so-called "virtual particles" briefly pop in and out of existence. One particle in the pair has positive mass, the other negative. Hawking imagined a scenario in which a pair of particles emerged near the event horizon, and the positive particle had just enough energy to escape the black hole, while the negative one fell in.

Over time, this process would lead black holes to evaporate and vanish, given that the particle absorbed had a negative mass. It would also lead to some interesting paradoxes.

For example, quantum mechanics predicts that particles would be able to escape a black hole. This idea suggests that black holes eventually die, which would theoretically mean that the physical information within a black hole also dies. This violates a key tenet of quantum mechanics: that physical information can't be destroyed.

The exact nature of black holes remains a mystery. If confirmed, the recent discovery could help scientists better fuse these two models of the universe. Still, some researchers are skeptical of the recent findings.

"It is not the first claim of this nature coming from this group," Maximiliano Isi, an astrophysicist at MIT, told Live Science. "Unfortunately, other groups have been unable to reproduce their results, and not for lack of trying."

Isi noted that other papers examined the same data but failed to find echoes. Afshordi told the Daily Galaxy:

"Our results are still tentative because there is a very small chance that what we see is due to random noise in the detectors, but this chance becomes less likely as we find more examples. Now that scientists know what we're looking for, we can look for more examples, and have a much more robust confirmation of these signals. Such a confirmation would be the first direct probe of the quantum structure of space-time."

Original post:

Stephen Hawking thought black holes were 'hairy'. New study suggests he was right. - Big Think