What if you're living in a simulation, but there's no computer? – The Next Web

Swedish philosopher Nick Bostrom's simulation argument says we might be living in a computer-generated reality. Maybe he's right. There is currently no known method by which we could investigate the parameters of our programming, so it's up to each of us to decide whether to believe in The Matrix or not.

Perhaps it's a bit more nuanced than that, though. Maybe he's only half-wrong or half-right, depending on your philosophical view.

What if we are living in a simulation, but there's no computer (in the traditional sense) running it?

Here's the wackiest, most improbable theory I could cobble together from the weirdest papers I've ever covered. I call it: Simulation Argument: Live and Unplugged.

Philosophy!

Bostrom's hypothesis is actually quite complicated, but it can be explained rather easily. According to him, one or more of the following statements must be true:

1. Civilizations like ours almost always go extinct before becoming technologically capable of running ancestor simulations.
2. Civilizations that do become capable of running ancestor simulations choose not to run them.
3. We are almost certainly living in a computer simulation.

Bostrom's basically saying that humans in the future will probably run ancestry simulations on their fancy futuristic computers. Unless they can't, don't want to, or humanity gets snuffed out before they get the chance.

Physics!

As many people have pointed out, there's no way to do the science when it comes to the simulation hypothesis. Just like there's no way for the ants in an ant colony to understand why you've put them there, or what's going on beyond the glass, you and I can't slip the void to have a chat with the programmers responsible for coding us. We're constrained by physical rules, whether we understand them or not.

Quantum Physics!

Except, of course, in quantum mechanics. There, all the classical physics rules we spent millennia coming up with make almost no sense. In the reality you and I see every day, for example, an object can't be in two places at the same time. But the heart of quantum mechanics involves this very principle.

The universe at large appears to obey a different set of rules than the ones that directly apply to you and me in our everyday existence.

Astrophysics!

Scientists like to describe the universe in terms of rules because, from where we're sitting, we're basically looking at infinity from the perspective of an amoeba. There's no ground truth for us to compare notes against when we, for example, try to figure out how gravity works in and around a black hole. We use tools such as mathematics and the scientific method to determine what's really real.

So why are the rules different for people and stars than they are for singularities and wormholes? Or, perhaps more correctly: if the rules are the same for everything, why are they applied in different measures across different systems?

Wormholes, for example, could, in theory, allow objects to take shortcuts through physical spaces. And who knows what's actually on the other side of a black hole?

But you and I are stuck here with boring old gravity, only able to be in a single place at a time. Or are we?

Organic neural networks!

Humans, as a system, are actually incredibly connected. Not only are we tuned in somewhat to the machinations of our environment, but we can spread information about it across vast distances at incredible speeds. For example, no matter where you live, it's possible for you to know the weather in New York, Paris, and on Mars in real time.

What's important there isn't how technologically advanced the smartphone or today's modern computers have become, but that we continue to find ways to increase and evolve our ability to share knowledge and information. We're not on Mars, but we know what's going on almost as if we were.

And, what's even more impressive, we can transfer that information across iterations. A child born today doesn't have to discover how to make fire and then spend their entire life developing the combustion engine. It's already been done. They can look forward and develop something new. Elon Musk's already made a pretty good electric engine, so maybe our kids will figure out a fusion engine or something even better.

In AI terms, we're essentially training new models based on the output from old models. And that makes humanity itself a neural network. Each generation of humans adds selected information from the previous generation's output to its input cycle and then, stack by stack, develops new methods and novel inferences.
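If you want to see that analogy as a mechanism, here's a minimal Python sketch of "training each generation on the previous generation's output." Everything in it (the linear "learner," the 80/20 mixing weights, the noise level) is an illustrative assumption of mine, not something from the article:

```python
# Toy model: each generation learns partly from the previous generation's
# "teachings" and partly from fresh observation of the world.
import numpy as np

rng = np.random.default_rng(0)
world = lambda q: 3.0 * q + 1.0          # the "true" rule of the environment
x = rng.uniform(0, 1, 200)               # situations each generation encounters

def fit(x, y):
    """Least-squares line fit: a stand-in for a generation 'learning'."""
    a, b = np.polyfit(x, y, 1)
    return lambda q: a * q + b

# Generation 0 learns from raw, noisy experience alone.
model = fit(x, world(x) + rng.normal(0, 0.5, x.size))

# Later generations inherit the last generation's output, then refine it.
for _ in range(5):
    taught = model(x)                                   # inherited knowledge
    observed = world(x) + rng.normal(0, 0.5, x.size)    # fresh discovery
    model = fit(x, 0.8 * taught + 0.2 * observed)

print(round(model(0.5), 2))  # hovers near world(0.5) = 2.5 across generations
```

Each new "model" starts from what the last one produced rather than rediscovering the world from scratch, which is the fire-to-fusion point in miniature.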

The Multiverse!

Where it all comes together is in the wackiest idea of all: our universe is a neural network. And, because I'm writing this on a Friday, I'll even raise the stakes and say our universe is one of many universes that, together, make up a grand neural network.

That's a lot to unpack, but the gist involves starting with quantum mechanics and maintaining our assumptions as we zoom out beyond what we can observe.

We know that subatomic particles, in what we call the quantum realm, react differently when observed. That's a feature of the universe that seems incredibly significant for anything that might be considered an observer.

If you imagine all subatomic systems as neural networks, with observation being the sole catalyst for execution, you get an incredibly complex computation mechanism that's, theoretically, infinitely scalable.

Rather than assume, as we zoom out, that every system is an individual neural network, it makes more sense to imagine each system as a layer inside of a larger network.

And, once you reach the biggest self-contained system we can imagine, the whole universe, you arrive at a single necessary conclusion: if the universe is a neural network, its output must go somewhere.

That's where the multiverse comes in. We like to think of ourselves as characters in a computer simulation when we contemplate Bostrom's theory. But what if we're more like cameras? And not physical cameras like the one on your phone, but more like a "camera" in the sense developers mean when they set a POV for players in a video game.

If our job is to observe, it's unlikely we're the entities the universe-as-a-neural-network outputs to. It stands to reason that we'd be more likely to be considered tools or necessary byproducts in the grand scheme.

However, if we imagine our universe as simply another layer in an exponentially bigger neural network, it answers all the questions that derive from trying to shoehorn simulation theory into being a plausible explanation for our existence.

Most importantly: a naturally occurring, self-feeding neural network doesn't require a computer at all.

In fact, neural networks almost never involve what we usually think of as computers. Artificial neural networks have only been around for a matter of decades, but organic neural networks, a.k.a. brains, have been around for at least millions of years.

Wrap up this nonsense!

In conclusion, I think we can all agree that the most obvious answer to the question of life, the universe, and everything is the wackiest one. And, if you like wacky, you'll love my theory.

Here it is: our universe is part of a naturally occurring neural network spread across infinite or near-infinite universes. Each universe in this multiverse is a single layer designed to sift through data and produce a specific output. Within each of these layers are infinite or near-infinite systems that comprise networks within the network.

Information travels between the multiverse's layers through natural mechanisms. Perhaps wormholes are where data is received from other universes, and black holes are where it's sent for output extraction into other layers. Seems about as likely as us all living in a computer, right?

Behind the scenes, in the places where scientists are currently looking for all the missing dark matter in the universe, are the underlying physical mechanisms that invisibly stitch together our observations (classical reality) with whatever ultimately lies beyond the great final output layer.

My guess: there's nobody on the receiving end, just a rubber hose connecting output to input.

Published April 2, 2021 20:06 UTC


Fifteen PhD and three Post-Doc positions within the Collaborative Research Center SFB 1432 – Nature.com

Fifteen PhD and three Post-Doc positions within the Collaborative Research Center SFB 1432 (f/m/d) (PhD: 75%, E 13 TV-L; Post-Doc: full time, E 13 TV-L)

Reference number 2021/048. All positions are available immediately. PhD positions are initially limited to three years. Postdoctoral researcher positions have a minimum duration of two years and offer the possibility of a contract extension, subject to positive evaluation and further funding commitment. In principle, the Post-Doc positions can be divided into two half-time positions.

The University of Konstanz is one of eleven Universities of Excellence in Germany. Since 2007 it has been successful in the German Excellence Initiative and its follow-up programme, the Excellence Strategy.

The vision of the recently approved SFB 1432 is to investigate fluctuations of systems far from equilibrium or determined by strongly nonlinear interactions in a synergistic, interdisciplinary environment. It brings together different approaches from classical and quantum physics, enriched by mathematics, and aims for a deeper understanding of the respective domains as well as the quantum-to-classical transition. We will work out the nature of fluctuations and nonlinearities at the crossroads of soft matter, quantum transport, magnetism, nanoelectronics and mechanics, and ultrafast phenomena. This interplay is expected to lead to entirely novel physical phenomena and functionalities.

There are fifteen open PhD positions and three postdoctoral researcher positions in experimental physics, theoretical physics, physical chemistry, and mathematics. The positions are at the University of Konstanz and at CRC partners at the TU Munich and the University of Göttingen. For detailed information visit: https://www.sfb1432.uni-konstanz.de/open-positions/

Your responsibilities (depending on the project)

Development and implementation of:
- laser-based measurement techniques (exp.)
- high-resolution electronic transport measurement methods (exp.)
- microwave excitation and detection techniques (exp.)

Application of state-of-the-art:
- microscopy techniques (exp.)
- nanofabrication and ultrathin film materials deposition (exp.)
- low-temperature and high-vacuum facilities (exp.)

Developing methods of quantum transport and quantum optics (theor.)

Performing numerical simulations and optimization tasks (math.)

Your Competencies:
- Excellent academic record
- M.Sc. degree in physics or mathematics (for PhD positions)
- Excellent PhD thesis in physics or mathematics (for Post-Doc positions)
- Profound knowledge of relevant experimental or theoretical methods
- Research experience relevant for the respective project
- High motivation and strong interpersonal and influence skills
- Strong written and oral communication skills in English

We Offer:
- Excellent personal development and qualification opportunities
- Academic career support by the CRC-integrated graduate school
- Focused scientific training on topics of the CRC
- Seed funding for own research ideas
- Research collaborations with top-level research groups worldwide
- Professional infrastructure for conducting research
- Working in an interdisciplinary, young team of highly motivated scientists
- Family-friendly measures and a dual-career program

Questions can be directed to Prof. Wolfgang Belzig via email: sfb1432@uni-konstanz.de.

We look forward to receiving your application with a letter of motivation, your CV, transcripts, and letters of recommendation, combined into one single PDF document, by 30 April 2021 via our Online Application Portal.

The University of Konstanz is committed to ensuring an environment that provides equal opportunities and promotes diversity as well as a good balance between university and family life. As an equal opportunity employer, we strive to increase the number of women working in research and teaching. We also support working couples through our dual career programme (https://www.uni-konstanz.de/en/equalopportunities/family/dualcareer/). Persons with disabilities are explicitly encouraged to apply. They will be given preference if appropriately qualified (contact +49 7531 884016).


Where did the antimatter go? – The Express Tribune

KARACHI:

In the words of one researcher, science is not meant to cure us of mystery. Instead, he argues, it is supposed to reinvent and reinvigorate.

For us lay people, science remains a mystery sans reinvention. For reasons mundane and existential, most of us shy away from asking the fundamental questions. Perhaps it is due to our own shyness, even fear, that many of us hold such awe for those who dare go intellectually where the rest of us are unwilling or incapable of going.

Speaking of awe and mysteries, there are a handful of places around the world that evoke both while capturing our collective imagination. If I mention the European Organisation for Nuclear Research, it may or may not mean anything to most of us. But, if I use the acronym CERN, many may find their thoughts transported to the world of Dan Brown, of scientific intrigue and where the impossible becomes possible.

Speaking still of mysteries, and of CERN, there is the question of antimatter, a material many of us may confuse with the similar-sounding yet wildly different concepts of dark matter and dark energy. Beyond our own mysterious understanding of it lie the antimatter mysteries that the scientists who study it reinvent and reinvigorate in their attempts to understand it.

As luck, or perhaps fate, would have it, one of them is Pakistani. Who better to demystify both what we know at the moment about antimatter and what working at CERN entails?

Demystifying antimatter

For Muhammad Sameed, a life's yearning for understanding has led to a dream come true. The 32-year-old Islamabad native is among a mere handful of Pakistanis who would think themselves lucky for a chance to work at CERN. What is more, in Sameed's case, he is a physicist involved in studying antimatter particles at one of the world's premier physics research organisations.

At CERN, Sameed is part of the ALPHA experiment, an acronym that stands for the Antihydrogen Laser Physics Apparatus. Just recently, the ALPHA collaboration succeeded in cooling down antihydrogen particles, the simplest form of atomic antimatter, with laser light.

Speaking with The Express Tribune, the young scientist began by admitting that the wider scientific community had perhaps contributed to some of the public misunderstandings about antimatter. "I think it is us physicists' fault for giving such similar names to such different concepts," he said when asked about the difference between antimatter, dark matter and dark energy.

Taking his own crack at remedying that, he explained: "Before we explain antimatter, it is important to remember what matter is at the subatomic level."

"Most of us learn about atoms and how they are made of electrons, protons and neutrons in school. But that is where the general awareness ends," said Sameed. "If you look deeper, while electrons are fundamental particles (they belong to a family of particles called leptons), protons and neutrons are not. Those two are made up of two more kinds of fundamental particles: the quarks and the gluons, which bind them." He added that all matter that surrounds us and that we can interact with is made of particles from two families, namely quarks and leptons. Both families consist of six kinds of particles each.

According to Sameed, we have known all of this since the early part of the previous century, when quantum mechanics was developed. Where does antimatter come in? "It was first articulated in a theoretical study by physicist Paul Dirac," he shared.

"Dirac, while solving a quantum mechanics equation, arrived at two solutions, one positive and the other negative. The positive one corresponded to the electron. Dirac initially disregarded the negative solution, but later used it to hypothesise the existence of antielectrons," Sameed explained. He made that prediction in 1928, and just four years later, an American experiment actually discovered it.
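For readers who want the equation behind that story: the two solutions come from the relativistic energy-momentum relation that the Dirac equation is built to satisfy. This is standard textbook material, my gloss rather than something Sameed spells out in the interview:

\[
E = \pm\sqrt{p^2 c^2 + m^2 c^4}
\]

The positive branch describes the electron; the negative branch, which Dirac at first set aside, was reinterpreted as the antielectron, now called the positron.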

How was the discovery made, you wonder? "We have all these particles from outer space that pass through our planet," said Sameed. "If we apply a magnetic field to them, we can determine which direction these particles turn in. If electrons turn to one side, particles with the opposite charge would turn in the other direction."
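The physics behind that sorting trick is the textbook Lorentz force (again my addition for context, not the article's):

\[
\vec{F} = q\,\vec{v} \times \vec{B}, \qquad r = \frac{p}{|q|B}
\]

A particle of charge \(q\) moving perpendicular to a magnetic field \(B\) curves with radius \(r\); flipping the sign of the charge flips the direction of the curve, which is how Carl Anderson's 1932 cloud-chamber experiment told positrons apart from electrons.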

The physicist shared that since the discovery of the antielectron almost 90 years ago, scientists have discovered an antimatter counterpart to each regular matter particle we know of. "The story we physicists should be telling people is that not only is antimatter real, but that these particles are found in nature," he said. "The real question is this: we know from equations and experiments that when matter is produced, in a lab or after the Big Bang, an equal amount of antimatter is produced. So how is it that regular matter became so dominant in our universe, and why is there so little antimatter occurring in nature?"

According to Sameed, all research into antimatter at CERN and other organisations is focused on this question: What happened? Where did all the antimatter go? One proposed explanation, he shared, is that antimatter has some as-yet unknown property that converts it into regular matter in unequal amounts. "So by producing and trapping antimatter in a lab, we test it for various properties and whether those can explain what happened to most antimatter in nature. This has been the focus of research for the last 30 to 40 years."

Cooling with lasers

Explaining the recent ALPHA experiment with laser cooling, Sameed began with the choice of antihydrogen. "Hydrogen is the simplest atom we know of, with just one proton and one electron. Antihydrogen, similarly, is the simplest antiatom," he said.

"You take an antiproton, get an antielectron to orbit it, and you should have an antihydrogen atom. But this is easier said than done," he explained. "The main challenge with producing antihydrogen, or any other antimatter particle, is that if an antimatter particle comes in contact with a regular matter particle, both are annihilated. So in order to capture antimatter particles, you need to create a perfect vacuum to ensure they don't come in contact with matter particles."

Sameed added that the challenge isn't just limited to creating a vacuum either. "You need to make sure that the container being used is designed in a way to ensure antimatter particles don't come in contact with its walls. This is done using electromagnetic fields."

Explaining how scientists study antimatter, Sameed began with how regular matter particles would be studied. "Take a regular hydrogen atom which is in what we call a ground state, or normal state. If we shine a laser with a specific energy level onto that simple atom, its electron can jump into an excited state." He said that scientists have known about the effects of lasers on hydrogen atoms for a long time. "We know what frequencies can excite it. For our experiment, we thought to test the same on antihydrogen atoms. We wondered if it would react differently to regular hydrogen due to differences in energy levels or other properties. Perhaps our findings could help unravel some of the mystery around why there is so little antimatter in the universe."
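The "specific energy level" Sameed mentions follows from the textbook Bohr formula for hydrogen (my addition for context; the interview doesn't go into the numbers):

\[
E_n = -\frac{13.6\ \text{eV}}{n^2}, \qquad h\nu = E_{n'} - E_n
\]

A photon excites the atom only when its frequency \(\nu\) matches the gap between two energy levels, which is why each transition corresponds to a sharply defined laser frequency.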

According to Sameed, the effects of lasers on antihydrogen were first tested in 2017. "We shined a laser with the same frequency as the one that excites electrons in a regular hydrogen atom onto an antihydrogen atom. The results suggest the effect on antihydrogen was more or less the same," he said. "But one side effect that we uncovered at the time, and this had been predicted before the experiment, was that laser light can cool particles."

"Normally we use lasers to heat things, but if you shine a laser beam on an atom that is coming towards it, it has the effect of slowing down the atom," he added. "So our current experiment was the first time we tested this laser cooling principle on antimatter."
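The slowing works because each absorbed photon carries momentum. A rough sketch of the standard argument (mine, not Sameed's): a photon of wavelength \(\lambda\) carries momentum \(p = h/\lambda\), so each head-on absorption changes the velocity of an atom of mass \(m\) by

\[
\Delta v = \frac{h}{m\lambda}
\]

For a hydrogen-mass atom and an ultraviolet photon this works out to a few metres per second per absorption, so repeated absorption cycles can steadily bleed off thermal motion.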

Sameed further revealed that the next antimatter experiment being developed aims to study how antimatter behaves under the influence of gravity from regular matter. "We know how gravitational forces work between regular matter. Our equations suggest the same interaction would be true between two objects made of antimatter. But we want to find out what happens in terms of gravity when there is an interaction between matter and antimatter. At the moment, we don't even have any strong theories to predict what will happen."

Easier said than done

When Sameed explains the experiment, one may get the false impression that it is as easy as pointing a laser towards antimatter. But nothing could be further from the truth.

"For starters, the laser we use is not the one used in laser pointers that most people know of. Ours is an ultraviolet laser, which is invisible to the naked eye and has a much higher energy level. It is not available commercially and is very difficult to manufacture, so we have to develop it in-house at CERN," he said. The laser in question is also absorbed by air particles, Sameed added. Not only must the beam travel through vacuum, it must be produced and aimed in vacuum as well.

According to Sameed, firing a laser at antihydrogen is a very different challenge than firing it at regular hydrogen. "For normal hydrogen, we can shine the laser from any angle. But for antihydrogen, because the entire container is surrounded by special magnets to keep it from touching the walls, there is a very small access point for the laser itself."

Commercial possibilities

Any innovative technology can open up opportunities beyond what those who developed it are sometimes able to appreciate. Speaking on this aspect, Sameed said: "We scientists develop such innovative solutions to satisfy pure curiosity and answer the fundamental questions about physics. But all research produces technological byproducts, and sooner or later they trickle down to R&D companies and eventually to wider society."

Asked if he could foresee any antimatter applications beyond research, Sameed said there was one example already in medical imaging, even if wider uses were difficult to predict. "The PET, or Positron Emission Tomography, scanner. The positron is an antimatter particle. It is essentially an antielectron."

"Beyond that, CERN in general is responsible for developing a wide range of technologies. For instance, to trap antimatter particles, we need strong magnetic fields. To produce those, we have to develop superconducting magnets, which have various commercial uses as well. For instance, such magnets are used in hospitals in MRI scanners. A lot of technologies used in modern medicine were first developed in research centres like CERN," Sameed revealed.

Even on the software side, applications developed to track near-instant physical phenomena have been co-opted by some financial trading companies, which want to leverage the ability to process information in microseconds.

Sameed added that all CERN research is open source and accessible to anyone in the world. "You can contact our knowledge transfer centre and gain access to our research to use in reasonable ways."

The value of research

According to Sameed, the side effects of research for research's sake are undeniable. "We can see it over the last few centuries. The countries and regions that invested in fundamental research and technology are the ones that are now global powers," he said. "But we don't even need to look at it philosophically. Just look at the Internet."

Sameed pointed out that the World Wide Web was originally developed at CERN in the 1980s as a means to share information between physicists instantaneously. The intention at the time was only to aid research. But it was made open source, and the effect of that simple choice in reshaping life can be seen today.

Back to the future

How does one get to work at CERN? For Sameed, the yearning to become a scientist was sparked by a movie most of us watched and loved growing up. "I was five or six when I watched Back to the Future. I don't think I understood much about the movie at the time, but I remember finding the character of Doc Brown fascinating," he said. "I decided then that I would at least try to be a scientist when I grew up."

In terms of background, Sameed admits he had an ordinary middle-class upbringing. "But I am lucky that my parents tried to provide me the best education possible," he said. Still, CERN was beyond his dreams until the time he completed his A-Levels.

For Sameed, the path to a career in science opened with his elder brother's academic pursuits. "He went to the US on a scholarship to study chemical and biological engineering. That made me realise that I too could get an in-depth education in physics abroad. Like him, I applied for a scholarship and was able to study physics at Cornell University. That really set the stage for me."

The next chapter began when Sameed came across the example of another Pakistani who went on a summer internship to CERN. "He made me aware that there was a programme international students could apply for. I applied, with no hope of getting in, and originally I was rejected. But some time later, they contacted me again and told me that I was a better fit for another department, and here I am."

According to Sameed, he initially believed his recruiters at CERN had made an error. "But once I got here, I realised that I was no different from other students, from Europe or the US or elsewhere. We Pakistanis are in no way inferior to other countries in terms of talent. All that is missing is awareness."

When not busy participating in research, Sameed tries to do his part to raise awareness about opportunities for other Pakistanis at CERN. "Pakistan is now an associate member at CERN, and what that means is that any Pakistani can apply for any job here. This is not limited to just positions for scientists and engineers, and involves things like administration and legal affairs, etc."

Pakistan and CERN

According to Sameed, at the youth level there is more awareness about CERN in Pakistan now. "In fact, Pakistan has one of the highest numbers of applicants to CERN," he said. "But most of these are just student positions at the moment, and for higher-level positions we are still lagging behind. In terms of Pakistanis who are at CERN for the long term, there may be four or five."

Even so, Pakistanis appear to be highly valued at CERN, Sameed revealed. "Pakistani engineers have made a huge contribution to CERN and they are a very well respected community here. They always trust Pakistanis who make it to CERN to do a good job," he said. "There is also immense pride for Pakistanis at CERN due to Dr Abdus Salam's contributions, both to physics as a whole and to CERN during the time he spent here. One of the streets is even named after him," he added.

Asked what advice he had for other young Pakistanis who choose a similar path, the physicist pointed out that there are many opportunities available, not just at CERN but at other high-quality institutes. "Not only are Pakistanis eligible to apply, but they are looking for people from diverse backgrounds with talent," he said. "Even if you don't have the confidence, I would say apply. Because you might think you're not good enough, but the people who are recruiting may believe otherwise."

Sameed reiterated that when he joined CERN, he realised he was as good as anyone else. "My hope for the future is that in addition to engineers, more Pakistani physicists will join as well."


Quantum Theory May Twist Cause And Effect Into Loops, With Effect Causing The Cause – ScienceAlert

Causality is one of those difficult scientific topics that can easily stray into the realm of philosophy.

Science's relationship with the concept started out simply enough: an event causes another event later in time. That had been the standard understanding of the scientific community up until quantum mechanics was introduced.

Then, with the introduction of the famous "spooky action at a distance" that is a side effect of the concept of quantum entanglement, scientists began to question that simple interpretation of causality.

Now, researchers at the Université Libre de Bruxelles (ULB) and the University of Oxford have come up with a theory that further challenges that standard view of causality as a linear progress from cause to effect.

In their new theoretical structure, cause and effect can sometimes take place in cycles, with the effect actually causing the cause.

The quantum realm itself, as it is currently understood, is inherently messy.

There is no true understanding of things at that scale, which can be thought of better as a set of mathematical probabilities rather than actualities. These probabilities do not exactly lend themselves well to the idea of a definite cause and effect interaction between events either.

The researchers further muddied the waters using a tool known as a unitary transformation.

Simply put, a unitary transformation is a fudge used to solve some of the math that is necessary to understand complex quantum systems. Using it makes solving the famous Schrödinger equation achievable using real computers.

To give a more complete explanation requires delving a bit into the "space" that quantum mechanics operates in.

In quantum mechanics, time is simply another dimension that must be accounted for, similarly to how the usual three dimensions of what we think of as linear space are accounted for. Physicists usually use another mathematical tool called a Hamiltonian to solve Schrödinger's equation.

A Hamiltonian, though a mathematical concept, is often time-dependent. However, it is also the part of the equation that is changed when a unitary transformation is introduced.

As part of that action, it is possible to eliminate the time dependency of the Hamiltonian, to make it such that, instead of requiring time to go a certain direction (i.e., for action and reaction to take place linearly), the model turns more into a circle than a straight line, with action causing reaction and reaction causing action.
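For readers who want the mechanics, here is how a time-dependent unitary reshapes a Hamiltonian in standard quantum theory; this is a textbook identity, not notation taken from the paper being described. Under a transformation \(U(t)\), states map as \(|\tilde\psi\rangle = U|\psi\rangle\) and the Schrödinger equation keeps its form with the effective Hamiltonian

\[
\tilde{H} = U H U^{\dagger} + i\hbar\,\frac{\partial U}{\partial t}\,U^{\dagger}.
\]

Choosing \(U(t)\) so that the second term cancels the time dependence of \(U H U^{\dagger}\) leaves a time-independent \(\tilde{H}\), which is the sense in which the transformation can "eliminate" the Hamiltonian's dependence on time.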

If this isn't all confusing enough, there are some extremely difficult to conceive of implications of this model (and to be clear, from a macro level, it is just a model).

One important facet is that this finding has little to no relevance to everyday cause and effect.

The causes and effects that would be cyclical in this framework "are not local in spacetime", according to the press release from ULB, so they are unlikely to have any impact on day-to-day life.

Even if it doesn't have any everyday impact now, this framework could hint at a combined theory of quantum mechanics and general relativity that has been the most sought-after prize in physics for decades.

If that synthesis is ever fully realized, there will be more implications for everyday life than just the existential questions of whether we are actually in control of our own actions or not.

This article was originally published by Universe Today. Read the original article.


Quantum Mechanics, Free Will and the Game of Life – Scientific American

Before I get to the serious stuff, a quick story about John Conway, a.k.a. the mathematical magician. I met him in 1993 in Princeton while working on The Death of Proof. When I poked my head into his office, Conway was sitting with his back to me, staring at a computer. Hair tumbled down his back, and his sagging pants exposed his ass-cleft. His office overflowed with books, journals, food wrappers and paper polyhedrons, many dangling from the ceiling. When I tentatively announced myself, he yelled without turning, "What's your birthday!" "Uh, June 23," I said. "Year!" Conway shouted. "Year!" "1953," I replied. After a split second he blurted out, "Tuesday!" He tapped his keyboard, stared at the screen and exulted, "Yes!" Finally facing me, Conway explained that he belongs to a group of people who calculate the day of the week of any date, past or present, as quickly as possible. He, Conway informed me with a manic grin, is one of the world's fastest day-of-the-week calculators.

This encounter came back to me recently as I read a wonderful New York Times tribute to Conway, felled by COVID-19 last year at the age of 82. The Times focuses on the enduring influence of the Game of Life, a cellular automaton invented by Conway more than a half century ago. Scientific American's legendary math columnist Martin Gardner introduced the Game of Life, sometimes just called Life, to the world in 1970 after receiving a letter about it from Conway. The Times' riff on Life got me thinking anew about old riddles. Like: Does free will exist?

Some background. A cellular automaton is a grid of cells whose states depend on the states of neighboring cells, as determined by preset rules. The Game of Life is a two-dimensional cellular automaton with square cells that can be in one of two states, alive or dead (often represented by black or white). *A given cell's state depends on the state of its eight immediate neighbors. A dead cell comes to life if three of its neighbors are alive, and a live cell stays alive if two or three of its neighbors are alive. Otherwise, the cell dies or remains dead. So simple!* And yet Life, when the rules are applied over and over, ideally by a computer, yields endlessly varied patterns, including quasianimated clusters of cells known as longboats, gliders, spaceships and my favorite, Speed Demonoids.
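The rules really are simple enough to fit in a few lines of code. Here is a minimal sketch of one Life update step in Python with NumPy (my illustration; the wrap-around at the board's edges is a convenience choice of mine, not part of Conway's definition):

```python
import numpy as np

def life_step(grid):
    """One Game of Life update. grid is a 2D array of 0s (dead) and 1s (alive)."""
    # Count each cell's eight neighbors by summing shifted copies of the grid
    # (np.roll wraps around the edges, i.e., the board is a torus).
    neighbors = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # Birth: a dead cell with exactly 3 live neighbors comes alive.
    # Survival: a live cell with 2 or 3 live neighbors stays alive.
    return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(int)

# A "glider", one of the quasianimated patterns, crawling across the board.
board = np.zeros((8, 8), dtype=int)
board[1, 2] = board[2, 3] = board[3, 1] = board[3, 2] = board[3, 3] = 1
for _ in range(4):
    board = life_step(board)
print(board)  # after 4 steps the glider has moved one cell diagonally
```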

Like the Mandelbrot set, the famous fractal icon, the Game of Life inspired the fields of chaos and complexity, which are so similar that I lump them together under a single term: chaoplexity. Chaoplexologists assume that just as Life's odd digital fauna and flora stem from straightforward rules, so do many real-world things. With the help of computer simulations, chaoplexologists hoped to discover the rules, or algorithms, underpinning stuff that has long resisted conventional scientific analysis, from immune systems and brains to stock markets and whole civilizations. (The big data movement has recycled the hope, and hype, of chaoplexology.)

Of course, the Game of Life can be interpreted in different ways. It resembles a digital, animated Rorschach test upon which scholars project their biases. For example, philosopher Daniel Dennett, commenting on Conway's invention in the Times, points out that Life's higher-order patterns emerge from processes that are "completely unmysterious and explicable.... No psionic fields, no morphic resonances, no élan vital, no dualism."

Dennett's comment annoyed me at first; Life just gives him an excuse to reiterate his defense of hard-core materialism. But Life, Dennett goes on to say, shows that deterministic rules can generate "complex adaptively appropriate structures capable of action and control." Yes! I thought, my own bias coming into play. Dennett clearly means that deterministic processes can spawn phenomena that transcend determinism, like minds with free will.

Then another thought occurred to me, inspired by my ongoing effort to understand quantum mechanics. Conventional cellular automata, including Life, are strictly local, in the sense that what happens in one cell depends on what happens in its neighboring cells. But quantum mechanics suggests that nature seethes with nonlocal spooky actions. Remote, apparently disconnected things can be entangled, influencing each other in mysterious ways, as if via the filaments of ghostly, hyperdimensional cobwebs.

I wondered: Can cellular automata incorporate nonlocal entanglements? And if so, might these cellular automata provide even more support for free will than the Game of Life? Google gave me tentative answers. Yes, researchers have created many cellular automata that incorporate quantum effects, including nonlocality. There are even quantum versions of the Game of Life. But, predictably, experts disagree on whether nonlocal cellular automata bolster the case for free will.

One prominent explorer of quantum cellular automata, Nobel laureate Gerard 't Hooft, flatly rules out the possibility of free will. In his 2015 monograph The Cellular Automaton Interpretation of Quantum Mechanics, 't Hooft argues that some annoying features of quantum mechanics, notably its inability to specify precisely where an electron will be when we observe it, can be eliminated by reconfiguring the theory as a cellular automaton. 't Hooft's model assumes the existence of hidden variables underlying apparently random quantum behavior. His model leads him to a position called superdeterminism, which eliminates (as far as I can tell; 't Hooft's arguments aren't easy for me to follow) any hope for free will. Our fates are fixed from the big bang on.

Another authority on cellular automata, Stephen Wolfram, creator of Mathematica and other popular mathematical programs, proposes that free will is possible. In his 2002 opus A New Kind of Science, Wolfram argues that cellular automata can solve many scientific and philosophical puzzles, including free will. He notes that many cellular automata, including the Game of Life, display the property of computational irreducibility. That is, you cannot predict in advance what the cellular automata are going to do, you can only watch and see what happens. This unpredictability is compatible with free will, or so Wolfram suggests.

John Conway, Life's creator, also defended free will. In a 2009 paper, "The Strong Free Will Theorem," Conway and Simon Kochen argue that quantum mechanics, plus relativity, provide grounds for belief in free will. At the heart of their argument is a thought experiment in which physicists measure the spin of particles. According to Conway and Kochen, the physicists are free to measure the particles in dozens of ways, which are not dictated by the preceding state of the universe. Similarly, the particles' spin, as measured by the physicists, is not predetermined.

Their analysis leads Conway and Kochen to conclude that the physicists possess free will, and so do the particles they are measuring. "Our provocative ascription of free will to elementary particles is deliberate," Conway and Kochen write, "since our theorem asserts that if experimenters have a certain freedom, then particles have exactly the same kind of freedom." That last part, which ascribes free will to particles, threw me at first; it sounded too woo. Then I recalled that prominent scientists are advocating panpsychism, the idea that consciousness pervades all matter, not just brains. If we grant electrons consciousness, why not give them free will, too?

To be honest, I have a problem with all these treatments of free will, pro and con. They examine free will within the narrow, reductionistic framework of physics and mathematics, and they equate free will with randomness and unpredictability. My choices, at least important ones, are not random, and they are all too predictable, at least for those who know me.

For example, here I am arguing for free will once again. I do so not because physical processes in my brain compel me to do so. I defend free will because the idea of free will matters to me, and I want it to matter to others. I am committed to free will for philosophical, ethical and even political reasons. I believe, for example, that deterministic views of human nature make us more likely to accept sexism, racism and militarism. No physics model, not even the most complex, nonlocal cellular automaton, can capture my rational and, yes, emotional motives for believing in free will, but that doesn't mean these motives lack causal power.

Just as it cannot prove or disprove God's existence, science will never decisively confirm or deny free will. In fact, 't Hooft might be right. I might be just a mortal, 3-D, analog version of the Speed Demonoid, plodding from square to square, my thoughts and actions dictated by hidden, superdeterministic rules far beyond my ken. But I can't accept that grim worldview. Without free will, life lacks meaning, and hope. Especially in dark times, my faith in free will consoles me, and makes me feel less bullied by the deadly Game of Life.

Further Reading:

I obsess over free will and related riddles in my two most recent books: Pay Attention: Sex, Death, and Science, and Mind-Body Problems: Science, Subjectivity & Who We Really Are.

*Editor's note: The passage between the asterisks was revised because the description in the original version was incorrect.



Extracting information stored in 100,000 nuclear quantum bits – Advanced Science News

Researchers were able to detect a "needle" of highly fragile quantum information in a "haystack" of nuclei.

Image credit: Getty Images/iStockphoto

Researchers have found a way to use light and a single electron to communicate with a cloud of quantum bits and sense their behaviour, making it possible to detect a single quantum bit in a dense cloud.

The researchers, from the University of Cambridge, were able to inject a "needle" of highly fragile quantum information into a "haystack" of 100,000 nuclei. Using lasers to control an electron, the researchers could then use that electron to control the behaviour of the haystack, making it easier to find the needle. They were able to detect the needle with a precision of 1.9 parts per million: high enough to detect a single quantum bit in this large ensemble.

The technique makes it possible to send highly fragile quantum information optically to a nuclear system for storage, and to verify its imprint with minimal disturbance, an important step in the development of a quantum internet based on quantum light sources. The results are reported in the journal Nature Physics.

The first quantum computers, which will harness the strange behaviour of subatomic particles to far outperform even the most powerful supercomputers, are on the horizon. However, leveraging their full potential will require a way to network them: a quantum internet. Channels of light that transmit quantum information are promising candidates for a quantum internet, and currently there is no better quantum light source than the semiconductor quantum dot: tiny crystals that are essentially artificial atoms.

However, one thing stands in the way of quantum dots and a quantum internet: the ability to store quantum information temporarily at staging posts along the network.

"The solution to this problem is to store the fragile quantum information by hiding it in the cloud of 100,000 atomic nuclei that each quantum dot contains, like a needle in a haystack," said Professor Mete Atatüre from Cambridge's Cavendish Laboratory, who led the research. "But if we try to communicate with these nuclei like we communicate with bits, they tend to flip randomly, creating a noisy system."

The cloud of quantum bits contained in a quantum dot doesn't normally act in a collective state, making it a challenge to get information in or out of it. However, Atatüre and his colleagues showed in 2019 that when cooled to ultra-low temperatures, also using light, these nuclei can be made to do quantum dances in unison, significantly reducing the amount of noise in the system.

Now, they have shown another fundamental step towards storing and retrieving quantum information in the nuclei. By controlling the collective state of the 100,000 nuclei, they were able to detect the existence of the quantum information as a flipped quantum bit at an ultra-high precision of 1.9 parts per million: enough to see a single bit flip in the cloud of nuclei.

"Technically this is extremely demanding," said Atatüre, who is also a Fellow of St John's College. "We don't have a way of talking to the cloud and the cloud doesn't have a way of talking to us. But what we can talk to is an electron: we can communicate with it sort of like a dog that herds sheep."

Using the light from a laser, the researchers are able to communicate with an electron, which then communicates with the spins, or inherent angular momentum, of the nuclei.

By talking to the electron, the chaotic ensemble of spins starts to cool down and rally around the shepherding electron; out of this more ordered state, the electron can create spin waves in the nuclei.

"If we imagine our cloud of spins as a herd of 100,000 sheep moving randomly, one sheep suddenly changing direction is hard to see," said Atatüre. "But if the entire herd is moving as a well-defined wave, then a single sheep changing direction becomes highly noticeable."

In other words, injecting a spin wave made of a single nuclear spin flip into the ensemble makes it easier to detect a single nuclear spin flip among 100,000 nuclear spins.
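A toy calculation makes the sheep analogy concrete. This Python sketch is my illustration of the statistics involved, not the team's actual analysis, and the numbers are arbitrary:

```python
# Toy statistics: one flipped spin is invisible in a random ensemble
# but obvious in an ordered one.
import numpy as np

rng = np.random.default_rng(42)
N = 100_000

# Disordered "herd": each nuclear spin points up (+1) or down (-1) at random.
# The net magnetization fluctuates from shot to shot by about sqrt(N) ~ 316,
# so the change of 2 caused by flipping one spin is hopelessly buried.
shots = np.array([rng.choice([-1, 1], size=N).sum() for _ in range(5)])
print("random-ensemble totals:", shots)

# Ordered "herd": all spins aligned. Flipping a single spin shifts the total
# from N to N - 2, a definite, repeatable signature.
ordered = np.ones(N, dtype=int)
ordered[0] = -1
print("ordered-ensemble total:", ordered.sum())  # exactly 99998
```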

Using this technique, the researchers are able to send information to the quantum bit and listen in on what the spins are saying with minimal disturbance, down to the fundamental limit set by quantum mechanics.

"Having harnessed this control and sensing capability over this large ensemble of nuclei, our next step will be to demonstrate the storage and retrieval of an arbitrary quantum bit from the nuclear spin register," said co-first author Daniel Jackson, a PhD student at the Cavendish Laboratory.

"This step will complete a quantum memory connected to light, a major building block on the road to realizing the quantum internet," said co-first author Dorian Gangloff, a Research Fellow at St John's College.

Besides its potential usage for a future quantum internet, the technique could also be useful in the development of solid-state quantum computing.

Reference: D. M. Jackson et al., "Quantum sensing of a coherent single spin excitation in a nuclear ensemble," Nature Physics (2021). DOI: 10.1038/s41567-020-01161-4


The search for dark matter gets a speed boost from quantum technology – The Conversation US

Nearly a century after dark matter was first proposed to explain the motion of galaxy clusters, physicists still have no idea what it's made of.

Researchers around the world have built dozens of detectors in hopes of discovering dark matter. As a graduate student, I helped design and operate one of these detectors, aptly named HAYSTAC. But despite decades of experimental effort, scientists have yet to identify the dark matter particle.

Now, the search for dark matter has received an unlikely assist from technology used in quantum computing research. In a new paper published in the journal Nature, my colleagues on the HAYSTAC team and I describe how we used a bit of quantum trickery to double the rate at which our detector can search for dark matter. Our result adds a much-needed speed boost to the hunt for this mysterious particle.

There is compelling evidence from astrophysics and cosmology that an unknown substance called dark matter constitutes more than 80% of the matter in the universe. Theoretical physicists have proposed dozens of new fundamental particles that could explain dark matter. But to determine which, if any, of these theories is correct, researchers need to build different detectors to test each one.

One prominent theory proposes that dark matter is made of as-yet hypothetical particles called axions that collectively behave like an invisible wave oscillating at a very specific frequency through the cosmos. Axion detectors, including HAYSTAC, work something like radio receivers, but instead of converting radio waves to sound waves, they aim to convert axion waves into electromagnetic waves. Specifically, axion detectors measure two quantities called electromagnetic field quadratures. These quadratures are two distinct kinds of oscillation in the electromagnetic wave that would be produced if axions exist.

The main challenge in the search for axions is that nobody knows the frequency of the hypothetical axion wave. Imagine you're in an unfamiliar city searching for a particular radio station by working your way through the FM band one frequency at a time. Axion hunters do much the same thing: They tune their detectors over a wide range of frequencies in discrete steps. Each step can cover only a very small range of possible axion frequencies. This small range is the bandwidth of the detector.

Tuning a radio typically involves pausing for a few seconds at each step to see if you've found the station you're looking for. That's harder if the signal is weak and there's a lot of static. An axion signal in even the most sensitive detectors would be extraordinarily faint compared with static from random electromagnetic fluctuations, which physicists call noise. The more noise there is, the longer the detector must sit at each tuning step to listen for an axion signal.

Unfortunately, researchers can't count on picking up the axion broadcast after a few dozen turns of the radio dial. An FM radio tunes from only 88 to 108 megahertz (one megahertz is one million hertz). The axion frequency, by contrast, may be anywhere between 300 hertz and 300 billion hertz. At the rate today's detectors are going, finding the axion or proving that it doesn't exist could take more than 10,000 years.
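A back-of-envelope version of that estimate shows the arithmetic. The bandwidth and dwell time below are assumptions I picked to illustrate the order of magnitude, not HAYSTAC's actual operating parameters:

```python
# Illustrative scan-time estimate for an axion search.
f_low, f_high = 300.0, 300e9   # possible axion frequencies (Hz), per the article
bandwidth = 1_000.0            # assumed detector bandwidth per tuning step (Hz)
dwell = 1_000.0                # assumed seconds of listening at each step

steps = (f_high - f_low) / bandwidth
years = steps * dwell / (3600 * 24 * 365)
print(f"{steps:.2e} tuning steps -> about {years:,.0f} years")
# ~3e8 steps -> about 9,513 years at these numbers, the same order as the
# article's estimate. Doubling the bandwidth halves the total scan time,
# which is exactly why the squeezing result described below matters.
```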

On the HAYSTAC team, we don't have that kind of patience. So in 2012 we set out to speed up the axion search by doing everything possible to reduce noise. But by 2017 we found ourselves running up against a fundamental minimum noise limit because of a law of quantum physics known as the uncertainty principle.

The uncertainty principle states that it is impossible to know the exact values of certain physical quantities simultaneously; for instance, you can't know both the position and the momentum of a particle at the same time. Recall that axion detectors search for the axion by measuring two quadratures, those specific kinds of electromagnetic field oscillations. The uncertainty principle prohibits precise knowledge of both quadratures by adding a minimum amount of noise to the quadrature oscillations.

In conventional axion detectors, the quantum noise from the uncertainty principle obscures both quadratures equally. This noise can't be eliminated, but with the right tools it can be controlled. Our team worked out a way to shuffle around the quantum noise in the HAYSTAC detector, reducing its effect on one quadrature while increasing its effect on the other. This noise manipulation technique is called quantum squeezing.
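In standard quantum-optics notation (my gloss; the article keeps things verbal), with the convention \([X_1, X_2] = i/2\) the two quadratures obey an uncertainty relation, and squeezing with parameter \(r\) redistributes the noise between them without violating it:

\[
\Delta X_1\,\Delta X_2 \ge \frac{1}{4}, \qquad
\Delta X_1 \to \frac{e^{-r}}{2}, \quad \Delta X_2 \to \frac{e^{+r}}{2}
\]

The product of the two uncertainties stays at the quantum limit; squeezing just moves noise out of the quadrature you care about and into the one you don't.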

In an effort led by graduate students Kelly Backes and Dan Palken, the HAYSTAC team took on the challenge of implementing squeezing in our detector, using superconducting circuit technology borrowed from quantum computing research. General-purpose quantum computers remain a long way off, but our new paper shows that this squeezing technology can immediately speed up the search for dark matter.

Our team succeeded in squeezing the noise in the HAYSTAC detector. But how did we use this to speed up the axion search?

Quantum squeezing doesn't reduce the noise uniformly across the axion detector bandwidth. Instead, it has the largest effect at the edges. Imagine you tune your radio to 88.3 megahertz, but the station you want is actually at 88.1. With quantum squeezing, you would be able to hear your favorite song playing one station away.

In the world of radio broadcasting this would be a recipe for disaster, because different stations would interfere with one another. But with only one dark matter signal to look for, a wider bandwidth allows physicists to search faster by covering more frequencies at once. In our latest result we used squeezing to double the bandwidth of HAYSTAC, allowing us to search for axions twice as fast as we could before.

Quantum squeezing alone isn't enough to scan through every possible axion frequency in a reasonable time. But doubling the scan rate is a big step in the right direction, and we believe further improvements to our quantum squeezing system may enable us to scan 10 times faster.

Nobody knows whether axions exist or whether they will resolve the mystery of dark matter. But thanks to this unexpected application of quantum technology, we're one step closer to answering these questions.


Microsofts Big Win in Quantum Computing Was an Error After All – WIRED

Whatever happened, the Majorana drama is a setback for Microsoft's ambitions to compete in quantum computing. Leading computing companies say the technology will define the future by enabling new breakthroughs in science and engineering.

Quantum computers are built from devices called qubits that encode 1s and 0s of data but can also use a quantum state called a superposition to perform math tricks not possible for the bits in a conventional computer. The main challenge to commercializing that idea is that quantum states are delicate and easily quashed by thermal or electromagnetic noise, making qubits error-prone.
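The two ingredients in that paragraph, superposition and its fragility, fit in a few lines of NumPy. This is a cartoon of a single qubit under depolarizing noise (our illustration, not a model of any company's hardware).

```python
import numpy as np

# A qubit is a two-amplitude state vector; unlike a bit, it can sit in a
# superposition of 0 and 1 until it is measured.
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
psi = (ket0 + ket1) / np.sqrt(2)   # equal superposition

print(np.abs(psi) ** 2)            # measurement probabilities: [0.5, 0.5]

# Thermal or electromagnetic noise degrades the quantum state: a depolarizing
# channel of strength p drags it toward the useless maximally mixed state.
rho = np.outer(psi, psi.conj())
p = 0.2
rho_noisy = (1 - p) * rho + p * np.eye(2) / 2
print(np.abs(rho_noisy[0, 1]))     # the coherence (off-diagonal term) has shrunk
```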

Google, IBM, and Intel have all shown off prototype quantum processors with around 50 qubits, and companies including Goldman Sachs and Merck are testing the technology. But thousands or millions of qubits are likely required for useful work. Much of a quantum computer's power would probably have to be dedicated to correcting its own glitches.

Microsoft has taken a different approach, claiming qubits based on Majorana particles will be more scalable, allowing it to leap ahead. But after more than a decade of work, it does not have a single qubit.

From the fuller data, there's no doubt that there's no Majorana.

Sergey Frolov, University of Pittsburgh

Majorana fermions are named after Italian physicist Ettore Majorana, who hypothesized in 1937 that particles should exist with the odd property of being their own antiparticles. Not long after, he boarded a ship and was never seen again. Physicists wouldn't report a good glimpse of one of his eponymous particles until the next millennium, in Kouwenhoven's lab.

Microsoft got interested in Majoranas after company researchers in 2004 approached tech strategy chief Craig Mundie and said they had a way to solve one problem holding back quantum computers: qubits' flakiness.

The researchers seized on theoretical physics papers suggesting a way to build qubits that would make them more dependable. These so-called topological qubits would be built around unusual particles, of which Majorana particles are one example, that can pop into existence in clumps of electrons inside certain materials at very low temperatures.

Microsoft created a new team of physicists and mathematicians to flesh out the theory and practice of topological quantum computing, centered on an outpost in Santa Barbara, California, christened Station Q. They collaborated with and funded leading experimental physicists hunting for the particles needed to build this new form of qubit.

Kouwenhoven, in Delft, was one of the physicists who got Microsoft's backing. His 2012 paper reporting signatures of Majorana particles inside nanowires started chatter about a future Nobel prize for proving the elusive particles' existence. In 2016, Microsoft stepped up its investment, and the hype.


Kouwenhoven and another leading physicist, Charles Marcus, at the University of Copenhagen were hired as corporate Majorana hunters. The plan was to first detect the particles and then invent more complex devices that could control them and function as qubits. Todd Holmdahl, who previously led hardware for Microsoft's lucrative Xbox games console, took over as leader of the topological quantum computing project. Early in 2018, he told Barron's he would have a topological qubit by the end of the year. The now-disputed paper appeared a month later.

While Microsoft sought Majoranas, competitors working on established qubit technologies reported steady progress. In 2019, Google announced it had reached a milestone called quantum supremacy, showing that a chip with 53 qubits could perform a statistical calculation in minutes that would take a supercomputer millennia. Soon after, Microsoft appeared to hedge its quantum bet, announcing it would offer access to quantum hardware from other companies via its cloud service Azure. The Wall Street Journal reported that Holmdahl left the project that year after missing an internal deadline.

Microsoft has been quieter about its expected pace of progress on quantum hardware since Holmdahl's departure. Competitors in quantum computing continue to tout hardware advances and urge software developers to access prototypes over the internet, but none appear close to creating a quantum computer ready for prime time.

Continue reading here:

Microsofts Big Win in Quantum Computing Was an Error After All - WIRED

New EU Consortium shaping the future of Quantum Computing USA – PRNewswire

Europe has always been excellent in academic research, but over the past few decades, commercializing research projects has been slow compared to the international competition. This is starting to change with quantum technologies. In one of the largest efforts in Europe and worldwide, Germany announced €2 billion of funding for quantum programs in June 2020, of which €120 million is invested in this current round of research grants.

Today, IQM announced that a quantum project consortium including Europe's leading startups (ParityQC, IQM), industry leaders (Infineon Technologies), research centers (Forschungszentrum Jülich), supercomputing centers (Leibniz Supercomputing Centre), and academia (Freie Universität Berlin) has been awarded €12.4 million by the German Ministry of Education and Research (BMBF) (announcement in German).

The scope of the project is to accelerate commercialization through an innovative co-design concept. The project focuses on application-specific quantum processors, which have the potential to create a fast lane to quantum advantage. The digital-analog concept used to operate the processors will further lay the foundation for commercially viable quantum computers. The project will run for four years and aims to develop a 54-qubit quantum processor.

The project is intended to support the European FET Flagship project EU OpenSuperQ, announced in 2018, which aims at designing, building, and operating a quantum information processing system of up to 100 qubits. By deploying digital-analog quantum computing, this consortium adds a new angle to the OpenSuperQ project and widens its scope. With efforts from Munich, Berlin, and Jülich, as well as ParityQC from Austria, the project builds bridges and integrates seamlessly into the European quantum landscape.

"The grant from the Federal Ministry of Education and Research of Germanyis a huge recognition of our unique co-design approach for quantum computers. Last year when we established our office in Munich, this was one of our key objectives. The concept allows us to become a system integrator for full-stack quantum computers by bringing together all the relevant players. As Europe's leading startup in quantum technologies, this gives us confidence to further invest in Germany and other European countries" said Dr. Jan Goetz, CEO of IQM Quantum Computers.

As a European technology leader, Germany is taking several steps to lead the quantum technology race. An important part of such leadership is bringing together European startups, industry, research centers, and academic partners. This project will give the quantum landscape in Germany an accelerated push and will create a vibrant quantum ecosystem in the region for the future.

Additional Quotes:

"DAQC is an important project for Germany and Europe. It enables us to take a leading role in the area of quantum technologies. It also allows us to bring quantum computing into one of the prime academic supercomputing centres to more effectively work on the important integration of high-performance computing and quantum computing. We are looking forward to a successful collaboration," said Prof. DrMartinSchulz, Member of the Board of Directors, Leibniz Supercomputing Centre (LRZ).

"The path towards scalable and fully programmable quantum computing will be the parallelizability of gates and building with reduced complexity in order to ensure manageable qubit control. Our ParityQC architecture is the blueprint for a fully parallelizable quantum computer, which comes with the associated ParityOS operating system. With the team of extraordinary members of the DAQC consortium this will allow us to tackle the most pressing and complex industry-relevant optimization problems." saidMagdalena Hauser & Wolfgang Lechner, CEOs & Co-founder ParityQC

"We are looking forward to exploring and realizing a tight connection between hardware and applications, and having DAQC quantum computers as a compatible alternative within the OpenSuperQ laboratory. Collaborations like this across different states, and including both public and private partners, have the right momentum to move quantum computing in Germany forward." saidProf. Frank Wilhelm-Mauch, Director, Institute for Quantum Computing Analytics, Forschungszentrum Jlich

"At Infineon, we are looking forward to collaborating with top-class scientists and leading start-ups in the field of quantum computing in Europe. We must act now if we in Germany and Europe do not want to become solely dependent on American or Asian know-how in this future technology area. We are very glad to be part of this highly innovative project and happy to contribute with our expertise in scaling and manufacturing processes." saidDr.Sebastian Luber, Senior Director Technology & Innovation, Infineon Technologies AG

"This is a hugely exciting project. It is a chance of Europe and Germany to catch up in the development of superconducting quantum computers. I am looking forward to adventures on understanding how such machines can be certified in their precise functioning." said Prof.Jens Eisert, Professor of Quantum Physics, Freie Universitt Berlin

About IQM Quantum Computers:

IQM is the European leader in superconducting quantum computers, headquartered in Espoo, Finland. Since its inception in 2018, IQM has grown to 80+ employees and has established a subsidiary in Munich, Germany, to lead its co-design approach. IQM delivers on-premises quantum computers for research laboratories and supercomputing centers and provides complete access to its hardware. For industrial customers, IQM delivers quantum advantage through a unique application-specific co-design approach. IQM has raised €71 million from VC firms and public grants and is building Finland's first quantum computer.

For more information, visit http://www.meetiqm.com.

Registered offices:

IQM Finland Oy, Keilaranta 19, 02150 Espoo, Finland (www.meetiqm.com)

IQM Germany GmbH, Nymphenburgerstr. 86, 80636 München, Germany

Media Contact: Raghunath Koduvayur, Head of Marketing and Communications, [emailprotected], +358504876509

Photo - https://mma.prnewswire.com/media/1437806/IQM_Quantum_Computers_Founders.jpg Photo - https://mma.prnewswire.com/media/1437807/IQM_Quantum_computer_design.jpg Logo - https://mma.prnewswire.com/media/1121497/IQM_Logo.jpg

SOURCE IQM Finland Oy

http://meetiqm.com/contact/

More here:

New EU Consortium shaping the future of Quantum Computing USA - PRNewswire

Mutually unbiased bases and symmetric informationally complete measurements in Bell experiments – Science Advances

INTRODUCTION

Measurements are crucial and compelling processes at the heart of quantum physics. Quantum measurements, in their diverse shapes and forms, constitute the bridge between the abstract formulation of quantum theory and concrete data produced in laboratories. Crucially, the quantum formalism of measurement processes gives rise to experimental statistics that elude classical models. Therefore, appropriate measurements are indispensable for harvesting and revealing quantum phenomena. Sophisticated manipulation of quantum measurements is both at the heart of the most well-known features of quantum theory such as contextuality (1) and the violation of Bell inequalities (2) as well as its most groundbreaking applications such as quantum cryptography (3) and quantum computation (4). In the broad landscape of quantum measurements (5), certain classes of measurements are outstanding because of their breadth of relevance in foundations of quantum theory and applications in quantum information processing.

Two widely celebrated, intensively studied, and broadly useful classes of measurements are known as mutually unbiased bases (MUBs) and symmetric informationally complete measurements (SICs). Two measurements are said to be mutually unbiased if by preparing any eigenstate of the first measurement and then performing the second measurement, one finds that all outcomes are equally likely (6). A typical example of MUBs corresponds to measuring two perpendicular components of the polarization of a photon. A SIC is a quantum measurement with the largest number of possible outcomes such that all measurement operators have equal magnitude overlaps (7, 8). Thus, the former is a relationship between two different measurements, whereas the latter is a relationship within a single measurement. Since MUBs and SICs are both conceptually natural, elegant, and (as it turns out) practically important classes of measurements, they are often studied in the same context (9–14). Let us briefly review their importance to foundational and applied aspects of quantum theory.

MUBs are central to the concept of complementarity in quantum theory, i.e., how the knowledge of one quantity limits (or erases) the knowledge of another quantity [see, e.g., (15) for a review of MUBs]. This is often highlighted through variants of the famous Stern-Gerlach experiment in which different Pauli observables are applied to a qubit. For instance, after first measuring (say) $\sigma_x$, we know whether our system points up or down the x axis. If we then measure $\sigma_z$, our knowledge of the outcome of yet another $\sigma_x$ measurement is entirely erased since $\sigma_z$ and $\sigma_x$ are MUBs. This phenomenon leads to an inherent uncertainty for the outcomes of MUB measurements on all quantum states, which can be formalized in terms of entropic quantities, leading to so-called entropic uncertainty relations. It is then natural that MUBs give rise to the strongest entropic uncertainties in quantum theory (16). Moreover, MUBs play a prominent role in quantum cryptography, where they are used in many of the most well-known quantum key distribution protocols (17–21) and in secret sharing protocols (22–24). Their appeal to cryptography stems from the idea that eavesdroppers who measure an eigenstate of one basis in another basis unbiased to it obtain no useful information, while they also induce a large disturbance in the state that allows their presence to be detected. Furthermore, complete (i.e., largest possible in a given dimension) sets of MUBs are tomographically complete, and their symmetric properties make them pivotal for quantum state tomography (25, 26). In addition, MUBs are useful for a range of other problems such as quantum random access coding (27–31), quantum error correction (32, 33), and entanglement detection (34). This broad scope of relevance has motivated much effort toward determining the largest number of MUBs that exist in general Hilbert space dimensions (15).

The motivations behind the study of SICs are quite similar to the ones discussed for MUBs. It has been shown that SICs are natural measurements for quantum state tomography (35), which has also prompted several experimental realizations of SICs (36–38). In addition, some protocols for quantum key distribution derive their success directly from the defining properties of SICs (39, 40), which have also been experimentally demonstrated (41). Furthermore, a key property of SICs is that they have the largest number of outcomes possible while still being extremal measurements, i.e., they cannot be simulated by stochastically implementing other measurements. This gives SICs a central role in a range of applications, which include random number generation from entangled qubits (42), certification of nonprojective measurements (43–46), semidevice-independent self-testing (45), and entanglement detection (47, 48). Moreover, SICs have a key role in quantum Bayesianism (49), and they exhibit interesting connections to several areas of mathematics, for instance, Lie and Jordan algebras (50) and algebraic number theory (51). Because of their broad interest, much research effort has been directed toward proving the existence of SICs in all Hilbert space dimensions (presently known, at least, up to dimension 193) (7, 8, 52–55). See, e.g., (54) for a recent review of SICs.

In this work, we broadly investigate MUBs and SICs in the context of Bell nonlocality experiments. In these experiments, two separated observers perform measurements on entangled quantum systems that can produce nonlocal correlations that elude any local hidden variable model (56). In recent years, Bell inequalities have played a key role in the rise of device-independent quantum information processing where they are used to certify properties of quantum systems. Naturally, certification of a physical property can be achieved under different assumptions of varying strength. Device-independent approaches offer the strongest form of certification since the only assumptions made are space-like separation and the validity of quantum theory. The advent of device-independent quantum information processing has revived interest in Bell inequalities, as these can now be tailored to the purpose of certifying useful resources for quantum information processing. The primary focus of such certification has been on various types of entangled states (57). However, quantum measurements are equally important building blocks for quantum information processing. Nevertheless, our understanding of which arrangements of high-dimensional measurements can be certified in a device-independent manner is highly limited. We speak of arrangements of measurements because for a single measurement (acting on a quantum system with no internal structure), no interesting property can be certified. The task becomes nontrivial when at least two measurements are present and we can certify the relation between them. The simplest approach relies on combining known self-testing results for two-qubit systems, which allows us to certify high-dimensional measurements constructed out of qubit building blocks (58, 59). Alternatively, device-independent certification of high-dimensional structures can be proven from scratch, but to the best of our knowledge, only two results of this type have been proven: (i) a triple of MUBs in dimension three (60) and (ii) the measurements conjectured to be optimal for the Collins-Gisin-Linden-Massar-Popescu Bell inequality (the former is a single result, while the latter is a family parameterized by the dimension $d \geq 2$) (61). None of these results can be used to certify MUBs in dimension $d \geq 4$.

Since mutual unbiasedness and symmetric informational completeness are natural and broadly important concepts in quantum theory, they are prime candidates of interest for such certification in general Hilbert space dimensions. This challenge is increasingly relevant because of the broader experimental advances toward high-dimensional systems along the frontier of quantum information theory. This is also reflected in the fact that recent experimental implementations of MUBs and SICs can go well beyond the few lowest Hilbert space dimensions (38, 41, 62).

Focusing on mutual unbiasedness and symmetric informational completeness, we solve the above challenges. To this end, we first construct Bell inequalities that are maximally violated using a maximally entangled state of local dimension d and, respectively, a pair of d-dimensional MUBs and a d-dimensional SIC. In the case of MUBs, we show that the maximal quantum violation of the proposed Bell inequality device independently certifies that the measurements satisfy an operational definition of mutual unbiasedness as well as that the shared state is essentially a maximally entangled state of local dimension d. Similarly, in the case of SICs, we find that the maximal quantum violation device independently certifies that the measurements satisfy an analogous operational definition of symmetric informational completeness. Moreover, we also show that our Bell inequalities are useful in two practically relevant tasks. For the case of MUBs, we consider a scheme for device-independent quantum key distribution and prove a key rate of log d bits, which is optimal for any protocol that extracts key from a d-outcome measurement. For SICs, we construct a scheme for device-independent random number generation. For two-dimensional SICs, we obtain the largest amount of randomness possible for any protocol based on qubits. For three-dimensional SICs, we obtain more randomness than can be obtained in any protocol based on projective measurements and quantum systems of dimension up to seven. For low dimensions, we numerically show that both protocols are robust to noise, which is imperative to any experiment. The implementation of these two protocols involves performing a Bell-type experiment, estimating the outcome statistics and computing the resulting Bell inequality violation. The efficiency and security of the protocol is then deduced only from the observed Bell inequality violation, i.e., it does not require a complete characterization of the devices. Device-independent protocols can, in principle, be implemented on any experimental platform suitable for Bell nonlocality experiments, such as entangled spins (63), entangled photons (64, 65), and entangled atoms (66).

The task of finding Bell inequalities that are maximally violated by MUBs for $d \geq 3$ has been attempted several times (67–70) but with limited success. The only convincing candidate is the inequality corresponding to d = 3 studied in (67), and even then, there is only numerical evidence (no analytical proof is known). Some progress has been made in (60), which considers the case of prime d and proposes a family of Bell inequalities maximally violated by a specific set of d MUBs in dimension d. These inequalities, however, have two drawbacks: (i) there is no generalization to the case of nonprime d, and (ii) even for the case of prime d, we have no characterization of the quantum realizations that achieve the maximal violation.

In this work, we present a family of Bell inequalities in which the maximal quantum violation is achieved with a maximally entangled state and any pair of d-dimensional MUBs. These Bell inequalities have been constructed so that their maximal quantum violation can be computed analytically, which then enables us to obtain a detailed characterization of the optimal realizations. As a result we find a previously unidentified, intermediate form of device-independent certification.

We formally define a pair of MUBs as two orthonormal bases on a d-dimensional Hilbert space $\mathbb{C}^d$, namely, $\{|e_j\rangle\}_{j=1}^{d}$ and $\{|f_k\rangle\}_{k=1}^{d}$, with the property that

$$|\langle e_j | f_k \rangle|^2 = \frac{1}{d} \tag{1}$$

for all j and k. The constant on the right-hand side is merely a consequence of the two bases being normalized. To this end, consider a bipartite Bell scenario parameterized by an integer $d \geq 2$. Alice randomly receives one of $d^2$ possible inputs labeled by $x \equiv x_1 x_2 \in [d]^2$ (where $[s] \equiv \{1, \ldots, s\}$) and produces a ternary output labeled by $a \in \{1, 2, \perp\}$. Bob receives a random binary input labeled by $y \in \{1, 2\}$ and produces a d-valued output labeled by $b \in [d]$. The joint probability distribution in the Bell scenario is denoted by $p(a, b \mid x, y)$, and the scenario is illustrated in Fig. 1.

Fig. 1. Alice receives one of $d^2$ inputs and produces a ternary output, while Bob receives a binary input and produces a d-valued output.
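For a concrete instance of Eq. 1, the computational basis and the discrete Fourier basis are mutually unbiased in any dimension. The NumPy check below is our illustration, not part of the paper.

```python
import numpy as np

# Two explicit MUBs in dimension d: the computational basis {|e_j>} and the
# Fourier basis {|f_k>}. Eq. 1 requires |<e_j|f_k>|^2 = 1/d for all j, k.
d = 5
E = np.eye(d)                                   # columns are |e_j>
w = np.exp(2j * np.pi / d)
F = np.array([[w ** (j * k) for k in range(d)]
              for j in range(d)]) / np.sqrt(d)  # columns are |f_k>

overlaps = np.abs(E.conj().T @ F) ** 2
print(np.allclose(overlaps, 1 / d))             # True: the pair is mutually unbiased
```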

To make our choice of Bell functional transparent, we will phrase it as a game in which Alice and Bob collectively win or lose points. If Alice outputs $a = \perp$, then no points will be won or lost. If she outputs $a \in \{1, 2\}$, then points will be won or lost if $b = x_y$. More specifically, Alice and Bob win a point if $a = y$ and lose a point if $a = \bar{y}$, where the bar flips the value of $y \in \{1, 2\}$. This leads to the score

$$R_d^{\text{MUB}} \equiv \sum_{x,y} \left[ p(a = y, b = x_y \mid x, y) - p(a = \bar{y}, b = x_y \mid x, y) \right] \tag{2}$$

where the sum goes over $x = x_1 x_2 \in [d]^2$ and $y \in \{1, 2\}$.

At this point, the outcome $a = \perp$ might seem artificial, so let us show why it plays a crucial role in the construction of the game. To this end, we use intuition based on the hypothetical case in which Alice and Bob share a maximally entangled state

$$|\phi_d^{\max}\rangle = \frac{1}{\sqrt{d}} \sum_{k=1}^{d} |k, k\rangle \tag{3}$$

The reason that we consider the maximally entangled state is that we aim to tailor the Bell inequalities so that this state is optimal. Then, we would like to ensure that Alice, via her measurement and for her outcomes $a \in \{1, 2\}$, remotely prepares Bob in a pure state. This would allow Bob to create stronger correlations than in the case of Alice remotely preparing his system in a mixed state. Hence, this corresponds to Alice's outcomes $a \in \{1, 2\}$ being represented by rank-one projectors. Since the subsystems of $|\phi_d^{\max}\rangle$ are maximally mixed, it follows that $p(a = 1 \mid x) = p(a = 2 \mid x) = 1/d$ for all $x$. Thus, we want to motivate Alice to use a strategy in which she outputs $a = \perp$ with probability $p(a = \perp \mid x) = 1 - 2/d$. Our tool for this purpose is to introduce a penalty. Specifically, whenever Alice decides to output $a \in \{1, 2\}$, she is penalized by losing $\lambda_d$ points. Thus, the total score (the Bell functional) reads

$$S_d^{\text{MUB}} \equiv R_d^{\text{MUB}} - \lambda_d \sum_x \left[ p(a = 1 \mid x) + p(a = 2 \mid x) \right] \tag{4}$$

Now, outputting $a \in \{1, 2\}$ not only contributes toward $R_d^{\text{MUB}}$ but also incurs the penalty $\lambda_d$. Therefore, we expect to see a trade-off between $\lambda_d$ and the rate at which Alice outputs $a = \perp$. We must suitably choose $\lambda_d$ such that Alice's best strategy is to output $a = \perp$ with (on average over $x$) the desired probability $p(a = \perp \mid x) = 1 - 2/d$. This accounts for the intuition that leads us to the following Bell inequalities for MUBs.

Theorem II.1 (Bell inequalities for MUBs). The Bell functional $S_d^{\text{MUB}}$ in Eq. 4 with

$$\lambda_d = \frac{1}{2}\sqrt{\frac{d-1}{d}} \tag{5}$$

obeys the tight local bound

$$S_d^{\text{MUB}} \overset{\text{LHV}}{\leq} 2(d-1)\left(1 - \frac{1}{2}\sqrt{\frac{d-1}{d}}\right) \tag{6}$$

and the quantum bound

$$S_d^{\text{MUB}} \overset{\mathcal{Q}}{\leq} \sqrt{d(d-1)} \tag{7}$$

Moreover, the quantum bound can be saturated by sharing a maximally entangled state of local dimension d and Bob performing measurements in any two MUBs.

Proof. A complete proof is presented in the Supplementary Materials (section S1A). The essential ingredient for obtaining the bound in Eq. 7 is the Cauchy-Schwarz inequality. Furthermore, for local models, by inspecting the symmetries of the Bell functional $S_d^{\text{MUB}}$, one finds that the local bound can be attained by Bob always outputting b = 1. This greatly simplifies the evaluation of the bound in Eq. 6.

To see that the bound in Eq. 7 can be saturated in quantum theory, let us evaluate the Bell functional for a particular quantum realization. Let $\rho$ be the shared state, let $\{P_{x_1}\}_{x_1=1}^{d}$ and $\{Q_{x_2}\}_{x_2=1}^{d}$ be the measurement operators of Bob corresponding to y = 1 and y = 2, respectively, and let $A_x$ be the observable of Alice defined as the difference between Alice's outcome-one and outcome-two measurement operators, i.e., $A_x = A_x^1 - A_x^2$. Then, the Bell functional reads

$$S_d^{\text{MUB}} = \sum_x \operatorname{tr}\!\left[\rho\left(A_x \otimes (P_{x_1} - Q_{x_2}) - \lambda_d\, (A_x^1 + A_x^2) \otimes \mathbb{1}\right)\right] \tag{8}$$

Now, we choose the maximally entangled state of local dimension d, i.e., $\rho = |\phi_d^{\max}\rangle\langle\phi_d^{\max}|$, and define Bob's measurements as rank-one projectors $P_{x_1} = |e_{x_1}\rangle\langle e_{x_1}|$ and $Q_{x_2} = |f_{x_2}\rangle\langle f_{x_2}|$, which correspond to MUBs, i.e., $|\langle e_{x_1}|f_{x_2}\rangle|^2 = 1/d$. Last, we choose Alice's observables as $A_x = \sqrt{d/(d-1)}\,(P_{x_1} - Q_{x_2})^{T}$, where the prefactor ensures the correct normalization and $T$ denotes the transpose in the standard basis. Note that $A_x$ is a rank-two operator; the corresponding measurement operator $A_x^1$ ($A_x^2$) is a rank-one projector onto the eigenvector of $A_x$ associated with the positive (negative) eigenvalue. Since the subsystems of $|\phi_d^{\max}\rangle$ are maximally mixed, this implies $\langle\phi_d^{\max}|(A_x^1 + A_x^2) \otimes \mathbb{1}|\phi_d^{\max}\rangle = 2/d$. Inserting all this into the above quantum model and exploiting the fact that for any linear operator $O$ we have $O \otimes \mathbb{1}\,|\phi_d^{\max}\rangle = \mathbb{1} \otimes O^{T}\,|\phi_d^{\max}\rangle$, we straightforwardly saturate the bound in Eq. 7.
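This construction is easy to check numerically. The sketch below (ours; the computational/Fourier pair is one concrete choice of MUBs) evaluates Eq. 8 for d = 3 and reproduces the quantum bound $\sqrt{d(d-1)} = \sqrt{6} \approx 2.449$ of Eq. 7.

```python
import numpy as np
from itertools import product

d = 3
lam = 0.5 * np.sqrt((d - 1) / d)  # penalty weight of Eq. 5

# Maximally entangled state of local dimension d (Eq. 3)
phi = sum(np.kron(np.eye(d)[k], np.eye(d)[k]) for k in range(d)) / np.sqrt(d)

# Bob's MUBs: computational basis (y = 1) and Fourier basis (y = 2)
w = np.exp(2j * np.pi / d)
F = np.array([[w ** (j * k) for k in range(d)] for j in range(d)]) / np.sqrt(d)
P = [np.diag(np.eye(d)[x]) for x in range(d)]
Q = [np.outer(F[:, x], F[:, x].conj()) for x in range(d)]

S = 0.0
for x1, x2 in product(range(d), repeat=2):
    Ax = np.sqrt(d / (d - 1)) * (P[x1] - Q[x2]).T   # Alice's observable
    _, vecs = np.linalg.eigh(Ax)                    # eigenvalues sorted ascending
    A1 = np.outer(vecs[:, -1], vecs[:, -1].conj())  # outcome-1 projector (eigenvalue +1)
    A2 = np.outer(vecs[:, 0], vecs[:, 0].conj())    # outcome-2 projector (eigenvalue -1)
    term = np.kron(A1 - A2, P[x1] - Q[x2]) - lam * np.kron(A1 + A2, np.eye(d))
    S += np.real(phi.conj() @ term @ phi)

print(S, np.sqrt(d * (d - 1)))  # both ~ 2.4495
```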

We remark that for the case of d = 2, one could also choose $\lambda_2 = 0$ and retain the property that qubit MUBs are optimal. In this case, the marginal term is not necessary because in the optimal realization, Alice never outputs $\perp$. Then, the quantum bound becomes $2\sqrt{2}$, and the local bound becomes 2. The resulting Bell inequality resembles the Clauser-Horne-Shimony-Holt (CHSH) inequality (71) not only because it gives the same local and quantum values but also because the optimal realizations coincide. More specifically, the measurements of Bob are precisely the optimal CHSH measurements, whereas the four measurements of Alice correspond to two pairs of optimal CHSH measurements.

Theorem II.1 establishes that a pair of MUBs of any dimension can generate a maximal quantum violation in a Bell inequality test. We now turn to the converse matter, namely, that of device-independent certification. Specifically, given that we observe the maximal quantum violation, i.e., equality in Eq. 7, what can be said about the shared state and the measurements? Since the measurement operators can only be characterized on the support of the state, to simplify the notation, let us assume that the marginal states of Alice and Bob are full rank. (Note that this is not a physical assumption but a mathematical convention that simplifies the notation in the rest of this work. Whenever the marginal state is not full rank, the local Hilbert space naturally decomposes as a direct sum of two terms, where the state is only supported on one of them. The measurement operators can only be characterized on the support of the state, and that is precisely what we achieve. This convention allows us to only write out the part that can be characterized and leave out the rest.)

Theorem II.2 (Device-independent certification). The maximal quantum value of the Bell functional $S_d^{\text{MUB}}$ in Eq. 4 implies that (i) there exist local isometries that allow Alice and Bob to extract a maximally entangled state of local dimension d, and (ii) if the marginal state of Bob is full rank, the two d-outcome measurements that he performs satisfy the relations

$$P_a = d\, P_a Q_b P_a \quad \text{and} \quad Q_b = d\, Q_b P_a Q_b \tag{9}$$

for all a and b.

Proof. The proof is detailed in the Supplementary Materials (section S1A). Here, we briefly summarize the part concerning Bob's measurements. Since the Cauchy-Schwarz inequality is the main tool for proving the quantum bound in Eq. 7, saturating the bound implies that the Cauchy-Schwarz inequality is also saturated. This allows us to deduce that the measurements of Bob are projective, and moreover, we obtain the following optimality condition

$$A_x \otimes \mathbb{1}\,|\psi\rangle = \sqrt{\frac{d}{d-1}}\; \mathbb{1} \otimes (P_{x_1} - Q_{x_2})\,|\psi\rangle \tag{10}$$

for all $x_1, x_2 \in [d]$, where the factor $\sqrt{d/(d-1)}$ can be regarded as a normalization. Since we do not attempt to certify the measurements of Alice, we can, without loss of generality, assume that they are projective. This implies that the spectrum of $A_x$ only contains $\{+1, -1, 0\}$ and therefore $(A_x)^3 = A_x$. This allows us to obtain a relation that only contains Bob's operators. Tracing out Alice's system and subsequently eliminating the marginal state of Bob (it is assumed to be full rank) leads to

$$P_{x_1} - Q_{x_2} = \frac{d}{d-1}\,(P_{x_1} - Q_{x_2})^3 \tag{11}$$

Expanding this relation and then using projectivity and the completeness of measurements, one recovers the result in Eq. 9.

We have shown that observing the maximal quantum value of $S_d^{\text{MUB}}$ implies that the measurements of Bob satisfy the relations given in Eq. 9. It is natural to ask whether a stronger conclusion can be derived, but the answer turns out to be negative. In the Supplementary Materials (section S1B), we show that any pair of d-outcome measurements (acting on a finite-dimensional Hilbert space) satisfying the relations in Eq. 9 is capable of generating the maximal Bell inequality violation. For d = 2, 3, the relations given in Eq. 9 imply that the unknown measurements correspond to a direct sum of MUBs (see section S2C), and since, in these dimensions, there exists only a single pair of MUBs (up to unitaries and complex conjugation), our results imply a self-testing statement of the usual kind. However, since, in higher dimensions, not all pairs of MUBs are equivalent (72), our certification statement is less informative than the usual formulation of self-testing. In other words, our inequalities allow us to self-test the quantum state, but we cannot completely determine the measurements [see (73, 74) for related results]. Note that we could also conduct a device-independent characterization of the measurements of Alice. Equation 61 from the Supplementary Materials enables us to relate the measurements of Alice to the measurements of Bob, which we have already characterized. However, since we do not expect the observables of Alice to satisfy any simple algebraic relations and since they are not directly relevant for the scope of this work (namely, MUBs and SICs), we do not pursue this direction.

The certification provided in Theorem II.2 turns out to be sufficient to determine all the probabilities $p(a, b \mid x, y)$ that arise in the Bell experiment (see section S1C), which means that the maximal quantum value of $S_d^{\text{MUB}}$ is achieved by a single probability distribution. Because of the existence of inequivalent pairs of MUBs in certain dimensions (e.g., for d = 4), this constitutes the first example of an extremal point of the quantum set that admits inequivalent quantum realizations. Recall that the notion of equivalence that we use is precisely the one that appears in the context of self-testing, i.e., we allow for additional degrees of freedom, local isometries, and a transposition.

It is important to understand the relation between the condition given in Eq. 9 and the concept of MUBs. Naturally, if $\{P_a\}_{a=1}^{d}$ and $\{Q_b\}_{b=1}^{d}$ are d-dimensional MUBs, then the relations (Eq. 9) are satisfied. However, there exist solutions to Eq. 9 that are neither MUBs nor direct sums thereof. While, as mentioned above, for d = 2, 3, one can show that any measurements satisfying the relations (Eq. 9) must correspond to a direct sum of MUBs, this is not true in general. For d = 4, 5, we have found explicit examples of measurement operators satisfying Eq. 9 that cannot be written as a direct sum of MUBs. They cannot even be transformed into a pair of MUBs via a completely positive unital map (see section S2 for details). These results beg the crucial question: How should one interpret the condition given in Eq. 9?

To answer this question, we resort to an operational formulation of what it means for two measurements to be mutually unbiased. An operational approach must rely on observable quantities (i.e., probabilities), as opposed to algebraic relations between vectors or operators. This notion, which we refer to as mutually unbiased measurements (MUMs), was recently formalized by Tasca et al. (75). Note that in what follows, we use the term eigenvector to refer to eigenvectors corresponding to nonzero eigenvalues.

Definition II.3 (MUMs). We say that two n-outcome measurements $\{P_a\}_{a=1}^{n}$ and $\{Q_b\}_{b=1}^{n}$ are mutually unbiased if they are projective and the following implications hold

$$P_a|\psi\rangle = |\psi\rangle \;\Rightarrow\; \langle\psi|Q_b|\psi\rangle = \frac{1}{n} \quad \text{and} \quad Q_b|\psi\rangle = |\psi\rangle \;\Rightarrow\; \langle\psi|P_a|\psi\rangle = \frac{1}{n} \tag{12}$$

for all a and b. That is, two projective measurements are mutually unbiased if the eigenvectors of one measurement give rise to a uniform outcome distribution for the other measurement.

Note that this definition precisely captures the intuition behind MUBs without the need to specify the dimension of the underlying Hilbert space. MUMs admit a simple algebraic characterization.

Theorem II.4. Two n-outcome measurements $\{P_a\}_{a=1}^{n}$ and $\{Q_b\}_{b=1}^{n}$ are mutually unbiased if and only if

$$P_a = n\, P_a Q_b P_a \quad \text{and} \quad Q_b = n\, Q_b P_a Q_b \tag{13}$$

for all a and b.

Proof. Let us first assume that the algebraic relations hold. By summing over the middle index, one finds that both measurements are projective. Moreover, if $|\psi\rangle$ is an eigenvector of $P_a$, then

$$\langle\psi|Q_b|\psi\rangle = \langle\psi|P_a Q_b P_a|\psi\rangle = \frac{1}{n}\langle\psi|P_a|\psi\rangle = \frac{1}{n}$$

By symmetry, the analogous property holds if $|\psi\rangle$ is an eigenvector of $Q_b$. Conversely, let us show that MUMs must satisfy the above algebraic relations. Since $\sum_a P_a = \mathbb{1}$, we can choose an orthonormal basis of the Hilbert space composed only of the eigenvectors of the measurement operators. Let $\{|e_j^a\rangle\}_{a,j}$ be an orthonormal basis, where $a \in [n]$ tells us which projector the eigenvector corresponds to and $j$ labels the eigenvectors within a fixed projector (if $P_a$ has finite rank, then $j \in [\operatorname{tr} P_a]$; otherwise, $j \in \mathbb{N}$). By construction, for such a basis, we have

$P_{a'}|e_j^{a}\rangle = \delta_{aa'}|e_j^{a}\rangle$. To show that $P_a = n P_a Q_b P_a$, it suffices to show that the two operators have the same coefficients in this basis. Since

$$\langle e_j^{a}|\, n P_{a'} Q_b P_{a'}\,|e_k^{a}\rangle = n\,\delta_{aa'}\langle e_j^{a}|Q_b|e_k^{a}\rangle \tag{14}$$

$$\langle e_j^{a}|P_{a'}|e_k^{a}\rangle = \delta_{aa'}\delta_{jk} \tag{15}$$

it suffices to show that $n\langle e_j^{a}|Q_b|e_k^{a}\rangle = \delta_{jk}$. For $j = k$, this is a direct consequence of the definition in Eq. 12. To prove the other case, define $|\psi\rangle = (|e_j^{a}\rangle + e^{i\theta}|e_k^{a}\rangle)/\sqrt{2}$ for $\theta \in [0, 2\pi)$. Since $P_a|\psi\rangle = |\psi\rangle$, we have $\langle\psi|Q_b|\psi\rangle = 1/n$. Writing this equality out gives

$$\frac{1}{n} = \frac{1}{2}\left(\frac{2}{n} + e^{i\theta}\langle e_j^{a}|Q_b|e_k^{a}\rangle + e^{-i\theta}\langle e_k^{a}|Q_b|e_j^{a}\rangle\right) \tag{16}$$

Choosing $\theta = 0$ implies that the real part of $\langle e_j^{a}|Q_b|e_k^{a}\rangle$ vanishes, while $\theta = \pi/2$ implies that the imaginary part vanishes. Proving the relation $Q_b = n Q_b P_a Q_b$ proceeds in an analogous fashion.

Theorem II.4 implies that the maximal violation of the Bell inequality for MUBs certifies precisely the fact that Bob's measurements are mutually unbiased. To provide further evidence that MUMs constitute the correct device-independent generalization of MUBs, we give two specific situations in which the two objects behave in the same manner.

Maassen and Uffink (16) considered a scenario in which two measurements (with a finite number of outcomes) are performed on an unknown state. Their famous uncertainty relation provides a state-independent lower bound on the sum of the Shannon entropies of the resulting distributions. While the original result only applies to rank-one projective measurements, a generalization to nonprojective measurements reads (76)

$$H(P) + H(Q) \geq -\log c \tag{17}$$

where $H$ denotes the Shannon entropy and $c = \max_{a,b} \|\sqrt{P_a}\sqrt{Q_b}\|^2$, where $\|\cdot\|$ is the operator norm. If we restrict ourselves to rank-one projective measurements on a Hilbert space of dimension d, then one finds that the largest uncertainty, corresponding to c = 1/d, is obtained only by MUBs. It turns out that precisely the same value is achieved by any pair of MUMs with d outcomes, regardless of the dimension of the Hilbert space:

$$c = \max_{a,b} \|\sqrt{P_a}\sqrt{Q_b}\|^2 = \max_{a,b} \|P_a Q_b\|^2 = \max_{a,b} \|P_a Q_b P_a\| = \max_a \|P_a/d\| = \frac{1}{d} \tag{18}$$
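Both statements are easy to verify for a concrete pair of MUBs. The sketch below (ours) checks the MUM relations of Eq. 13 and the uncertainty constant c = 1/d for the computational/Fourier pair in dimension 4.

```python
import numpy as np
from itertools import product

d = 4
w = np.exp(2j * np.pi / d)
F = np.array([[w ** (j * k) for k in range(d)] for j in range(d)]) / np.sqrt(d)
P = [np.diag(np.eye(d)[a]) for a in range(d)]              # computational basis
Q = [np.outer(F[:, b], F[:, b].conj()) for b in range(d)]  # Fourier basis

# MUM relations of Eq. 13: P_a = d P_a Q_b P_a and Q_b = d Q_b P_a Q_b
ok = all(np.allclose(P[a], d * P[a] @ Q[b] @ P[a]) and
         np.allclose(Q[b], d * Q[b] @ P[a] @ Q[b])
         for a, b in product(range(d), repeat=2))
print(ok)  # True

# Uncertainty constant c = max_{a,b} ||P_a Q_b||^2 (operator norm)
c = max(np.linalg.norm(P[a] @ Q[b], 2) ** 2
        for a, b in product(range(d), repeat=2))
print(c, 1 / d)  # both 0.25
```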

A closely related concept is that of measurement incompatibility, which captures the phenomenon that two measurements cannot be performed simultaneously on a single copy of a system. The extent to which two measurements are incompatible can be quantified, e.g., by so-called incompatibility robustness measures (77). In the Supplementary Materials (section S2D), we show that according to these measures, MUMs are exactly as incompatible as MUBs. Moreover, we can show that for the so-called generalized incompatibility robustness (78), MUMs are among the most incompatible pairs of d-outcome measurements.

The fact that the maximal quantum violation of the Bell inequalities introduced above requires a maximally entangled state and MUMs and, moreover, that it is achieved by a unique probability distribution suggests that these inequalities might be useful for device-independent quantum information processing. In the task of quantum key distribution (3, 17, 18), Alice and Bob aim to establish a shared dataset (a key) that is secure against a malicious eavesdropper. Such a task requires the use of incompatible measurements, and MUBs in dimension d = 2 constitute the most popular choice. Since, in the ideal case, the measurement outcomes of Alice and Bob that contribute to the key should be perfectly correlated, most protocols are based on maximally entangled states. In the device-independent approach to quantum key distribution, the amount of key and its security is deduced from the observed Bell inequality violation.

We present a proof-of-principle application to device-independent quantum key distribution based on the quantum nonlocality witnessed through the Bell functional in Eq. 4. In the ideal case, Alice and Bob follow the strategy that gives them the maximal violation, i.e., they share a maximally entangled state of local dimension d and Bob measures two MUBs. To generate the key, we provide Alice with an extra setting that produces outcomes that are perfectly correlated with the outcomes of the first setting of Bob. This will be the only pair of settings from which the raw key will be extracted, and let us denote them by x = x* and y = y* = 1. In most rounds of the experiment, Alice and Bob choose these settings and therefore contribute toward the raw key. However, to ensure security, a small number of rounds is used to evaluate the Bell functional. In these rounds, which are chosen at random, Alice and Bob randomly choose their measurement settings. Once the experiment is complete, the resulting value of the Bell functional is used to infer the amount of secure raw key shared between Alice and Bob. The raw key can then be turned into the final key by standard classical postprocessing. For simplicity, we consider only individual attacks, and moreover, we focus on the limit of asymptotically many rounds in which fluctuations due to finite statistics can be neglected.

The key rate, K, can be lower bounded by (79)

$$K \geq -\log(P_g) - H(B_{y^*} \mid A_{x^*}) \tag{19}$$

where $P_g$ denotes the highest probability with which the eavesdropper can correctly guess Bob's outcome for the setting $y^*$, given the observed Bell inequality value, and $H(\cdot \mid \cdot)$ denotes the conditional Shannon entropy. The guessing probability $P_g$ is defined as

$$P_g \equiv \sup \left\{ \sum_{c=1}^{d} \langle\psi_{ABE}|\, \mathbb{1} \otimes P_c \otimes E_c\,|\psi_{ABE}\rangle \right\} \tag{20}$$

where $\{E_c\}_{c=1}^{d}$ is the measurement used by the eavesdropper to produce her guess, the expression inside the curly braces is the probability that her outcome is the same as Bob's for a particular realization, and the supremum is taken over all quantum realizations (the tripartite state and the measurements of all three parties) compatible with the observed Bell inequality value.

Let us first focus on the key rate in a noise-free scenario, i.e., in a scenario in which $S_d^{\text{MUB}}$ attains its maximal value. Then, one straightforwardly arrives at the following result.

Theorem II.5 (Device-independent key rate). In the noiseless case, the quantum key distribution protocol based on $S_d^{\text{MUB}}$ achieves the key rate of

$$K = \log d \tag{21}$$

for any integer $d \geq 2$.

Proof. In the noiseless case, Alice and Bob observe exactly the correlations predicted by the ideal setup. In this case, the outcomes for the settings $(x^*, y^*)$ are perfectly correlated, which implies that $H(B_{y^*} \mid A_{x^*}) = 0$. Therefore, the only nontrivial task is to bound the guessing probability.

Since the actions of the eavesdropper commute with the actions of Alice and Bob, we can assume that she performs her measurement first. If the probability of the eavesdropper observing outcome $c \in [d]$, which we denote by $p(c)$, is nonzero, then the (normalized) state of Alice and Bob conditioned on the eavesdropper observing that outcome is given by

$$\rho_{AB}^{(c)} = \frac{1}{p(c)}\,\operatorname{tr}_E\!\left[(\mathbb{1} \otimes \mathbb{1} \otimes E_c)\,|\psi_{ABE}\rangle\langle\psi_{ABE}|\right] \tag{22}$$

Now, Alice and Bob share one of the postmeasurement states $\rho_{AB}^{(c)}$, and when they perform their Bell inequality test, they will obtain different distributions depending on $c$, which we write as $p_c(a, b \mid x, y)$. However, since the statistics achieve the maximal quantum value of $S_d^{\text{MUB}}$ and we have previously shown that the maximal quantum value is achieved by a single probability point, all the probability distributions $p_c(a, b \mid x, y)$ must be the same. Moreover, we have shown that for this probability point, the marginal distribution of outcomes on Bob's side is uniform over $[d]$ for both inputs. This implies that

$$P_g = \sum_{c=1}^{d} p(c)\, p_c(b = c \mid y = 1) = \frac{1}{d} \tag{23}$$

because $p_c(b = c \mid y = 1) = p(b = c \mid y = 1) = \frac{1}{d}$ for all $c$.

We remark that the argument above is a direct consequence of a more general result, which states that if a bipartite probability distribution is a nonlocal extremal point of the quantum set, then no external party can be correlated with the outcomes (80). The obtained key rate is the largest possible for general setups in which the key is generated from a d-outcome measurement. In addition, the key rate is optimal for all protocols based on a pair of entangled d-dimensional systems subject to projective measurements. This follows from the fact that projective measurements on $\mathbb{C}^d$ cannot have more than d outcomes. It has recently been shown that the same amount of randomness can be generated using a modified version of the Collins-Gisin-Linden-Massar-Popescu inequalities (61), but note that the measurements used there do not correspond to MUBs (except for the special case of d = 2).

Let us now depart from the noise-free case and estimate the key rate in the presence of noise. To ensure that both the guessing probability and the conditional Shannon entropy can be computed in terms of a single noise parameter, we have to introduce an explicit noise model. We use the standard approach in which the measurements remain unchanged, while the maximally entangled state is replaced with an isotropic state given by

$$\rho_v = v\,|\phi_d^{\max}\rangle\langle\phi_d^{\max}| + \frac{1 - v}{d^2}\,\mathbb{1} \tag{24}$$

where $v \in [0, 1]$ is the visibility of the state. Using this state and the ideal measurements for Alice and Bob, the relation between $v$ and $S_d^{\text{MUB}}$ can be easily derived from Eq. 8, namely

$$v = \frac{1}{2}\left(1 + \frac{S_d^{\text{MUB}}}{\sqrt{d(d-1)}}\right) \tag{25}$$
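Equation 25 can be verified directly. The sketch below (ours, reusing the same d = 3 realization as in the check of Eq. 8 above) builds the Bell operator, evaluates it on isotropic states of several visibilities, and inverts Eq. 25 to recover v.

```python
import numpy as np
from itertools import product

d = 3
lam = 0.5 * np.sqrt((d - 1) / d)
w = np.exp(2j * np.pi / d)
F = np.array([[w ** (j * k) for k in range(d)] for j in range(d)]) / np.sqrt(d)
P = [np.diag(np.eye(d)[x]) for x in range(d)]
Q = [np.outer(F[:, x], F[:, x].conj()) for x in range(d)]
phi = sum(np.kron(np.eye(d)[k], np.eye(d)[k]) for k in range(d)) / np.sqrt(d)

# Bell operator W such that S_d^MUB = tr(rho W)
W = np.zeros((d * d, d * d), complex)
for x1, x2 in product(range(d), repeat=2):
    Ax = np.sqrt(d / (d - 1)) * (P[x1] - Q[x2]).T
    _, vecs = np.linalg.eigh(Ax)
    A1 = np.outer(vecs[:, -1], vecs[:, -1].conj())
    A2 = np.outer(vecs[:, 0], vecs[:, 0].conj())
    W += np.kron(A1 - A2, P[x1] - Q[x2]) - lam * np.kron(A1 + A2, np.eye(d))

for v in (1.0, 0.9, 0.8):
    rho = v * np.outer(phi, phi) + (1 - v) / d**2 * np.eye(d * d)
    S = np.real(np.trace(rho @ W))
    print(v, 0.5 * (1 + S / np.sqrt(d * (d - 1))))  # second column reproduces v
```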

Using this formula, we also obtain the value of $H(B_{y^*} \mid A_{x^*})$ as a function of the Bell violation. The remaining part of Eq. 19 is the guessing probability (Eq. 20). In the case of d = 3, we proceed to bound this quantity through semidefinite programming.

Concretely, we implement the three-party semidefinite relaxation (81) of the set of quantum correlations at local level 1 (we attribute one operator to each outcome of Bob and the eavesdropper but only take into account the first two outcomes of Alice). This results in a moment matrix of size 532 × 532 with 15,617 variables. The guessing probability is directly given by the sum of three elements of the moment matrix. It can then be maximized under the constraints that the value of the Bell functional $S_3^{\text{MUB}}$ is fixed and the moment matrix is positive semidefinite. However, we notice that this problem is invariant under the following relabeling: $b \mapsto \pi(b)$ for $y = 1$, $c \mapsto \pi(c)$, and $x_1 \mapsto \pi(x_1)$, where $\pi \in S_3$ is a permutation of three elements. Therefore, it is possible to simplify this semidefinite program by requiring the matrix to be invariant under the group action of $S_3$ on the moment matrix (i.e., it is a Reynolds matrix) (43, 82, 83). This reduces the number of free variables in the moment matrix to 2823. With the Self-Dual Minimization (SeDuMi) (84) solver, this lowers the precision ($1.1 \times 10^{-6}$ instead of $8.4 \times 10^{-8}$) but speeds up the computation (155 s instead of 8928 s) and requires less memory (0.1 gigabytes instead of 5.5 gigabytes). For the maximal value of $S_3^{\text{MUB}}$, we recover the noise-free result of $K = \log 3$ up to the fifth digit. In addition, we obtain a key rate of at least one bit when $S_3^{\text{MUB}} \geq 2.432$ and a nonzero key rate when $S_3^{\text{MUB}} \geq 2.375$. The latter is close to the local bound, which is $S_3^{\text{MUB}} \leq 2.367$. The resulting lower bound on the key rate as a function of the Bell inequality violation is plotted in Fig. 2.

We now shift our focus from MUBs to SICs. We construct Bell inequalities whose maximal quantum violations are achieved with SICs. We formally define a SIC as a set of $d^2$ unit vectors in $\mathbb{C}^d$, namely, $\{|r_j\rangle\}_{j=1}^{d^2}$, with the property that

$$|\langle r_j | r_k \rangle|^2 = \frac{1}{d+1} \tag{26}$$

for all $j \neq k$, where the constant on the right-hand side is fixed by normalization. The reason that there are precisely $d^2$ elements in a SIC is that this is the largest number of unit vectors in $\mathbb{C}^d$ that could possibly admit the uniform overlap property (Eq. 26). Moreover, we formally distinguish between a SIC as the presented set of rank-one projectors and a SIC-POVM (positive operator-valued measure), which is the generalized quantum measurement with $d^2$ possible outcomes corresponding to the subnormalized projectors $\{\frac{1}{d}|r_k\rangle\langle r_k|\}_{k=1}^{d^2}$.
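As a concrete example, in dimension d = 2 the four states whose Bloch vectors form a regular tetrahedron constitute a SIC. The sketch below (ours) verifies the overlap condition of Eq. 26.

```python
import numpy as np
from itertools import combinations

def bloch_ket(n):
    """Qubit state |r> with unit Bloch vector n = (nx, ny, nz)."""
    nx, ny, nz = n
    theta, phi = np.arccos(nz), np.arctan2(ny, nx)
    return np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])

s = 1 / np.sqrt(3)
tetra = [(s, s, s), (s, -s, -s), (-s, s, -s), (-s, -s, s)]
kets = [bloch_ket(n) for n in tetra]

for (j, u), (k, v) in combinations(enumerate(kets), 2):
    print(j, k, np.abs(u.conj() @ v) ** 2)  # all equal 1/(d+1) = 1/3
```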

Since the treatment of SICs in Bell nonlocality turns out to be more challenging than for the case of MUBs, we first establish the relevance of SICs in a simplified Bell scenario subject to additional constraints. This serves as a stepping stone to a subsequent relaxation, which gives a standard (unconstrained) Bell inequality for SICs. We then focus on the device-independent certification power of these inequalities, which leads us to an operational notion of symmetric informational completeness. Last, we extend the Bell inequalities so that their maximal quantum violations are achieved with both projectors forming SICs and a single generalized measurement corresponding to a SIC-POVM.

Stepping stone: Quantum correlations for SICs. Consider a Bell scenario, parameterized by an integer $d \geq 2$, involving two parties Alice and Bob who share a physical system. Alice receives an input labeled by a tuple $(x_1, x_2)$ representing one of $\binom{d^2}{2}$ possible inputs, which we collectively refer to as $x = x_1 x_2$. The tuple is randomly taken from the set $\text{Pairs}(d^2) \equiv \{x \mid x_1, x_2 \in [d^2] \text{ and } x_1 < x_2\}$. Alice performs a measurement on her part of the shared system and produces a ternary output labeled by $a \in \{1, 2, \perp\}$. Bob receives an input labeled by $y \in [d^2]$, and the associated measurement produces a binary outcome labeled by $b \in \{1, \perp\}$. The joint probability distribution is denoted by $p(a, b \mid x, y)$, and the Bell scenario is illustrated in Fig. 3.

Fig. 3. Alice receives one of $\binom{d^2}{2}$ inputs and returns a ternary outcome, while Bob receives one of $d^2$ inputs and returns a binary outcome.

Similar to the case of MUBs, to make our choice of Bell functional transparent, we phrase it as a game played by Alice and Bob. We imagine that their inputs are supplied by a referee, who promises to provide $x = x_1 x_2$ and $y$ such that either $y = x_1$ or $y = x_2$. As in the previous game, Alice can output $a = \perp$ to ensure that no points are won or lost. However, in this game, Bob can also ensure that no points are won or lost by outputting $b = \perp$. If neither of them outputs $\perp$, then a point is either won or lost. Specifically, when $a = 1$, a point is won if $y = x_1$ (and lost otherwise), whereas if $a = 2$, then a point is won if $y = x_2$ (and lost otherwise). Let us remark that in this game, Bob's only role is to decide whether, in a given round, points can be won or lost. For this game, the total number of points (the Bell functional) reads

$$R_d^{\text{SIC}} \equiv \sum_{x_1 < x_2} \left[ p(a = 1, b = 1 \mid x, x_1) - p(a = 1, b = 1 \mid x, x_2) + p(a = 2, b = 1 \mid x, x_2) - p(a = 2, b = 1 \mid x, x_1) \right] \tag{27}$$

Let us now impose additional constraints on the marginal distributions of the outputs. More specifically, we require that

$$\forall x:\; p(a = 1 \mid x) + p(a = 2 \mid x) = \frac{2}{d} \qquad \forall y:\; p(b = 1 \mid y) = \frac{1}{d} \tag{28}$$

The intuition behind these constraints is analogous to that discussed for the case of MUBs. Namely, we imagine that Alice and Bob perform measurements on a maximally entangled state of local dimension d. Then, we wish to fix the marginals such that the measurements of Alice (Bob) for the outcomes $a \in \{1, 2\}$ ($b = 1$) remotely prepare Bob's (Alice's) subsystem in a pure state. This corresponds to the marginals $p(a = 1 \mid x) = p(a = 2 \mid x) = p(b = 1 \mid y) = 1/d$, which is reflected in the marginal constraints in Eq. 28. We remark that imposing these constraints simplifies both the intuitive understanding of the game and the derivation of the results below. However, it merely serves as a stepping stone to a more general subsequent treatment in which the constraints (Eq. 28) will be removed.

To write the value of the Bell functional for a quantum realization, let us introduce two simplifications. The measurement operators of Alice are denoted by $\{A_x^a\}$, and as before, it is convenient to work with the observables defined as $A_x = A_x^1 - A_x^2$. The measurements of Bob are denoted by $\{B_y^b\}$, but since they only have two outcomes, all the expressions can be written in terms of a single operator for each input y. In our case, it is convenient to use the outcome-one operator, and for brevity, we will skip the superscript, i.e., we will write $B_y \equiv B_y^1$ for all y. Then, the Bell functional evaluated on a specific quantum realization reads

$$R_d^{\text{SIC}} = \sum_{x_1 < x_2} \langle A_x \otimes (B_{x_1} - B_{x_2}) \rangle \tag{29}$$

Note that the Bell functional, in particular when written in a quantum model, is much reminiscent of the expression $R_d^{\text{MUB}}$ (Eq. 2) encountered for MUBs, with the key difference being that the roles of the inputs and outputs of Bob are swapped. Let us consider a quantum strategy in which Alice and Bob share a maximally entangled state $|\phi_d^{\max}\rangle$. Moreover, Bob's measurements are defined as $B_y = |r_y\rangle\langle r_y|$, where $\{|r_y\rangle\}_{y=1}^{d^2}$ is a set of unit vectors forming a SIC (assuming it exists in dimension d), i.e., $|\langle r_y | r_{y'} \rangle|^2 = 1/(d+1)$ for all $y \neq y'$. In addition, we define Alice's observables as $A_x = \sqrt{(d+1)/d}\,(B_{x_1} - B_{x_2})^{T}$, where the prefactor ensures normalization. First, since the subsystems of Alice and Bob are maximally mixed and the outcomes $a \in \{1, 2\}$ and $b = 1$ each correspond to rank-one projectors, the marginal constraints in Eq. 28 are satisfied. Using the fact that for any linear operator $O$ we have $O \otimes \mathbb{1}\,|\phi_d^{\max}\rangle = \mathbb{1} \otimes O^{T}\,|\phi_d^{\max}\rangle$, we find that

$$R_d^{\text{SIC}} = \sqrt{\frac{d+1}{d}} \sum_{x_1 < x_2} \frac{\operatorname{tr}\!\left[(B_{x_1} - B_{x_2})^2\right]}{d} = d(d-1)\sqrt{d(d+1)} \tag{30}$$

This strategy, relying on a maximally entangled state and a SIC, achieves the maximal quantum value of $R_d^{\text{SIC}}$ under the constraints of Eq. 28. In the Supplementary Materials (section S3A), we prove that under these constraints, the tight quantum and no-signaling bounds on $R_d^{\text{SIC}}$ read

$$R_d^{\text{SIC}} \overset{\mathcal{Q}}{\leq} d(d-1)\sqrt{d(d+1)} \tag{31}$$

$$R_d^{\text{SIC}} \overset{\text{NS}}{\leq} d(d^2 - 1) \tag{32}$$
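The quantum value in Eqs. 30 and 31 is easy to reproduce numerically for d = 2. The sketch below (ours) evaluates Eq. 29 on the maximally entangled state with the tetrahedron SIC and returns $2\sqrt{6} \approx 4.899$.

```python
import numpy as np
from itertools import combinations

d = 2

def bloch_ket(n):
    nx, ny, nz = n
    theta, ang = np.arccos(nz), np.arctan2(ny, nx)
    return np.array([np.cos(theta / 2), np.exp(1j * ang) * np.sin(theta / 2)])

s = 1 / np.sqrt(3)
kets = [bloch_ket(n) for n in [(s, s, s), (s, -s, -s), (-s, s, -s), (-s, -s, s)]]
B = [np.outer(k, k.conj()) for k in kets]  # Bob's outcome-one projectors (a SIC)

phi = sum(np.kron(np.eye(d)[k], np.eye(d)[k]) for k in range(d)) / np.sqrt(d)

R = 0.0
for x1, x2 in combinations(range(d * d), 2):
    Ax = np.sqrt((d + 1) / d) * (B[x1] - B[x2]).T  # Alice's observable
    R += np.real(phi.conj() @ np.kron(Ax, B[x1] - B[x2]) @ phi)

print(R, d * (d - 1) * np.sqrt(d * (d + 1)))       # both ~ 4.899
```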

We remark that SICs are not known to exist in all Hilbert space dimensions. However, their existence in all dimensions is strongly conjectured, and explicit SICs have been found in all dimensions up to 193 (53–55).

Bell inequalities for SICs. The marginal constraints in Eq. 28 allowed us to prove that the quantum realization based on SICs achieves the maximal quantum value of $R_d^{\text{SIC}}$. Our goal now is to remove these constraints to obtain a standard Bell functional. Analogously to the case of MUBs, we add marginal terms to the original functional $R_d^{\text{SIC}}$.

To this end, we introduce penalties for both Alice and Bob. Specifically, if Alice outputs $a \in \{1, 2\}$, then they lose $\alpha_d$ points, whereas if Bob outputs $b = 1$, then they lose $\beta_d$ points. The total number of points in the modified game constitutes our final Bell functional

$$S_d^{\text{SIC}} \equiv R_d^{\text{SIC}} - \alpha_d \sum_{x_1 < x_2} \left[ p(a = 1 \mid x) + p(a = 2 \mid x) \right] - \beta_d \sum_{y} p(b = 1 \mid y) \tag{33}$$

Hence, our aim is to suitably choose the penalties $\alpha_d$ and $\beta_d$ so that the maximal quantum value of $S_d^{\text{SIC}}$ is achieved with a strategy that closely mimics the marginal constraints (Eq. 28) and thus maintains the optimality of Bob performing a SIC.

Theorem II.6 (Bell inequalities for SICs). The Bell functional $S_d^{\text{SIC}}$ in Eq. 33 with

$$\alpha_d = \frac{1 - \delta_{d,2}}{2}\sqrt{\frac{d}{d+1}}, \qquad \beta_d = \frac{d-2}{2}\sqrt{d(d+1)} \tag{34}$$

obeys the tight local bound

$$S_d^{\text{SIC}} \overset{\text{LHV}}{\leq} \begin{cases} 4 & \text{for } d = 2 \\ d^2(d-1) - d(d^2 - d - 1)\sqrt{\dfrac{d}{d+1}} & \text{for } d \geq 3 \end{cases} \tag{35}$$

and the quantum bound

$$S_d^{\text{SIC}} \overset{\mathcal{Q}}{\leq} \frac{d + 2\delta_{d,2}}{2}\sqrt{d(d+1)} \tag{36}$$

Moreover, the quantum bound is tight and can be saturated by sharing a maximally entangled state of local dimension d and choosing Bobs outcome-one projectors to form a SIC.

Proof. The proof is presented in the Supplementary Materials (section S3B). To obtain the quantum bound in Eq. 36, the key ingredients are the Cauchy-Schwarz inequality and semidefinite relaxations of polynomial optimization problems. To derive the local bound in Eq. 35, the key observation is that the symmetries of the Bell functional allow us to notably simplify the problem.

The fact that the quantum bound is saturated by a maximally entangled state and Bob performing a SIC can be seen immediately from the previous discussion that led to Eq. 30. With that strategy, we find $R_d^{\text{SIC}} = d(d-1)\sqrt{d(d+1)}$. Since it also respects $p(a = 1 \mid x) + p(a = 2 \mid x) = 2/d$ for all $x$, as well as $p(b = 1 \mid y) = 1/d$ for all $y$, a direct insertion into Eq. 33 saturates the bound in Eq. 36. Note that in the limit $d \to \infty$, both the local bound and the quantum bound grow quadratically in d.

We remark that for the special case of d = 2, no penalties are needed to maintain the optimality of SICs (which is why the Kronecker delta appears in Eq. 34). The derived Bell inequality for a qubit SIC (which corresponds to a tetrahedron configuration on the Bloch sphere) can be compared to the so-called elegant Bell inequality (85), whose maximal violation is also achieved using the tetrahedron configuration. While we require six settings of Alice and four settings of Bob, the elegant Bell inequality requires only four settings of Alice and three settings of Bob. However, the additional complexity in our setup carries an advantage when considering the critical visibility of the shared state, i.e., the smallest value of v in Eq. 24 (defining an isotropic state) for which the Bell inequality is violated. The critical visibility for violating the elegant Bell inequality is 86.6%, whereas for our Bell inequality, it is lowered to 81.6%. We remark that on the Bloch sphere, the antipodal points corresponding to the four measurements of Bob and the six measurements of Alice form a cube and a cuboctahedron, respectively, which constitutes an instance of the type of Bell inequalities proposed in (86).

Device-independent certification. Theorem II.6 shows that for any dimension $d \geq 2$, we can construct a Bell inequality that is maximally violated by a SIC in that dimension (provided that a SIC exists). Let us now consider the converse question, namely, that of device-independent certification. In analogy with the case of MUBs (Eq. 9), we find a simple description of Bob's measurements.

Theorem II.7 (Device-independent certification). The maximal quantum value of the Bell functional $S_d^{\text{SIC}}$, provided that the marginal state of Bob is full rank, implies that his measurement operators $\{B_y\}_{y=1}^{d^2}$ are projective and satisfy

$$\sum_{y} B_y = d\,\mathbb{1} \tag{37}$$

and

$$B_y = (d+1)\, B_y B_{y'} B_y \tag{38}$$

for all $y \neq y'$.

A complete proof, which is similar in spirit to the proof of Theorem II.2, can be found in the Supplementary Materials (section S3C). For the special case of d = 2, the conclusion can be made even more precise: The maximal quantum violation of S_2^SIC implies that Bob's outcome-one projectors are rank-one projectors acting on a qubit whose Bloch vectors form a regular tetrahedron (up to the three standard equivalences used in self-testing).
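As a concrete check (an illustration, not the paper's code), the qubit tetrahedron SIC can be built from Bloch vectors and verified against the two certified relations, Eqs. 37 and 38:

    import numpy as np

    # Build the d = 2 SIC: four rank-one projectors P = (1 + v.sigma)/2 whose
    # Bloch vectors v form a regular tetrahedron.
    paulis = [np.array([[0, 1], [1, 0]], dtype=complex),
              np.array([[0, -1j], [1j, 0]]),
              np.array([[1, 0], [0, -1]], dtype=complex)]
    bloch = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)
    projectors = [(np.eye(2) + sum(c * s for c, s in zip(v, paulis))) / 2 for v in bloch]

    d = 2
    # Eq. 37: the d^2 projectors sum to d times the identity.
    assert np.allclose(sum(projectors), d * np.eye(2))
    # Eq. 38: B_y = (d+1) B_y B_y' B_y for every pair y != y'.
    for y, B in enumerate(projectors):
        for yp, Bp in enumerate(projectors):
            if y != yp:
                assert np.allclose(B, (d + 1) * B @ Bp @ B)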

Similar to the case of MUBs, we face the key question of interpreting the condition in Eq. 38 and its relation to SICs. Again, in analogy with the case of MUBs, we note that the concept of a SIC references the dimension of the Hilbert space, which should not appear explicitly in a device-independent scenario. Hence, we consider an operational approach to SICs, which must rely on observable quantities (i.e., probabilities). This leads us to the following natural definition of a set of projectors being operationally symmetric informationally complete (OP-SIC).

Definition II.8 (Operational SIC). We say that a set of projectors {B_a}_{a=1}^{n²} is OP-SIC if

Σ_a B_a = n·𝟙   (39)

and

p(B_a = 1 | B_b = 1) = 1/(n + 1)   (40)

for all a ≠ b, i.e., if outcome 1 of the projector B_b is obtained, then a subsequent measurement of B_a yields outcome 1 with probability 1/(n + 1).

This definition trivially encompasses SICs as special instances of OP-SICs. An argument analogous to the proof of Theorem II.4 shows that this definition is in fact equivalent to the relations given in Eqs. 37 and 38. Hence, in analogy with the case of MUBs, the property of Bobs measurements certified by the maximal violation of our Bell inequality is precisely the notion of OP-SICs.

Adding a SIC-POVM. The Bell inequalities proposed above (Bell functional S_d^SIC) are tailored to sets of rank-one projectors forming a SIC. However, it is also interesting to consider a closely related entity, namely, a SIC-POVM, which is obtained simply by normalizing these projectors so that they can be collectively interpreted as arising from a single measurement. That is, a SIC-POVM on C^d is a measurement {E_a}_{a=1}^{d²} in which every measurement operator can be written as E_a = (1/d)·|φ_a⟩⟨φ_a|, where the set of rank-one projectors {|φ_a⟩⟨φ_a|}_a forms a SIC. Because of the simple relation between SICs and SIC-POVMs, we can extend the Bell inequalities for SICs proposed above such that they are optimally implemented with both a SIC (as before) and a SIC-POVM.
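Continuing the tetrahedron sketch above (same assumptions, reusing its `projectors` and `d`), the normalization is a one-liner, and the resulting operators satisfy the two properties that make them a SIC-POVM:

    # Normalize the SIC projectors into a SIC-POVM, E_a = P_a / d.
    povm = [P / d for P in projectors]

    # The four elements form a valid POVM: they sum to the identity ...
    assert np.allclose(sum(povm), np.eye(2))
    # ... and the underlying projectors carry the SIC overlap tr(P_a P_b) = 1/(d+1).
    for a in range(4):
        for b in range(a + 1, 4):
            assert np.isclose(np.trace(projectors[a] @ projectors[b]).real, 1 / (d + 1))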

It is clear that to make SIC-POVMs relevant to the Bell experiment, it must involve at least one setting that corresponds to a d²-outcome measurement. For the Bell scenario previously considered for SICs (see Fig. 3), no such measurement is present. Therefore, we supplement the original Bell scenario by introducing a single additional measurement setting of Alice, labeled povm, which has d² outcomes labeled by a ∈ [d²]. The modified Bell scenario is illustrated in Fig. 4. We construct the Bell functional T_d^SIC for this scenario by modifying the previously considered Bell functional S_d^SIC:

T_d^SIC = S_d^SIC − Σ_{y=1}^{d²} p(a = y, b = ⊥ | povm, y)   (41)

This scenario modifies the original Bell scenario for SICs (see Fig. 3) by supplying Alice with an extra setting labeled povm, which has d² possible outcomes.

Hence, whenever Bob outputs ⊥ and the outcome associated with the setting povm coincides with the input of Bob, a point is lost. Evidently, the largest quantum value of T_d^SIC is no greater than the largest quantum value of S_d^SIC. For the former to equal the latter, we require (i) that S_d^SIC reaches its maximal quantum value (which is given in Eq. 36) and (ii) that p(a = y, b = ⊥ | povm, y) = 0 for all y. We have already seen that by sharing a maximally entangled state and Bob's outcome-one projectors {B_y}_y forming a SIC, condition (i) can be satisfied. By normalization, Bob's outcome-⊥ projectors are B_y^⊥ = 𝟙 − B_y. Again, noting that for any linear operator O we have (O ⊗ 𝟙)|φ_d^max⟩ = (𝟙 ⊗ O^T)|φ_d^max⟩, observe that if Bob applies B_y^⊥, then Alice's local state is orthogonal to B_y^T. Hence, if Alice chooses her POVM {E_a}, corresponding to the setting povm, as the SIC-POVM defined by E_a = (1/d)·B_a^T, the probability of finding a = y vanishes. This satisfies condition (ii). Hence, we conclude that in a general quantum model

T_d^SIC ≤_Q ((d + 2δ_{d,2})/2)·√(d(d+1))   (42)

and that the bound can be saturated by supplementing the previous optimal realization with a SIC-POVM on Alice's side.
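Both ingredients of this argument, the transpose identity and the vanishing of p(a = y, b = ⊥ | povm, y), can be checked numerically by continuing the same d = 2 sketch (again my illustration; `phi` denotes the maximally entangled state |φ_d^max⟩):

    # Maximally entangled state of two qubits, |phi> = (|00> + |11>)/sqrt(2).
    phi = np.eye(d).reshape(-1) / np.sqrt(d)

    # The transpose identity (O x 1)|phi> = (1 x O^T)|phi> for an arbitrary O.
    O = np.random.randn(d, d) + 1j * np.random.randn(d, d)
    assert np.allclose(np.kron(O, np.eye(d)) @ phi, np.kron(np.eye(d), O.T) @ phi)

    # Condition (ii): with Alice's POVM element E_y = B_y^T / d, the event
    # (a = y, b = perp) has probability zero for every y.
    for B in projectors:
        p = (phi.conj() @ np.kron(B.T / d, np.eye(d) - B) @ phi).real
        assert np.isclose(p, 0)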

The fact that the Bell functionals SdSIC and TdSIC achieve their maximal quantum values with a SIC and a SIC-POVM, respectively, opens up the possibility for device-independent quantum information protocols for tasks in which SICs and SIC-POVMs are desirable. We focus on one such application, namely, that of device-independent quantum random number generation (87). This is the task of certifying that the data generated by a party cannot be predicted by a malicious eavesdropper. In the device-independent setting, both the amount of randomness and its security are derived from the violation of a Bell inequality.

Nonprojective measurements, such as SIC-POVMs, are useful for this task. The reason is that a Bell experiment implemented with entangled systems of local dimension d and standard projective measurements cannot have more than d outcomes. Consequently, one cannot hope to certify more than log d bits of local randomness. However, a Bell experiment relying on d-dimensional entanglement implemented with (extremal) nonprojective measurements can have up to d² outcomes (88). This opens the possibility of generating up to 2 log d bits of local randomness without increasing the dimension of the shared entangled state. Notably, for the case of d = 2, such optimal quantum random number generation has been shown using a qubit SIC-POVM (42).

Here, we use our Bell inequalities for SIC-POVMs to significantly outperform standard protocols relying on projective measurements on d-dimensional entangled states. To this end, we briefly summarize the scenario for randomness generation. Alice and Bob perform many rounds of the Bell experiment illustrated in Fig. 4. Alice will attempt to generate local randomness from the outcomes of her setting labeled povm. In most rounds of the Bell experiment, Alice performs povm and records the outcome a. In a smaller number of rounds, she randomly chooses her measurement setting, and the data are used toward estimating the value of the Bell functional T_d^SIC defined in Eq. 41. A malicious eavesdropper may attempt to guess Alice's relevant outcome a. To this end, the eavesdropper may entangle her system with that of Alice and Bob and perform a well-chosen POVM {E_c}_c to enhance her guess. In analogy to Eq. 20, the eavesdropper's guessing probability reads

P_g = sup { Σ_{c=1}^{d²} ⟨ψ_ABE| A_povm^c ⊗ 𝟙 ⊗ E_c |ψ_ABE⟩ }   (43)

where {E_c}_{c=1}^{d²} is the measurement used by the eavesdropper to produce her guess, the expression inside the curly braces is the probability that her outcome is the same as Alice's outcome for the setting povm for a particular realization, and the supremum is taken over all quantum realizations (the tripartite state and measurements of all three parties) compatible with the observed Bell inequality violation T_d^SIC.

We quantify the randomness generated by Alice using the conditional min-entropy H_min(A_povm|E) = −log₂(P_g). To obtain a device-independent lower bound on the randomness, we must evaluate an upper bound on P_g for a given observed value of the Bell functional. We saw in the Application: Device-independent quantum key distribution section that if the eavesdropper is only trying to guess the outcome of a single measurement setting, we can, without loss of generality, assume that they are only classically correlated with the systems of Alice and Bob. As before, we restrict ourselves to the asymptotic limit of many rounds, in which fluctuations due to finite statistics can be neglected.
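As a toy illustration (the numbers here are mine, not the paper's), the min-entropy in bits follows directly from the guessing probability, and the projective and nonprojective ceilings for d = 2 fall out immediately:

    import numpy as np

    def min_entropy(p_guess):
        """Conditional min-entropy, in bits, from the eavesdropper's guessing probability."""
        return -np.log2(p_guess)

    d = 2
    print(min_entropy(1 / d))      # 1.0 bit  -- the projective ceiling, log2(d)
    print(min_entropy(1 / d**2))   # 2.0 bits -- the nonprojective ceiling, 2*log2(d)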

To bound the randomness for some given value of T_d^SIC, we use the hierarchy of quantum correlations (81). We restrict ourselves to the cases of d = 2 and d = 3. For the case of d = 2, we construct a moment matrix with the operators {(𝟙, A_x) ⊗ (𝟙, B_y) ⊗ (𝟙, E)} ∪ {A_povm ⊗ (𝟙, B_y, E)}, neglecting the ⊥ outcome. The matrix is of size 361 × 361 with 10,116 variables. Again, we can make use of symmetry to simplify the semidefinite program. In this case, the following permutation leaves the problem invariant: x1 → π(x1), x2 → π(x2), a → f(a, x1, x2) for the pair settings, the povm outcome a → π(a), y → π(y), and c → π(c), where

f(a, x1, x2) = { a if π(x1) < π(x2); 2 if π(x1) > π(x2) and a = 1; 1 if π(x1) > π(x2) and a = 2; ⊥ if π(x1) > π(x2) and a = ⊥ }   (44)

and π ∈ S_4. Using this symmetry reduces the number of free variables to 477. The trade-off between the amount of certified randomness and the nonlocality is illustrated in Fig. 5. We find that for sufficiently large values of T_2^SIC (roughly T_2^SIC ≳ 4.8718), we outperform the one-bit limitation associated with projective measurements on entangled qubits. Notably, for even larger values of T_2^SIC, we also outperform the restriction of log 3 bits associated with projective measurements on entangled systems of local dimension three. For the optimal value of T_2^SIC, we find H_min(A_povm|E) ≈ 1.999, which is compatible up to numerical precision with the largest possible amount of randomness obtainable from qubit systems under general measurements, namely, two bits. This two-bit limit stems from the fact that every qubit measurement with more than four outcomes can be stochastically simulated with measurements of at most four outcomes (88).
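To make the symmetry action concrete, here is a direct transcription of Eq. 44 (an illustration; indices are 0-based and 'perp' stands for ⊥):

    # How Alice's outcome relabels when a permutation pi of the d^2 = 4 SIC
    # indices reorders the setting pair (x1, x2).
    def f(a, x1, x2, pi):
        if pi[x1] < pi[x2]:
            return a          # pair order preserved: outcome unchanged
        if a == 1:
            return 2          # pair order flipped: outcomes 1 and 2 swap
        if a == 2:
            return 1
        return 'perp'         # the perp outcome is unaffected

    pi = {0: 2, 1: 0, 2: 1, 3: 3}   # one element of S_4
    print(f(1, 0, 1, pi))           # pi(0)=2 > pi(1)=0, so outcome 1 becomes 2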

For the case of d = 3, we bound the guessing probability following the method of (87). This has the advantage of requiring only a bipartite, and hence smaller, moment matrix than the tripartite formulation. However, the amount of symmetry leaving the problem invariant is reduced because the objective function only involves one outcome. Concretely, we construct a moment matrix of size 820 × 820 with 263,549 variables. We then write the guessing probability as P(a = 1 | povm) and identify the following group of permutations leaving the problem invariant: x1 → π(x1), x2 → π(x2), a → f(a, x1, x2), the povm outcome a → π(a), and y → π(y), where π ∈ S_9 leaves element 1 invariant and permutes elements 2, …, 9 in all possible ways. Taking this symmetry into account reduces the number of free variables to 460. To further simplify the problem, we make use of RepLAB, a recently developed tool that decomposes representations of finite groups into irreducible representations (89, 90). This allows us to write the moment matrix in a preferred basis in which it is block diagonal. The semidefinite constraint can then be imposed on each block independently, with the largest block of size 28 × 28 instead of 820 × 820. Solving one semidefinite program with SeDuMi (84) then takes 0.7 s with <0.1 gigabytes of memory, instead of 162 s and 0.2 gigabytes without block diagonalization; without any symmetrization, the computation fails for lack of memory (>400 gigabytes required).
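The gain is roughly what a cubic-cost model for semidefinite blocks would predict; a back-of-envelope sketch (the block count below is hypothetical, since only the largest block size is reported above):

    # Interior-point SDP iterations scale roughly cubically in the block size.
    full = 820 ** 3                 # one dense 820 x 820 semidefinite block
    blocks = 30 * 28 ** 3           # hypothetical: ~30 blocks at the largest size, 28 x 28
    print(full / blocks)            # ~840x fewer cone operations, the same order of
                                    # magnitude as the observed 162 s -> 0.7 s speedup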

Using entangled states of dimension 3 and corresponding SIC-POVMs, one can attain the full range of values for T_3^SIC. The guessing probability is independent of the outcome guessed by the eavesdropper, and we can verify that the bound that we obtain is convex, hence guaranteeing that no mixture of strategies by the eavesdropper needs to be considered (87). The randomness is then given in Fig. 6, which indicates that by increasing the value of T_3^SIC, we can obtain more randomness than the best possible schemes relying on standard projective measurements and entangled systems of dimensions 3, 4, 5, 6, and 7. In particular, in the case of T_3^SIC being maximal, we find that H_min(A_povm|E) ≈ 3.03 bits. This is larger than what can be obtained by performing projective measurements on eight-dimensional systems (since log 8 = 3 bits). It is, however, worth noting that this last value is obtained at the boundary of the set of quantum correlations, where the precision of the solver is significantly reduced (in particular, the DIMACS errors at this point are of the order of 10^−4). It is not straightforward to estimate the extent to which this reduced precision may influence the guessing probability, so it would be interesting to reproduce this computation with a more precise solver such as SDPA (91).

Acknowledgments: We would like to thank T. de Lima Silva and N. Gisin for fruitful discussions. We thank M. Araújo for helpful comments. Funding: This work was supported by the Swiss National Science Foundation (starting grant DIAQ, NCCR QSIT). A.T. acknowledges support from the Swiss National Science Foundation (Early PostDoc Mobility fellowship P2GEP2 194800). The project Robust certification of quantum devices is carried out within the HOMING programme of the Foundation for Polish Science, cofinanced by the European Union under the European Regional Development Fund. M.F. acknowledges support from the Polish NCN grant Sonata UMO-2014/14/E/ST2/00020, the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme ERC AdG CERQUTE (grant agreement no. 834266), the State Research Agency (AEI) TRANQI (PID2019-106888GB-I00/10.13039/501100011033), the Government of Spain (FIS2020-TRANQI; Severo Ochoa CEX2019-000910-S), Fundació Cellex, Fundació Mir-Puig, and Generalitat de Catalunya (CERCA, AGAUR). Author contributions: A.T. and J.K. proposed the basic concept. A.T., M.F., J.-D.B., and J.K. developed the theory and the proofs. D.R. developed software that was used to facilitate particular computations. A.T., M.F., J.-D.B., and J.K. discussed the results and participated in the writing of the manuscript. Competing interests: The authors declare that they have no competing interests. Data and materials availability: All data needed to evaluate the conclusions in the paper are present in the paper and/or the Supplementary Materials. Additional data related to this paper may be requested from the authors.

Link:

Mutually unbiased bases and symmetric informationally complete measurements in Bell experiments - Science Advances

In Violation of Einstein, Black Holes Might Have ‘Hair’ – Quanta Magazine

Identical twins have nothing on black holes. Twins may grow from the same genetic blueprints, but they can differ in a thousand ways, from temperament to hairstyle. Black holes, according to Albert Einstein's theory of gravity, can have just three characteristics: mass, spin and charge. If those values are the same for any two black holes, it is impossible to discern one twin from the other. Black holes, they say, have no hair.

"In classical general relativity, they would be exactly identical," said Paul Chesler, a theoretical physicist at Harvard University. "You can't tell the difference."

Yet scientists have begun to wonder if the no-hair theorem is strictly true. In 2012, a mathematician named Stefanos Aretakis, then at the University of Cambridge and now at the University of Toronto, suggested that some black holes might have instabilities on their event horizons. These instabilities would effectively give some regions of a black hole's horizon a stronger gravitational pull than others. That would make otherwise identical black holes distinguishable.

However, his equations only showed that this was possible for so-called extremal black holes, ones that have the maximum value possible for either their mass, spin or charge. "And as far as we know, these black holes cannot exist, at least exactly, in nature," said Chesler.

But what if you had a near-extremal black hole, one that approached these extreme values but didn't quite reach them? Such a black hole should be able to exist, at least in theory. Could it have detectable violations of the no-hair theorem?

A paper published late last month shows that it could. Moreover, this hair could be detected by gravitational wave observatories.

"Aretakis basically suggested there was some information that was left on the horizon," said Gaurav Khanna, a physicist at the University of Massachusetts and the University of Rhode Island and one of the co-authors. "Our paper opens up the possibility of measuring this hair."

In particular, the scientists suggest that remnants either of the black hole's formation or of later disturbances, such as matter falling into the black hole, could create gravitational instabilities on or near the event horizon of a near-extremal black hole. "We would expect that the gravitational signal we would see would be quite different from ordinary black holes that are not extremal," said Khanna.

If black holes do have hair, thus retaining some information about their past, this could have implications for the famous black hole information paradox put forward by the late physicist Stephen Hawking, said Lia Medeiros, an astrophysicist at the Institute for Advanced Study in Princeton, New Jersey. That paradox distills the fundamental conflict between general relativity and quantum mechanics, the two great pillars of 20th-century physics. "If you violate one of the assumptions [of the information paradox], you might be able to solve the paradox itself," said Medeiros. "One of the assumptions is the no-hair theorem."

The ramifications of that could be broad. "If we can prove the actual space-time of the black hole outside of the black hole is different from what we expect, then I think that is going to have really huge implications for general relativity," said Medeiros, who co-authored a paper in October that addressed whether the observed geometry of black holes is consistent with predictions.

Perhaps the most exciting aspect of this latest paper, however, is that it could provide a way to merge observations of black holes with fundamental physics. Detecting hair on black holes, perhaps the most extreme astrophysical laboratories in the universe, could allow us to probe ideas such as string theory and quantum gravity in a way that has never been possible before.

"One of the big issues [with] string theory and quantum gravity is that it's really hard to test those predictions," said Medeiros. "So if you have anything that's even remotely testable, that's amazing."

There are major hurdles, however. It's not certain that near-extremal black holes exist. (The best simulations at the moment typically produce black holes that are 30% away from being extremal, said Chesler.) And even if they do, it's not clear if gravitational wave detectors would be sensitive enough to spot these instabilities from the hair.

What's more, the hair is expected to be incredibly short-lived, lasting just fractions of a second.

But the paper itself, at least in principle, seems sound. "I don't think that anybody in the community doubts it," said Chesler. "It's not speculative. It just turns out Einstein's equations are so complicated that we're discovering new properties of them on a yearly basis."

The next step would be to see what sort of signals we should be looking for in our gravitational detectors, either LIGO and Virgo, operating today, or future instruments like the European Space Agency's space-based LISA instrument.

"One should now build upon their work and really compute what would be the frequency of this gravitational radiation, and understand how we could measure and identify it," said Helvi Witek, an astrophysicist at the University of Illinois, Urbana-Champaign. "The next step is to go from this very nice and important theoretical study to what would be the signature."

There are plenty of reasons to want to do so. While the chances of a detection that would prove the paper correct are slim, such a discovery would not only challenge Einstein's theory of general relativity but prove the existence of near-extremal black holes.

"We would love to know if nature would even allow for such a beast to exist," said Khanna. "It would have pretty dramatic implications for our field."

Correction: February 11, 2021: The original version of this article implied that theorists are unable to simulate black holes closer than 30% away from being extremal. In fact, they can simulate near-extremal black holes, but their typical simulations are 30% away from being extremal.

Read more:

In Violation of Einstein, Black Holes Might Have 'Hair' - Quanta Magazine

A Magnetic Twist to Graphene Could Offer a Dramatic Increase in Processing Speeds Compared to Electronics – SciTechDaily

Schematic of a valley-spiral in magnetically encapsulated twisted bilayer graphene. Credit: Jose Lado

By combining ferromagnets and two rotated layers of graphene, researchers open up a new platform for strongly interacting states using graphene's unique quantum degree of freedom.

Electrons in materials have a property known as spin, which is responsible for a variety of properties, the most well-known of which is magnetism. Permanent magnets, like the ones used for refrigerator doors, have all the spins in their electrons aligned in the same direction. Scientists refer to this behavior as ferromagnetism, and the research field of trying to manipulate spin as spintronics.

Down in the quantum world, spins can arrange in more exotic ways, giving rise to frustrated states and entangled magnets. Interestingly, a property similar to spin, known as the valley, appears in graphene materials. This unique feature has given rise to the field of valleytronics, which aims to exploit the valley property for emergent physics and information processing, very much like spintronics relies on pure spin physics.

"Valleytronics would potentially allow encoding information in the quantum valley degree of freedom, similar to how electronics do it with charge and spintronics with the spin," explains Professor Jose Lado, from Aalto's Department of Applied Physics, one of the authors of the work. "What's more, valleytronic devices would offer a dramatic increase in the processing speeds in comparison with electronics, and with much higher stability towards magnetic field noise in comparison with spintronic devices."

Structures made of rotated, ultra-thin materials provide a rich solid-state platform for designing novel devices. In particular, slightly twisted graphene layers have recently been shown to have exciting unconventional properties that can ultimately lead to a new family of materials for quantum technologies. These unconventional states, which are already being explored, depend on electrical charge or spin. The open question is whether the valley can also lead to its own family of exciting states.

For this goal, it turns out that conventional ferromagnets play a vital role, pushing graphene to the realms of valley physics. In a recent work, Ph.D. student Tobias Wolf, together with Profs. Oded Zilberberg and Gianni Blatter at ETH Zurich, and Prof. Jose Lado at Aalto University, showed a new direction for correlated physics in magnetic van der Waals materials.

The team showed that sandwiching two slightly rotated layers of graphene between ferromagnetic insulators provides a unique setting for new electronic states. The combination of ferromagnets, graphene's twist engineering, and relativistic effects forces the valley property to dominate the behavior of the electrons in the material. In particular, the researchers showed how these valley-only states can be tuned electrically, providing a materials platform in which valley-only states can be generated. Building on top of the recent breakthroughs in spintronics and van der Waals materials, valley physics in magnetic twisted van der Waals multilayers opens the door to the new realm of correlated twisted valleytronics.

"Demonstrating these states represents the starting point towards new exotic entangled valley states," said Professor Lado. "Ultimately, engineering these valley states can allow realizing quantum entangled valley liquids and fractional quantum valley Hall states. These two exotic states of matter have not been found in nature yet, and would open exciting possibilities towards a potentially new graphene-based platform for topological quantum computing."

Reference: "Spontaneous Valley Spirals in Magnetically Encapsulated Twisted Bilayer Graphene" by Tobias M.R. Wolf, Oded Zilberberg, Gianni Blatter and Jose L. Lado, 4 February 2021, Physical Review Letters. DOI: 10.1103/PhysRevLett.126.056803

Read more:

A Magnetic Twist to Graphene Could Offer a Dramatic Increase in Processing Speeds Compared to Electronics - SciTechDaily

Dont Tell Einstein, but Black Holes Might Have Hair – WIRED

Read the original here:

Dont Tell Einstein, but Black Holes Might Have Hair - WIRED

This ‘Quantum Brain’ Would Mimic Our Own to Speed Up AI – Singularity Hub

Unless you're in the lithium battery or paint business, you're probably not familiar with cobalt. Yet according to a new paper, it may be the secret sauce for an entirely new kind of computer, one that combines quantum mechanics with the brain's inner workings.

The result isn't just a computer with the ability to learn. The mechanisms that allow it to learn are directly embedded in its hardware structure, no extra AI software required. The computer model also simulates how our brains process information, using the language of neuron activity and synapses rather than the churning silicon-based CPUs in our current laptops.

The main trick relies on the quantum spin properties of cobalt atoms. When cleverly organized into networks, the result is a quantum brain that can process data and save it inside the same network structure, similar to how our brains work. To sum up: it's a path towards a true learning machine.

That's great news for AI. Powerful as they are, machine learning algorithms are extremely energy-hungry. While the tech giants have massive data centers tailored to process computational needs, the approach is inefficient and generates a huge carbon footprint. More troubling is when experts look ahead. Although computing prowess has doubled roughly every year and a half to two years, known colloquially as Moore's law, recent observations show that this trend may be on its last legs.

Translation? We desperately need alternate computing methods.

"Our new idea of building a quantum brain based on the quantum properties of materials could be the basis for a future solution for applications in AI," said lead author Dr. Alexander Khajetoorians at Radboud University in Nijmegen, the Netherlands.

How can neuroscience, quantum mechanics, and AI mesh?

It starts with similarities between the brain and machine learning methods like deep learning. No surprise here, since the latter was loosely based on our minds. The problem comes when these algorithms are run on current computers. You see, even state-of-the-art computers process information and store it in separate structures. The CPU or GPU, by itself, can't store data. This means that data needs to be constantly shuttled between the processing and memory units. It's not a big deal for small things, like recognizing images, but for larger problems it rapidly slows the whole process down, while increasing energy use.

In other words, because AI mimics the brain, which has a completely alien structure to modern computers, there's a fundamental incompatibility. While AI algorithms can be optimized for current computers, they're likely to hit a dead end when it comes to efficiency.

Enter neuromorphic computing. It asks you to forget everything you know about computer design: chips, CPUs, memory, hard drives. Instead, this type of new-age computer taps into the brain's method for logging, processing, and storing information, all in one place. No data shuttling means less time and energy consumption, a win for AI and for the planet.

In rough strokes, the brain's neural networks use several types of computing. One relies on the neuron, which determines based on its input whether it should fire, that is, pass on the data to its neighbor. Another method uses synapses, which fine-tune the degree to which a neuron can transmit data, storing it at the same time in their states. Say you have a network of neurons, connected by synapses, that collectively store a chili recipe. You learned that adding bacon and beer makes it better. The synapses, while processing this new data, what we call learning, also update their state to encode and store the new information.

The takeaway: in the brain, data processing, learning, and memory all occur at the same spot.

Still with me? Now for the third member of our ménage à trois: cobalt.

To tackle the problem of learning hardware, back in 2018 the team found that single cobalt atoms could potentially take over the role of neurons. At this atomic level, the mechanics of quantum physics also come into play, with some seriously intriguing results. For example, an atom can have multiple states, called spins, simultaneously. At any time, an atom will have a probability to be in one state, and another probability for a different state, a bit similar to whether a neuron decides to fire or not, or a synapse will pass on data or not. In quantum mechanics, this weird "is the cat alive or dead" state is dubbed superposition.
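For readers who like to see the arithmetic, here is a generic two-state superposition written out in code (a textbook illustration, not the paper's model): the squared magnitudes of the amplitudes give the probabilities of finding the atom in each state.

    import numpy as np

    # An equal superposition of "spin up" and "spin down", written as amplitudes.
    state = np.array([1, 1j]) / np.sqrt(2)
    probs = np.abs(state) ** 2      # squared magnitudes are the outcome probabilities
    print(probs)                    # [0.5 0.5] -- a 50/50 chance of either state
    assert np.isclose(probs.sum(), 1.0)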

Another feature, quantum coupling, allows two atoms to functionally bind together so that the quantum spin state of one atom changes another's, similar to neurons talking and bonding with each other.

The team's insight is that they could leverage these quantum properties to build a system similar to neurons and synapses in the brain. To do so, they fabricated a system that overlays multiple cobalt atoms on top of a semiconducting surface made of black phosphorus.

They then tested whether they could induce firing and networking between the cobalt neurons. For example, is it possible to embed information in the atoms' spin states? Can we make these atoms simulate a neuron firing?

The answer is a clear yes. Using tiny currents, the team fed the system simple binary data of 0s and 1s. Rather than encoding practical information, such as an image or sound, the data here represented different probabilities of atoms in the system encoding 0 or 1.

Next, the team zapped the network of atoms with a small voltage change, similar to the input our neurons receive. The tiny electrical zap generated behavior eerily similar to the brain's mechanics. For example, it double-tapped the system, so that the quantum brain exhibited both processes analogous to neurons firing and changes in their synapses.

This is especially neat: other neuromorphic computing systems, those based on the brain, generally focus on either artificial neurons or artificial synapses. Many are built from rare materials requiring strict temperatures to function. Combining both inside a single material, cobalt, isn't just novel. It's efficient, more affordable, and easier.

Similar to neurobiology, the system's synapses also changed with time, based on the electrical input they experienced.

"When stimulating the material over a longer period of time with a certain voltage, we were very surprised to see that the synapses actually changed," said Khajetoorians. "The material adapted its reaction based on the external stimuli that it received. It learned by itself."

Not quite yet.

For now, the team will have to scale up their system, and demonstrate that it can process real-world information. They'll also need to build a machine based on the entire setup, showing that it works not just in bits and pieces, but practically as a whole. And there's always competition from customized AI-tailored chips, now being optimized by many tech giants.

But the quantum brain is nothing to roll your eyes at. With one major component, the team was able to mimic key brain processes: neuron firing, synapse processing, and learning, at an atomic scale. With the rise of quantum computing, algorithms tailored to the machine's "spooky action at a distance" could further increase the system's efficiency. Parallel processing, something our brains do very well but that stumps modern computers, has been scientists' stretch goal for quantum computers since the 1990s.

For their next pursuit, the team plans to uncover more quantum materials with different properties that may be more efficient than cobalt. And they'd like to dig into why the quantum brain works as well as it does.

"We are at a state where we can start to relate fundamental physics to concepts in biology, like memory and learning," said Khajetoorians. "Yet, only when we understand how it works, and that is still a mystery, will we be able to tune its behavior and start developing it into a technology."

Despite the unknowns, the study opens up an exciting field at the nexus between neuroscience, quantum computing, and AI. "It is a very exciting time," said Khajetoorians.

Image Credit: Raman Oza from Pixabay

See the original post here:

This 'Quantum Brain' Would Mimic Our Own to Speed Up AI - Singularity Hub

Scientists narrow down the ‘weight’ of dark matter trillions of trillions of times – Livescience.com

Scientists are finally figuring out how much dark matter, the almost imperceptible material said to tug on everything yet emit no light, really weighs.

The new estimate helps pin down how heavy its particles could be, with implications for what the mysterious stuff actually is.

The research sharply narrows the potential mass of dark matter particles, from between an estimated 10^−24 electron volts (eV) and 10^19 gigaelectron volts (GeV), to between 10^−3 eV and 10^7 eV, a possible range of masses many trillions of trillions of times narrower than before.
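A quick back-of-envelope calculation (my arithmetic, not from the study) shows the scale of that narrowing, after converting 10^19 GeV to 10^28 eV:

    # Width of the allowed mass window, in eV, before and after the new bounds.
    old_span = 1e28 - 1e-24      # 10^-24 eV up to 10^19 GeV = 10^28 eV
    new_span = 1e7 - 1e-3        # 10^-3 eV up to 10^7 eV
    print(old_span / new_span)   # ~1e21 -- the "trillions of trillions" scale of shrinkage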

The findings could help dark matter hunters focus their efforts on the indicated range of particle masses, or they might reveal that a previously unknown force is at work in the universe, said Xavier Calmet, a professor of physics and astronomy at the University of Sussex in the United Kingdom.

Calmet, along with doctoral student Folkert Kuipers, also of the University of Sussex, described their efforts in a new study to be published in the March issue of Physics Letters B.

By some estimates, dark matter makes up about 83% of all the matter in the universe. It's thought to interact with light and ordinary matter only through gravity, which means it can only be seen by the way it curves light rays.

Astronomers found the first hints of dark matter when gazing at a galactic cluster in the 1930s, and theories that galaxies are threaded with and fringed by vast halos of dark matter became mainstream after the 1970s, when astronomers realized galaxies were whirling faster than they otherwise should, given how much visible matter they contained.

Possible candidates for dark matter particles include ghostly, tiny particles known as neutrinos, theoretical dark, cold particles known as axions, and proposed weakly-interacting massive particles, or WIMPs. The new mass bounds could help eliminate some of these candidates, depending on the details of the specific dark matter model, Calmet said.

What scientists do know is that dark matter seems to interact with light and normal matter only through gravity, and not via any of the other fundamental forces; and so the researchers used gravitational theories to arrive at their estimated range for the masses of dark matter particles.

Importantly, they used concepts from theories of quantum gravity, which resulted in a much narrower range than the previous estimates, which used only Einstein's theory of general relativity.

"Our idea was a very simple one," Calmet told Live Science in an email. "It is amazing that people have not thought of this before."

Einstein's theory of general relativity is based on classical physics; it perfectly predicts how gravity works most of the time, but it breaks down in extreme circumstances where quantum mechanical effects become significant, such as at the center of a black hole.

Theories of quantum gravity, on the other hand, try to explain gravity through quantum mechanics, which can already describe the other three known fundamental forces electromagnetic force, the strong force that holds most matter together, and the weak force that causes radioactive decay. None of the quantum gravity theories, however, as yet have strong evidence to support them.

Calmet and Kuipers estimated the lower bound for the mass of a dark matter particle using values from general relativity, and estimated the upper bound from the lifetimes of dark matter particles predicted by quantum gravity theories. The nature of the values from general relativity also defined the nature of the upper bound, so they were able to derive a prediction that was independent of any particular model of quantum gravity, Calmet said.

The study found that while quantum gravitational effects were generally almost insignificant, they became important when a hypothetical dark matter particle took an extremely long time to decay and when the universe was about as old as it is now (roughly 13.8 billion years), he said.

Physicists previously estimated that dark matter particles had to be lighter than the "Planck mass," about 1.2 x 10^19 GeV, at least 1,000 times heavier than the largest-known particles, yet heavier than 10^−24 eV to fit with observations of the smallest galaxies known to contain dark matter, he said.

But until now, few studies had attempted to narrow the range, even though great progress had been made in understanding quantum gravity over the last 30 years, he said. "People simply did not look at the effects of quantum gravity on dark matter before."

Calmet said the new bounds for the masses of dark matter particles could also be used to test whether gravity alone interacts with dark matter, which is widely assumed, or if dark matter is influenced by an unknown force of nature.

"If we found a dark matter particle with a mass outside the range discussed in our paper, we would not only have discovered dark matter, but also very strong evidence that there is some new force beyond gravity acting on dark matter," he said.

Originally published on Live Science.

Continue reading here:

Scientists narrow down the 'weight' of dark matter trillions of trillions of times - Livescience.com

Dr. William Audeh – The Gazette

DR. WILLIAM A. AUDEH Cedar Rapids

Dr. William A. Audeh, 92, of Cedar Rapids, passed away peacefully at his home surrounded by his family, on Feb. 3, 2021, after a long illness. Dr. Audeh was a Board-Certified General Surgeon, providing care to the people of the Cedar Rapids community for over 25 years, from 1965 to 1990. Dr. Audeh was born of Palestinian Christian parents, on Jan. 7, (the Orthodox Christmas), 1929, in Khartoum, Sudan, where his father, also a physician, was in medical practice at the time. The family soon returned to their ancestral home of many generations in Nazareth, Palestine, where he grew up, surrounded by his siblings, and many aunts, uncles and cousins. He later attended Bishop Gobat High School in Jerusalem. While still a teenager, Dr. Audeh and his family were forced to leave their home in Palestine, as were many Palestinian Arab families, in 1948, and fled to Beirut, Lebanon for safety, as Palestinian refugees. Despite these hardships, he graduated from the American University of Beirut (AUB) with his Medical Degree in 1953, and began a lifetime of happiness when he married his wife Sameera (née Azzam) in 1954, in Beirut. The couple emigrated to the United States in 1959, obtaining their U.S. Citizenship, and settling first in Omaha, Neb., where Dr. Audeh completed his Residency Training in Surgery at Creighton University in 1961. Dr. Audeh began his Surgical practice at the Kuker Clinic and St. Anthony's Hospital in Carroll, Iowa, from 1961 to 1964, before coming to Cedar Rapids in December 1964. During his many years of surgical practice in Cedar Rapids, Dr. Audeh was a member of the Medical Staff of both Mercy Medical Center and St. Luke's Hospital, serving as Chief of Surgery for a time. Dr. Audeh was an innovative and pioneering surgeon, bringing the latest technology and surgical techniques to his patients. He performed the first gastroscopy in Cedar Rapids in the 1970s, using a flexible scope (displayed in a glass case in Mercy Emergency Room) to detect upper intestinal bleeding. Most importantly, he performed the first "lumpectomy" for breast cancer ever performed in Cedar Rapids, in the 1980s, a procedure which allowed women with breast cancer to avoid mastectomy, and which is now the standard of care nationwide. Over his many years of practice, Dr. Audeh was a passionate and dedicated physician, providing surgical care to many hundreds of men, women and children in Cedar Rapids and surrounding communities. After his retirement, as an Emeritus member of the Mercy Medical Staff, he remained active in medical meetings, and regularly attended the Breast Cancer Tumor Board at Mercy Medical Center, providing his knowledge and years of experience to his colleagues. Dr. Audeh had many interests outside of Medicine. He was an avid reader of books on philosophy, quantum physics and science fiction, and was a fan of "Star Trek" through every series. He had a lifelong love of airplanes, having seen Hawker Hurricanes and Spitfires as a boy during World War II, and obtained a Pilot's license, logging many hours of flight in small single engine planes. In retirement, he applied his surgical skills to oil painting, and painted many beautiful scenes of the Iowa landscape, a number of which were displayed in an exhibition at Mercy Medical Center. Known as "Bill" to his friends, he enjoyed the company of his friends and colleagues, and for many years was a member of a weekly poker group, made up of retired physicians. Although Iowa was his adopted home, Dr. Audeh embraced everything Iowan, and particularly enjoyed "The Music Man" as his favorite film. Dr. Audeh's ultimate love and devotion was to his family, for whom he was a wonderful husband, father, grandfather and role model. Dr. Audeh is survived by his beloved wife of 67 years, Sameera; his daughter, Prof. Aida Audeh of Hamline University in St. Paul and son-in-law Giovanni; his son, Dr. M(ouni) William Audeh and daughter-in-law Carolina of Los Angeles, and grandson, Brandon William Audeh of Santa Monica, Calif. Dr. Audeh is also survived by his brother, Dr. Costandi Audeh and his wife Margaret, of Phoenix, Ariz. Dr. Audeh was preceded in death by his beloved sisters, Alice, Aida and Hilda; and by his parents, Dr. Amin and Olga Audeh, with whom he will be laid to rest in the St. George Orthodox Cemetery. In lieu of flowers, the family requests donations be made to the Palestinian Children's Relief Fund (www.pcrf.net), a medical charity providing medical care to Palestinian children. Online condolences may be left at http://www.cedarmemorial.com under obituaries.

Read the original here:

Dr. William Audeh - The Gazette

‘Friends’ Star Matthew Perry Dated Julia Roberts By Wooing Her With Quantum Physics and Funny Jokes – Showbiz Cheat Sheet

Matthew Perry played one of the funniest characters on Friends. He excelled at deadpan, self-deprecating humor that truly made Chandler Bing a delight. And as it turns out, that sense of silliness extended to his life beyond the set.

Perry was relatively unknown in Hollywood before getting cast in the iconic NBC sitcom. Playing Chandler helped catapult the actor to fame, and the role even led to a few high-profile romances with fellow Hollywood stars.

The Friends alum even hooked up with A-lister Julia Roberts, and it was all thanks to funny jokes and a bit of quantum physics.

Before his role of Chandler made Perry a superstar, the actor struggled with feelings of awkwardness in his dating life.

During the early stages of Friends, he told show creators that while he was not an unattractive man, he did feel just awful with women, InTouch reported.

"I also am not comfortable with any silence at all," Perry said during a 2004 Dateline interview. "I have to break any awkward moment or silence with a joke."

Eventually, Perry's jovial nature attracted multiple romantic partners. Goofing around even led to a long-term relationship with Roberts.

By 1996, Friends was one of the hottest comedies on television. NBC execs decided to capitalize on that popularity by airing a special 2-part episode immediately following the Super Bowl game that year. And they knew they needed some special celebrity guest stars to make it even more enticing.

"Getting Julia Roberts was incredibly exciting. We knew she would have the right touch for it. And when she said yes, it was pretty awesome," series co-creator Marta Kauffman told The Hollywood Reporter.

Producer Kevin S. Bright followed up with a funny story about how that happened. "Do you know the story of how we got her? Matthew (Perry) asked her to be on the show," Bright recalled.

"She wrote back to him, 'Write me a paper on quantum physics and I'll do it.' My understanding is that Matthew went away and wrote a paper and faxed it to her the next day."

Roberts played Perry's love interest in the Friends episode titled "The One After the Super Bowl." And their playful banter didn't end there. After filming wrapped, the two actors stayed in touch and eventually began a romantic relationship.

"There was a lot of flirting over faxing," writer Alexa Junge told THR. "She was giving him these questionnaires like, 'Why should I go out with you?' And everyone in the writers' room helped him explain to her why. He could do pretty well without us, but there was no question we were on Team Matthew and trying to make it happen for him."

Perry and Roberts dated for about one year, doing their best to keep the relationship hidden from nosy tabloid reporters. And it was all thanks to Perry's sense of humor channeled through Chandler.

The rest is here:

'Friends' Star Matthew Perry Dated Julia Roberts By Wooing Her With Quantum Physics and Funny Jokes - Showbiz Cheat Sheet

A world-first method to enable quantum optical circuits that use photons – Tech Explorist

The collective sum of the world's data will grow from 33 zettabytes this year to 175 zettabytes by 2025. The security and privacy of such sensitive data remain a big concern.

Emerging quantum communication and the latest computation technologies offer a promising solution. However, it requires powerful quantum optical circuits that can securely process the massive amounts of information we generate every day.

To help enable this technology, scientists in USC's Mork Family Department of Chemical Engineering and Materials Science have made a breakthrough in quantum photonics.

A quantum optical circuit uses light sources to generate photons on-demand in real-time. The photons act as information-carrying bits (qubits).

These light sources are nano-sized semiconductor quantum dots: tiny manufactured collections of tens of thousands to a million atoms, packed within a volume whose linear size is less than a thousandth of the thickness of a typical human hair, buried in a matrix of another suitable semiconductor.

They have so far been demonstrated to be the most flexible on-demand single-photon generators. The optical circuit requires these single-photon sources to be arranged on a semiconductor chip, with photons of almost identical wavelength guided from the sources along defined paths. This permits them to be manipulated to form interactions with other photons and particles to transmit and process information.

Until now, there has been a significant barrier to the development of such circuits. Because the dots have different sizes and shapes, the photons they release do not have uniform wavelengths. This, and the lack of positional order, makes them unsuitable for use in the development of optical circuits.

In this study, scientists showed that single photons could be emitted uniformly from quantum dots arranged precisely. They used this method of aligning quantum dots to create single quantum dots with remarkable single-photon emission characteristics.

It is expected that the ability to align uniformly-emitting quantum dots precisely will enable the production of optical circuits, potentially leading to novel advancements in quantum computing and communications technologies.

Jiefei Zhang, currently a research assistant professor in the Mork Family Department of Chemical Engineering and Materials Science, said, "The breakthrough paves the way to the next steps required to move from lab demonstration of single-photon physics to chip-scale fabrication of quantum photonic circuits. This has potential applications in quantum (secure) communication, imaging, sensing, and quantum simulations and computation."

The corresponding author, Anupam Madhukar, said it is essential that quantum dots be ordered in a precise way so that photons released from any two or more dots can be manipulated to connect on the chip. This will form the basic building unit for quantum optical circuits.

If the source where the photons come from is randomly located, this can't be made to happen.

The current technology that allows us to communicate online, for instance using a technological platform such as Zoom, is based on the silicon integrated electronic chip. If the transistors on that chip are not placed in exact designed locations, there would be no integrated electrical circuit. It is the same requirement for photon sources such as quantum dots to create quantum optical circuits.

Evan Runnerstrom, program manager at the Army Research Office, an element of the U.S. Army Combat Capabilities Development Command's Army Research Laboratory, said, "This advance is an important example of how solving fundamental materials science challenges, like how to create quantum dots with precise position and composition, can have big downstream implications for technologies like quantum computing. This shows how ARO's targeted investments in basic research support the Army's enduring modernization efforts in areas like networking."

Using a method called SESRE (substrate-encoded size-reducing epitaxy), scientists created a precise layout of quantum dots for the circuits. They fabricated regular arrays of nanometer-sized mesas with a defined edge orientation, shape, and depth on a flat semiconductor substrate composed of gallium arsenide (GaAs). Quantum dots were then created on top of the mesas by adding appropriate atoms.

Zhang said, "This work also sets a new world record of ordered and scalable quantum dots in terms of the simultaneous purity of single-photon emission, greater than 99.5%, and in terms of the uniformity of the wavelength of the emitted photons, which can be as narrow as 1.8 nm, a factor of 20 to 40 better than typical quantum dots."

With this uniformity, it becomes feasible to apply established methods such as local heating or electric fields to fine-tune the photon wavelengths of the quantum dots to exactly match each other, which is necessary for creating the required interconnections between different quantum dots for circuits.

"We now have an approach and a material platform to provide scalable and ordered sources generating potentially indistinguishable single photons for quantum information applications. The approach is general and can be used for other suitable material combinations to create quantum dots emitting over a wide range of wavelengths preferred for different applications, for example, fiber-based optical communication or the mid-infrared regime, suited for environmental monitoring and medical diagnostics."

The rest is here:

A world-first method to enable quantum optical circuits that use photons - Tech Explorist

The Super Bowl: What is time? – SB Nation

What time is the Super Bowl? Super Bowl LV will be played on Feb. 7, 2021, at Raymond James Stadium in Tampa, Florida. The game, which will be contested by the AFC's Kansas City Chiefs and the NFC's Tampa Bay Buccaneers, will kick off at 6:30 p.m. ET (5:30 p.m. CT; 3:30 p.m. PT). In the United States, you can watch on CBS. The Super Bowl LV Halftime Show will be headlined by The Weeknd.

Time is a notoriously hard concept to pin down. The first person I'm aware of to take a serious crack at it is Aristotle, who offers up his definition of time in Book IV of his Physics (parts 10 and 11): time is "number of movement in respect of the before and after."

This curiously circular definition (citing the before and after in a definition of time somewhat evades the issue at hand) is a relative one. For Aristotle (who seems more interested in playing with the concept of now than with time itself anyway), time seems to be a sort of basis for change, although its exact nature is confusing, both for the philosopher himself and for anyone unfortunate enough to attempt to thoroughly digest his work.

Relative time turns out not to be that useful, and one of the great achievements of early modernity was capturing and taming time. The development of regular clocks allows for many things, not least more precise measurements of everything else. It's impossible to imagine, for instance, the grand edifice of Newtonian physics being built on water clocks and sundials.

Thanks to the magic of clocks, these days we're used to time as a constant, the now ticking second by second into the future with implacable rhythm. This is very helpful both for understanding the immediate universe and for maintaining a functioning society. But it's also both physically wrong and hideously unnatural.

Einstein dealt with the non-static nature of time in 1905 with On the Electrodynamics of Moving Bodies (tl;dr: holding the speed of light constant means that time must flow differently for observers traveling at different speeds, a fact which has since been proved experimentally), and although I'm not drunk enough to read about quantum physics, I'm going to go ahead and assume that quantum theories of time are pretty gnarly too. Humanity's general concept of time only works on a limited, parochial scale.
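
For the record (this is the standard textbook result, not anything specific to that paper), the whole effect fits in one line:

```latex
% Special relativity's time dilation: a clock moving at speed v
% relative to an observer ticks slow by the Lorentz factor gamma.
\[
  \Delta t' = \gamma\,\Delta t,
  \qquad
  \gamma = \frac{1}{\sqrt{1 - v^2/c^2}}
\]
% At stadium speeds, v << c, gamma is indistinguishable from 1,
% which is why kickoff can be scheduled to the minute.
```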

And honestly, it doesn't work there either. We don't experience time the way our machines do. It contracts and contorts, extending March 2020 into hideous decades and turning what ought to be joyous hours into a stumble of drunken seconds. People don't experience time as a regimented flow. Sometimes we pretend to, and every now and then we force ourselves to sync up with it, but formal time is too abstract for us to stay with for long.

Sports are a great example of our utter inability to mesh perceived time with real time. Since it's Super Bowl week, let's take football. An NFL game nominally consists of four 15-minute quarters, but in practice lasts for hours. Why? Because the game clock twists and turns, freezing at some points but not others. The rules of football, with their play clocks and timeouts and so on, act as a supplemental set of physics, but it's not just the rules that determine how long a game goes.
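
As a toy model (all numbers invented; real NFL clock rules are far more baroque), the whole phenomenon is just two counters that disagree:

```python
import random

# Toy model of game clock vs. wall clock (numbers invented for
# illustration; actual NFL timing rules are far more baroque).
game_seconds = 0.0    # the nominal 60 minutes of football
wall_seconds = 0.0    # what your watch experiences

while game_seconds < 60 * 60:
    live_ball = random.uniform(3, 8)      # seconds of actual play
    stoppage = random.uniform(5, 25)      # huddles, replays, ads, timeouts
    game_seconds += live_ball             # the game clock only sees this...
    wall_seconds += live_ball + stoppage  # ...but you sit through all of it

print(f"game time: {game_seconds / 60:.0f} minutes")
print(f"wall time: {wall_seconds / 3600:.1f} hours")
```

Run it and 60 minutes of football reliably balloons into a few hours of your life, which is roughly the figure quoted below.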

On average, the Super Bowl takes 3 hours, 44 minutes from start to finish. This is significantly longer than the NFL average, despite no changes to the rules of the sport, and is entirely a product of capitalism's bizarre intersection with cultural events. Super Bowl ads cost a lot of money, as does the halftime show, and therefore more time needs to be made (made?!) to accommodate both. In a certain sense, then, the societal environs of the game warp time within the game.

The Super Bowl, then, operates on about three different layers of time, all distressed in barely-sensical ways. This is fine, because time makes no fucking sense and never will. Things happen, they appear to be irreversible because of ... entropy? ... and we all just hang around and deal with it. What is time? Honestly, I haven't the foggiest idea. Maybe the post my friend Chris Greenberg wrote three years ago might help?

I've given myself a headache now, so I hope this post is long enough for SEO purposes. Dulce et decorum est pro google mori.

Read the original:

The Super Bowl: What is time? - SB Nation