
Category Archives: Quantum Physics

Quantum computing skills are hard to find. Here’s how companies are tackling the shortage – ZDNet

Posted: November 13, 2021 at 10:56 am

Quantum computing has the potential to fundamentally transform the technology industry by applying the weird effects of the quantum realm to complex business problems. But right now, quantum computing faces a more mundane problem itself: finding enough recruits.

Demand for digital skills in the workplace has been on a steady upward trend for years, but the sudden increased reliance on technology since the start of 2020 has made competition in tech recruitment even more fierce.

The CIO's guide to Quantum computing

Quantum computers offer great promise for cryptography and optimization problems, and companies are racing to make them practical for business use. ZDNet explores what quantum computers will and won't be able to do, and the challenges that remain.

Read More

The challenge is even greater for organizations dealing in highly specialized technologies. Quantum computing, for example, combines a variety of specialist fields such as quantum theory, advanced mathematics, and computer science that aren't seen on your typical CV, shrinking the talent pool considerably for companies looking to hire in this nascent, but increasingly competitive, industry.

SEE: Quantum computing's next big challenge: A quantum skills shortage

"It is incredibly small," says Samantha Edmondson, head of talent at British quantum computing startup, Universal Quantum, which is on a mission to build the world's first million-qubit quantum computer.

"Say if we were looking to hire an experienced quantum physicist that had the kind of expertise we needed, then yes, you're looking at a small handful of academic groups across the world that you can really pick from."

Quantum computers operate on inherently different principles to classical computers, requiring a new approach to problem-solving and a workforce that combines academic, technical, and business expertise.

No one candidate is going to possess all of these. "It involves so many different skills: we need classical hardware engineers, we need software engineers, we need mathematicians, we need simulation and modelling experts," says Edmondson.

"I think the challenge for us is, if we go to hire a classical engineer, they don't have the physics background; if we hire a physicist, they're not used to working with classical hardware engineering analogue design is new to them."

Another fundamental challenge for businesses is getting people interested in technical fields to begin with.

Not only are fewer young people taking IT and STEM-related subjects at school, but research also suggests that younger generations aren't all too confident about their chances of landing a career in tech either.

Robert Liscouski, CEO of Quantum Computing Inc (QCI), says this is reflective of endemic problems in how young people are educated, which doesn't necessarily include skills that are transferrable into the modern, professional workforce. "I think we're not doing a very good job at all of preparing young people for these technology jobs," he tells ZDNet.

"I think we still have this 19th Century education going on that's really focused on educating children so they can work in factories."

Better education, meanwhile, remains out of reach for most. "Where I live in Northern Virginia, we have a couple of academies that are geared for really advanced education in the secondary school and high school. The admission requirements in those programmes are so competitive that kids need to be at the absolute top of their game," says Liscouski.

"That's great you want that advanced thinking. But we need to figure out how we kind of bring that into the entire high school system and inculcate these kids into thinking about technology differently."

One solution for the shortage of specialist tech talent is for employers to bring on employees that are not necessarily already experts in the field, and then train them up on the job.

For a field like quantum computing, this still means being selective in the candidates you can hire: higher-level education and expertise in mathematics, physics, engineering, and coding are always going to rank highly, for instance. Even so, internships and training programs can help to lower the barriers to entry.

Universal Quantum runs a three-month internship scheme that's open to graduates who hold a master's in physics or mathematics. Typically, interns take on a specific project that they are given total responsibility for, with Universal Quantum providing support through one-on-one mentoring and drop-in sessions with quantum physicists.

SEE: What is quantum computing? Everything you need to know about the strange world of quantum computers

The internship culminates in them presenting their work to a large section of the company. "Typically, we'll speak to them at the beginning and get a sense of what their interests are, and then we'll match that to a company need we have," says Edmondson.

"They'll often say, 'I don't know anything about quantum' or 'I've never worked in quantum,' and we have to reassure them and say 'that's completely fine, we're happy to teach you that when you come here.' That's quite exciting to them."

Liscouski too believes that deep quantum expertise isn't necessarily a requirement for enterprises to begin taking advantage of quantum computing, although he acknowledges that not all companies have the resources to offer comprehensive training programmes. "It's very hard for small companies and it's very hard for medium-sized companies because you don't have that luxury of taking 10% of your workforce out and putting them in training for a period of time," he says.

"Typically, you hire people because you need them now, not because you need them in six months."

One alternative is to target students at university, college, or even school: something that QCI previously offered with its quantum computing clubs, where participants learn to use the company's software, Qatalyst.

"We're moving into actually the academic instructional program, where professors are using our software as part of their curriculum, and we've got a whole curriculum development programme for that," says Liscouski.

"We're trying to push this down to the lowest common denominator in terms of who can access it. We're even trying to get into high schools to help that workforce development."

Qatalyst is a quantum application accelerator that enables end users to transform real-world problems into quantum-ready requests, and then it processes those requests on a combination of classical computers and cloud-based quantum processors, including IonQ, D-Wave, and Rigetti.
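The article doesn't spell out how such a transformation works, but quantum-ready optimization requests are commonly expressed in QUBO (quadratic unconstrained binary optimization) form, the native input of samplers like D-Wave's. As an illustrative sketch only (not QCI's actual API), here is a tiny QUBO with a brute-force reference solver; a platform like Qatalyst would dispatch the same matrix to quantum or hybrid back ends instead:

```python
import itertools

def solve_qubo(Q):
    """Brute-force minimum of x^T Q x over binary vectors x.
    Only viable for tiny problems; quantum samplers take over
    when n grows, since the search space doubles with each bit."""
    n = len(Q)
    best_x, best_e = None, float("inf")
    for bits in itertools.product([0, 1], repeat=n):
        e = sum(Q[i][j] * bits[i] * bits[j]
                for i in range(n) for j in range(n))
        if e < best_e:
            best_x, best_e = bits, e
    return best_x, best_e

# Toy problem: "pick exactly one of two items", encoded as rewards on
# the diagonal and a penalty for picking both.
Q = [[-1, 2],
     [0, -1]]
print(solve_qubo(Q))  # ((0, 1), -1): one item picked, energy -1
```

The point of the intermediate form is exactly what Liscouski describes: the end user states the business problem as a matrix of costs and penalties, and the platform decides which hardware actually minimizes it.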

SEE: Quantum computing: Getting it ready for business

It enables businesses to make use of quantum applications without needing to have their own quantum computers or specialists.

"It's intended to try to put that technology in the hands of folks who are trying to solve business problems without having to be quantum programmers," says Liscouski.

"Our focus on our platform and the development that we've done to connect to any number of quantum platforms, is to disintermediate, or de-emphasise, the need for this high-end talent that's going to make a program run on a quantum computer."

In many ways, QCI proposes a technical solution to a shortage of specialist skills -- although Liscouski acknowledges that technology on its own is not the be-all and end-all. "We still have this shortcoming of all of this talent that's going to make this stuff work at scale," he adds.

"Quantum programmes are different than classical programmes. The way you look at a problem classically is different to the way you look at a problem from a quantum point of viewThinking about those problems requires a different level of thinking than classical computers."

Given the scant interest in technology careers shown by Generation Z, outreach is going to play a significant role in putting burgeoning, next-generation technologies like quantum computing on their radars: undoubtedly the first step to addressing any skills gaps.

Edmondson says tech organizations need to become involved in attracting young people at a grassroots level within schools, as well as getting more creative in how they portray opportunities in the tech sector. "It's definitely a responsibility of businesses to try to nurture the talent pool coming forward and undertake outreach that will assist with that, and that's just getting young people excited about things," she says.

SEE: Tech jobs have an image problem, and it's making the skills shortage worse

"We set up a lab in Spitalfields Market in London in a huge shipping container and were giving live demonstrations and experiments. People would come in and we'd talk to them about what we were doing and get them excited. That's relatively small-scale right now, but if somebody goes away and because of that becomes excited to learn something or do a new subject, that's a win."

Liscouski says that exposure to new technologies from an early age will also play an important role in equipping the next-generation workforce with key digital skills and have them working on real-world problems. "I think there has to be either post-high school training capability, or post-college training capability, or colleges have to extend and think more broadly about what they're preparing students to do," he adds.

"Because, at the end of the day, quantum computing like any computer that we know of unless there is end-user adoption, unless there is a focus on what problems can be solved, it becomes a science experiment and is just going to stay in the research world."

Read the original:

Quantum computing skills are hard to find. Here's how companies are tackling the shortage - ZDNet

Posted in Quantum Physics | Comments Off on Quantum computing skills are hard to find. Here’s how companies are tackling the shortage – ZDNet

Breakthrough Smoking Gun Discovery in Power Consumption in Electronic Devices – SciTechDaily

Posted: at 10:56 am

In a new FLEET theoretical study published recently in Physical Review Letters, the so-called "smoking gun" in the search for the topological magnetic monopole, also known as the Berry curvature, has been found.

The discovery is a breakthrough in the search for topological effects in non-equilibrium systems.

The group, led by UNSW physicist and Associate Professor, Dimi Culcer, identified an unconventional Hall effect, driven by an in-plane magnetic field in semiconductor hole systems that can be traced exclusively to the Berry curvature.

Enhanced topological effects will permit low-energy topological electronics to be viable for large-scale, room-temperature operation, and therefore support the IEEE roadmap towards future electronics sustainability.

"Isolating topological responses in 'regular' conductors has been a historically difficult task," says research team leader A/Prof Dimi Culcer (UNSW), "even though these topological responses are believed to be ubiquitous in solids."

Quantized responses, such as the quantum Hall and quantum spin-Hall effects, provide a clear fingerprint of topology, yet these have only been observed in one-dimensional (1D) systems and are intimately connected with the existence of edge states.

An experimental set-up for measuring the conventional Hall effect, with the magnetic field perpendicular to the surface. Credit: FLEET

In 'regular' conductors, meaning 2D and 3D systems, plenty of theoretical literature exists predicting topological contributions to, for example, the anomalous Hall effect, but these have never been observed unambiguously in a transport measurement.

There are two main reasons for this: (i) spin-up and spin-down electrons usually make opposite contributions, and these nearly cancel out; (ii) whatever is left is overwhelmed by disorder.

The new FLEET paper remedies this long-standing shortcoming by identifying a two-dimensional system in which the Berry curvature, and only the Berry curvature, is responsible for the Hall signal linear in the applied in-plane magnetic field.

"Remarkably, all disorder contributions vanish: we are not aware of any other multi-dimensional system in which this is true," says lead author, UNSW PhD student James Cullen. "Its experimental measurement is accessible to any state-of-the-art laboratory worldwide, hence we expect strong interest from experimentalists."

The research team sought the tell-tale mathematical trace called Berry curvature, which can be understood if we think of the concept of parallel transport that appears routinely in geometry and general relativity.

"Think of a vector as an arrow that we place somewhere on the surface of a solid object," explains Dimi. "Now we move the arrow around, making sure it always points at the same angle to the surface; this is in fact like a human being walking along the surface of the Earth. We eventually bring the arrow back to the starting point after it has circled around, and we find that, in general, it points in a different direction: it has magically rotated through some angle. The size of this angle is determined by the curvature of the surface."
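The arrow picture can be made quantitative. The sketch below (an illustration of the geometry, not code from the study) parallel-transports a tangent vector around a circle of colatitude theta on the unit sphere by repeatedly projecting it onto the local tangent plane; the angle by which the vector returns rotated converges to the enclosed solid angle, 2*pi*(1 - cos(theta)):

```python
import math

def holonomy_angle(theta, steps=20000):
    """Numerically parallel-transport a tangent vector around the
    circle of colatitude `theta` on the unit sphere and return the
    (unsigned, in [0, pi]) angle by which it comes back rotated."""
    def point(phi):        # position on the circle
        return (math.sin(theta) * math.cos(phi),
                math.sin(theta) * math.sin(phi),
                math.cos(theta))
    def south(phi):        # unit tangent pointing away from the pole
        return (math.cos(theta) * math.cos(phi),
                math.cos(theta) * math.sin(phi),
                -math.sin(theta))
    v = south(0.0)
    for k in range(1, steps + 1):
        n = point(2 * math.pi * k / steps)
        d = sum(vi * ni for vi, ni in zip(v, n))
        v = [vi - d * ni for vi, ni in zip(v, n)]   # project onto new tangent plane
        norm = math.sqrt(sum(vi * vi for vi in v))
        v = [vi / norm for vi in v]                 # keep unit length
    c = sum(a * b for a, b in zip(v, south(0.0)))
    return math.acos(max(-1.0, min(1.0, c)))

theta = math.pi / 3
print(holonomy_angle(theta))                    # close to pi: the arrow returns reversed
print(2 * math.pi * (1 - math.cos(theta)))      # solid-angle prediction: pi
```

At the equator (theta = pi/2) the loop is a great circle and the arrow comes back unrotated; tighten the loop toward a pole and the rotation shrinks toward zero, exactly as the curvature argument predicts.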

Hall conductivity response to magnetic field. Credit: FLEET

In quantum mechanics, instead of vectors we have wave functions, but we can describe the dynamics using the same picture, and the curvature is called the Berry curvature.

The angle of rotation is replaced by the famous Berry phase, named after the mathematical physicist Prof Sir Michael Berry, who formulated the problem in the 1980s. Later on, building on work by Nobel laureate David Thouless, Qian Niu of UT Austin showed that the Berry curvature behaves like the coveted magnetic monopole, but not in real space; rather, in momentum space, which is the space most condensed-matter physicists think in.
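This momentum-space monopole can be checked explicitly in a generic two-band toy model (not the semiconductor hole system of the paper; this is a standard illustration). For H(k) = sin(kx) σx + sin(ky) σy + (m − cos(kx) − cos(ky)) σz, the total Berry flux of the lower band through the Brillouin zone, computed with the gauge-invariant Fukui-Hatsugai plaquette method, is quantized to 2π times an integer "monopole charge" of magnitude 1 for 0 < m < 2:

```python
import cmath, math

def lower_state(kx, ky, m=1.0):
    """Normalized lower-band eigenvector of H(k) = d(k)·sigma."""
    dx, dy = math.sin(kx), math.sin(ky)
    dz = m - math.cos(kx) - math.cos(ky)
    d = math.sqrt(dx * dx + dy * dy + dz * dz)
    # Two gauge choices for the same eigenvector; pick the one that
    # is not numerically degenerate at this k-point.
    v1 = (complex(-dx, dy), complex(dz + d, 0.0))
    v2 = (complex(dz - d, 0.0), complex(dx, dy))
    v = v1 if abs(v1[0]) + abs(v1[1]) > abs(v2[0]) + abs(v2[1]) else v2
    norm = math.sqrt(abs(v[0]) ** 2 + abs(v[1]) ** 2)
    return (v[0] / norm, v[1] / norm)

def chern_number(nk=40, m=1.0):
    """Total Berry flux / 2*pi of the lower band (Fukui-Hatsugai method)."""
    def link(u, v):  # U(1) link variable between neighbouring k-points
        ip = u[0].conjugate() * v[0] + u[1].conjugate() * v[1]
        return ip / abs(ip)
    ks = [2 * math.pi * i / nk for i in range(nk)]
    flux = 0.0
    for i in range(nk):
        for j in range(nk):
            u00 = lower_state(ks[i], ks[j], m)
            u10 = lower_state(ks[(i + 1) % nk], ks[j], m)
            u11 = lower_state(ks[(i + 1) % nk], ks[(j + 1) % nk], m)
            u01 = lower_state(ks[i], ks[(j + 1) % nk], m)
            w = link(u00, u10) * link(u10, u11) * link(u11, u01) * link(u01, u00)
            flux += cmath.phase(w)  # Berry flux through this plaquette
    return round(flux / (2 * math.pi))

print(abs(chern_number()))  # 1: a unit monopole charge in momentum space
```

The plaquette construction is the discrete analogue of transporting the arrow around a tiny loop: each Wilson-loop phase is the Berry flux through one cell of the Brillouin zone, and summing them counts the enclosed monopole charge.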

The Berry curvature drives topological effects in out-of-equilibrium systems because when an electric field is applied an electron is accelerated, so its momentum changes. When this happens its wave function changes slowly, in the same way that the `arrow is rotated in parallel transport, and as a result of this gradual rotation a transverse (Hall) current is generated. The Onsager relations, which are fundamental to non-equilibrium physics, say that the Hall current does not dissipate energy. The extreme case is the quantum anomalous Hall effect (QAHE), a quantum effect key to the function of topological materials, in which edge currents can flow with effectively zero electrical resistance.

("Quantum" describes the step transition in the transverse (Hall) resistance, i.e., it varies in discrete steps rather than smoothly, while "anomalous" refers to the phenomenon's occurrence in the absence of any applied magnetic field.)

Researchers seek to enhance QAHE in order to protect topological behaviour at higher temperatures, allowing for topological electronics that would be viable for room-temperature operation.

"The significant reduction in electrical resistance permitted by room-temperature QAHE would allow us to significantly reduce the power consumption in electronic devices," says Dimi.

Reference: Generating a topological anomalous Hall effect in a non-magnetic conductor: an in-plane magnetic field as a direct probe of the Berry curvature by James H. Cullen, Pankaj Bhalla, E. Marcellina, A. R. Hamilton and Dimitrie Culcer, 21 June 2021, Physical Review Letters. DOI: 10.1103/PhysRevLett.126.256601

As well as support from the Australian Research Council (Centres of Excellence program), the authors acknowledge the support of the National Key Research and Development Program (China) and the China Postdoctoral Science Foundation.

Read the original here:

Breakthrough Smoking Gun Discovery in Power Consumption in Electronic Devices - SciTechDaily


Clever Combination of Quantum Physics and Molecular Biology – SciTechDaily

Posted: November 9, 2021 at 1:44 pm

Illustration of a quantum wave packet in close vicinity of a conical intersection between two potential energy surfaces. The wave packet represents the collective motion of multiple atoms in the photoactive yellow protein. A part of the wave packet moves through the intersection from one potential energy surface to the other, while another part remains on the top surface, leading to a superposition of quantum states. Credit: DESY, Niels Breckwoldt

A new analytical technique is able to provide hitherto unattainable insights into the extremely rapid dynamics of biomolecules. The team of developers, led by Abbas Ourmazd from the University of Wisconsin-Milwaukee and Robin Santra from DESY, is presenting its clever combination of quantum physics and molecular biology in the scientific journal Nature. The scientists used the technique to track the way in which the photoactive yellow protein (PYP) undergoes changes in its structure in less than a trillionth of a second after being excited by light.

"In order to precisely understand biochemical processes in nature, such as photosynthesis in certain bacteria, it is important to know the detailed sequence of events," Santra explains, describing their underlying motivation. When light strikes photoactive proteins, their spatial structure is altered, and this structural change determines what role a protein takes on in nature. Until now, however, it has been almost impossible to track the exact sequence in which structural changes occur. Only the initial and final states of a molecule before and after a reaction can be determined and interpreted in theoretical terms. "But we don't know exactly how the energy and shape changes in between the two," says Santra. "It's like seeing that someone has folded their hands, but you can't see them interlacing their fingers to do so."

Whereas a hand is large enough and the movement is slow enough for us to follow it with our eyes, things are not that easy when looking at molecules. The energy state of a molecule can be determined with great precision using spectroscopy, and bright X-rays, for example from an X-ray laser, can be used to analyze the shape of a molecule. The extremely short wavelength of X-rays means that they can resolve very small spatial structures, such as the positions of the atoms within a molecule. However, the result is not an image like a photograph, but instead a characteristic interference pattern, which can be used to deduce the spatial structure that created it.

Since the movements are extremely rapid at the molecular level, the scientists have to use extremely short X-ray pulses to prevent the image from being blurred. It was only with the advent of X-ray lasers that it became possible to produce sufficiently bright and short X-ray pulses to capture these dynamics. However, since molecular dynamics takes place in the realm of quantum physics where the laws of physics deviate from our everyday experience, the measurements can only be interpreted with the help of a quantum-physical analysis.

A peculiar feature of photoactive proteins needs to be taken into consideration: the incident light excites their electron shell to enter a higher quantum state, and this causes an initial change in the shape of the molecule. This change in shape can in turn result in the excited and ground quantum states overlapping each other. In the resulting quantum jump, the excited state reverts to the ground state, whereby the shape of the molecule initially remains unchanged. The conical intersection between the quantum states therefore opens a pathway to a new spatial structure of the protein in the quantum mechanical ground state.

The team led by Santra and Ourmazd has now succeeded for the first time in unraveling the structural dynamics of a photoactive protein at such a conical intersection. They did so by drawing on machine learning because a full description of the dynamics would in fact require every possible movement of all the particles involved to be considered. This quickly leads to unmanageable equations that cannot be solved.

"The photoactive yellow protein we studied consists of some 2000 atoms," explains Santra, who is a Lead Scientist at DESY and a professor of physics at Universität Hamburg. "Since every atom is basically free to move in all three spatial dimensions, there are a total of 6000 options for movement. That leads to a quantum mechanical equation with 6000 dimensions, which even the most powerful computers today are unable to solve."

However, computer analyses based on machine learning were able to identify patterns in the collective movement of the atoms in the complex molecule. "It's like when a hand moves: there, too, we don't look at each atom individually, but at their collective movement," explains Santra. Unlike a hand, where the possibilities for collective movement are obvious, these options are not as easy to identify in the atoms of a molecule. However, using this technique, the computer was able to reduce the approximately 6000 dimensions to four. By demonstrating this new method, Santra's team was also able to characterize a conical intersection of quantum states in a complex molecule made up of thousands of atoms for the first time.
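The study's actual pipeline uses more sophisticated nonlinear manifold learning, but the core idea (thousands of coordinates that secretly move along only a handful of collective modes) can be illustrated with plain PCA on synthetic data. Everything below is a hypothetical stand-in, not the paper's code:

```python
import numpy as np

rng = np.random.default_rng(0)

# 500 snapshots of 600 "atomic coordinates" that are secretly driven by
# only 4 latent collective modes, plus a little measurement noise;
# a toy analogue of ~6000 protein coordinates reducing to 4 dimensions.
n_frames, n_coords, n_modes = 500, 600, 4
latent = rng.normal(size=(n_frames, n_modes))    # the collective motions
mixing = rng.normal(size=(n_modes, n_coords))    # how each mode moves each coordinate
data = latent @ mixing + 0.01 * rng.normal(size=(n_frames, n_coords))

# PCA via SVD of the centred data matrix: the singular values tell us
# how much variance each principal component explains.
centred = data - data.mean(axis=0)
s = np.linalg.svd(centred, compute_uv=False)
explained = s**2 / np.sum(s**2)
print(float(np.sum(explained[:4])))  # ~1.0: four components capture nearly all motion
```

The spirit is the same as in the hand analogy: instead of tracking every coordinate, the analysis finds the few directions along which the atoms actually move together.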

The detailed calculation shows how this conical intersection forms in four-dimensional space and how the photoactive yellow protein drops through it back to its initial state after being excited by light. The scientists can now describe this process in steps of a few dozen femtoseconds (quadrillionths of a second) and thus advance the understanding of photoactive processes. "As a result, quantum physics is providing new insights into a biological system, and biology is providing new ideas for quantum mechanical methodology," says Santra, who is also a member of the Hamburg Cluster of Excellence CUI: Advanced Imaging of Matter. "The two fields are cross-fertilizing each other in the process."

Reference: Few-fs resolution of a photoactive protein traversing a conical intersection by A. Hosseinizadeh, N. Breckwoldt, R. Fung, R. Sepehr, M. Schmidt, P. Schwander, R. Santra and A. Ourmazd, 3 November 2021, Nature. DOI: 10.1038/s41586-021-04050-9

Read more:

Clever Combination of Quantum Physics and Molecular Biology - SciTechDaily


The quantum experiment that could prove reality doesn't exist – New Scientist

Posted: at 1:44 pm

We like to think that things are there even when we aren't looking at them. But that belief might soon be overturned thanks to a new test designed to tell us if quantum weirdness persists in macroscopic objects

By Thomas Lewton

Olena Sergienko/Unsplash

THERE is an old philosophy question about a tree in a forest. If it falls with nobody there to hear it, does it make a sound? Ask a quantum physicist and they might say the sound was there, but you couldn't be sure the tree was.

Quantum mechanics has long pushed the boundaries of our understanding of reality at its tiniest. Countless experiments have shown that particles spread out like waves, for instance, or seem to be in more than one place at once. In the quantum world, we can only know the likelihood that something will appear in one place or another until we look, at which point it assumes a definite position. This troubled Albert Einstein. "I like to think that the moon is there even if I am not looking at it," he said.

Now, a new class of experiments is putting Einstein's conviction to the test, seeing if quantum weirdness stretches beyond the tiny world of quarks, atoms and qubits into the everyday world of tables, chairs and, well, moons. "If you can go from one atom to two atoms to three to four to five to a thousand, is there any reason why it stops?" says Jonathan Halliwell at Imperial College London.

These experiments are not just investigating whether there is a hard boundary between the quantum and classical worlds, but also probing the true nature of reality. If the work goes as some theorists expect, it might just kick the legs out from under one of our most firmly held beliefs: that things exist regardless of whether we are looking at them.

Read the original here:

The quantum experiment that could prove reality doesn't exist - New Scientist


Lost in Space-Time newsletter: Will a twisted universe save cosmology? – New Scientist

Posted: at 1:44 pm

By Richard Webb

Albert Einstein's general theory of relativity didn't have to be

Hello, and welcome to November's Lost in Space-Time, the monthly physics newsletter that unpicks the fabric of the universe and attempts to stitch it back together in a slightly different way. To receive this free, monthly newsletter in your inbox, sign up here.

There's a kind of inevitability about the fact that, if you write a regular newsletter about fundamental physics, you'll regularly find yourself banging on about Albert Einstein. As much as it comes with the job, I also make no apology for it: he is a towering figure in the history of not just fundamental physics, but science generally.

A point that historians of science sometimes make about his most monumental achievement, the general theory of relativity, is that, pretty much uniquely, it was a theory that didn't have to be. When you look at the origins of something like Charles Darwin's theory of evolution by natural selection, for example (not to diminish his magisterial accomplishment in any way), you'll find that other people had been scratching around similar ideas surrounding the origin and change of species for some time as a response to the burgeoning fossil record, among other discoveries.

Even Einstein's special relativity, the precursor to general relativity that first introduced the idea of warping space and time, responded to a clear need (first distinctly identified with the advent of James Clerk Maxwell's laws of electromagnetism in the 1860s) to explain why the speed of light appeared to be an absolute constant.

When Einstein presented general relativity to the world in 1915, there was nothing like that. We had a perfectly good working theory of gravity, the one developed by Isaac Newton more than two centuries earlier. True, there was a tiny problem in that it couldn't explain some small wobbles in the orbit of Mercury, but they weren't of the size that demanded we tear up our whole understanding of space, time, matter and the relationship between them. But pretty much everything we know (and don't know) about the wider universe today stems from general relativity: the expanding big bang universe and the standard model of cosmology, dark matter and energy, black holes, gravitational waves, you name it.

So why am I banging on about this? Principally because, boy, do we need a new idea in cosmology now, and in a weird twist of history, it might just be Einstein who supplies it. I'm talking about an intriguing feature by astrophysicist Paul M. Sutter in the magazine last month. It deals with perhaps general relativity's greatest (perceived, at least) weakness: the way it doesn't mesh with other bits of physics, which are all explained by quantum theory these days. The mismatch exercised Einstein a great deal, and he spent much of his later years engaged in a fruitless quest to unify all of physics.

Perhaps his most promising attempt came with a twist (literally) on general relativity that Einstein played about with early on. By developing a mathematical language not just for how space-time bends (which is the basis of how gravity is created within relativity) but for how it twists, he hoped to create a theory that also explained the electromagnetic force. He succeeded in the first bit, creating a description of how massive, charged objects might twist space-time into mini-cyclones around them. But it didn't create a convincing description of electromagnetism, and Einstein quietly dropped the theory.

Well, the really exciting bit, as Sutter describes, is that this "teleparallel gravity" seems to be back in a big way. Many cosmologists now think it could be a silver bullet to explain away some of the most mysterious features of today's universe, such as the nature of dark matter and dark energy and the troublesome period of faster-than-light inflation right at the moment of the big bang that is invoked to explain features of today's universe, such as its extraordinary smoothness. Not only that, but there could be a way to test the theory soon. I'd recommend reading the feature to get all the details, but in the meantime, it's about as exciting a development as you'll get in cosmology these days.

Let's take just a quick dip into the physics arXiv preprint server, where the latest research is put up. One paper that caught my eye recently has the inviting title "Life, the universe and the hidden meaning of everything". It's by Zhi-Wei Wang at the College of Physics in China and Samuel L. Braunstein at the University of York in the UK, and it deals with a question that's been bugging a lot of physicists and cosmologists ever since we started making detailed measurements of the universe and developing cogent theories to explain what we see: why does everything in the universe (the strengths of the various forces, the masses of fundamental particles, etc.) seem so perfectly tuned to allow the existence of observers like us to ask the question?

This has tended to take cosmologists and physicists down one of two avenues. The first says things are how they are because that's how they're made. For some, that sails very close to an argument via intelligent design, aka the existence of god. The other avenue tends to be some form of multiverse argument: our universe is as it is because we are here to observe it (we could hardly be here to observe it if it weren't), but it is one of a random subset of many possible universes that happen to be conducive to intelligent life arising.

This paper examines more closely a hypothesis from British physicist Dennis Sciama (doctoral supervisor to the stars: among his students in the 1960s and 1970s were Stephen Hawking, quantum computing pioneer David Deutsch and the UK's astronomer royal, Martin Rees) that if ours were a random universe, there would be a statistical pattern in its fundamental parameters that would give us evidence of that. In this paper, the researchers argue that the logic is actually reversed. In their words: "Were our universe random, it could give the false impression of being intelligently designed, with the fundamental constants appearing to be fine-tuned to a strong probability for life to emerge and be maintained."

Full disclosure: I'm writing something on this very subject for New Scientist's 65th-anniversary issue, due out on 20 November. Read more there!

While I'm banging on about Einstein, I stumbled across one of my favourite features I've worked on while at the magazine the other day, and thought it was worth sharing. Called "Reality check: Closing the quantum loopholes", it's from 2011, a full 10 years ago, but the idea it deals with stretches back way before that and is still a very live one.

The basic question is: is quantum theory a true description of reality, or are its various weirdnesses (not least the entanglement of quantum objects over vast distances) indications of goings-on in an underlying layer of reality not described by quantum theory (or indeed any other theory to date)? I talked about entanglement quite a bit in last month's newsletter, so I won't go into its workings here.

The alternative idea of "hidden variables" explaining the workings of the quantum world goes back to a famous paper published by Einstein and two collaborators, Nathan Rosen and Boris Podolsky, back in 1935. It led Einstein into a long-drawn-out debate about the nature of quantum theory with another of its pioneers, Niels Bohr, that continued decorously right until Einstein's death in 1955. It wasn't until the 1980s that we began to have the theoretical and experimental capabilities to actually pit the two pictures against one another.

The observatories atop the volcano Teide on Tenerife were one scene of a bold test of quantum reality.

Phil Crean A/ Alamy

I love the story not just for this rich history, but also for the way that, after each iteration of the experiments (every time showing that quantum theory, and entanglement, are the right explanation for what is going on, whatever that might mean), the physicists found another loophole in the experiments that might allow Einstein's hidden-variable idea back into the frame again.

That led them to some pretty impressive feats of experimental derring-do to close the loopholes again; the feature opens with a group of modern physicists shooting single photons between observatories on Tenerife and La Palma in the Canary Islands. In an update to the story that we published in 2018 (with the rather explicit title "Einstein was wrong: Why normal physics can't explain reality"), they even reproduced the result with photons coming at us from galaxies billions of light years away, proving that, if not the whole universe, then a goodly proportion of it follows quantum rules. You can't win 'em all, Einstein.

One reason I've been thinking particularly frequently about Einstein and his work lately is that I've been putting together the latest New Scientist Essential Guide, called "Einstein's Universe". It's a survey of his theories of relativity and all those things that came out of them: the big bang universe and the standard model of cosmology, dark matter and energy, gravitational waves, black holes and, of course, the search for that elusive unifying theory of physics. I've just been putting the finishing touches to the Essential Guide with my left hand as I type this, and I think it's a fair expectation that you'll find me banging on about that (and Einstein) a lot more next month.

1. Talking of fine-tuned universes, if you haven't done so already, you can still catch up with Brian Clegg's New Scientist Event talk, "The Patterns That Explain the Universe", from last month, available on demand.

2. If you're a fan of big ideas (I hope that's why you're here) and like casting your net a little wider than just physics, then a ticket to our Big Thinkers series of live events gives you access to 10 talks from top researchers from across the board, including Harvard astronomer Avi Loeb on the search for extraterrestrial life and Michelle Simmons and John Martinis on quantum computing.

3. It happened just after my last newsletter, but it would be remiss not to mention the awarding of this year's Nobel prize to three researchers who played a leading role in advancing our understanding of chaotic systems, notably the climate. You can find out more about what they did here.


See the rest here:

Lost in Space-Time newsletter: Will a twisted universe save cosmology? - New Scientist

Posted in Quantum Physics | Comments Off on Lost in Space-Time newsletter: Will a twisted universe save cosmology? – New Scientist

From the atom bomb to quantum physics: how John Von Neumann changed the world – Telegraph.co.uk

Posted: at 1:44 pm

In embarking on his biography of von Neumann, Bhattacharya sets himself a considerable challenge: writing about a man who, through crisis after crisis, through stormy intellectual disagreements and amid political controversy, contrived always, for his own sake and others', to avoid unnecessary drama. What's a biographer to do, when part of his subject's genius is his ability to blend in with his friends and lead a good life? How to dramatise a man without flaws, who skates through life without any of the personal turmoil that makes for gripping storytelling?

If some lives resist the storyteller's art, Bhattacharya does a cracking job of hiding the fact. He sensibly, and ably, moves the biographical goal-posts, making this not so much the story of a flesh-and-blood man, more the story of how an intellect evolves, moving (as intellects often do, though rarely so spectacularly) from theoretical concerns, to their application, to their philosophy. As he moved "from pure mathematics to physics to economics to engineering," observed former colleague Freeman Dyson, "[Von Neumann] became steadily less deep and steadily more important."

Von Neumann did not really trust humanity to live up, morally, to its technical capacities. "What we are creating now," he told his wife, after a sleepless night contemplating an H-bomb design, "is a monster whose influence is going to change history, provided there is any history left." He was a quintessentially European pessimist, forged by years that saw the world he had grown up in being utterly destroyed. It was no mere cynic, though, who wrote: "We will be able to go into space way beyond the moon if only people could keep pace with what they create."

Bhattacharya's agile, intelligent, intellectually enraptured account of von Neumann's life reveals, after all, not a man from the future, not a one-dimensional cold-war warrior, and for sure not Dr Strangelove (though Peter Sellers nicked his accent). Bhattacharya argues convincingly that von Neumann was a man in whose extraordinarily fertile head the pre-war world found a lifeboat.

The Man from the Future: The Visionary Life of John von Neumann is published by Allen Lane at £20. To order your copy for £16.99 call 0844 871 1514 or visit the Telegraph Bookshop

Read more:

From the atom bomb to quantum physics: how John Von Neumann changed the world - Telegraph.co.uk

Posted in Quantum Physics | Comments Off on From the atom bomb to quantum physics: how John Von Neumann changed the world – Telegraph.co.uk

Physics, Math, and Culture Wars – Splice Today

Posted: at 1:44 pm

Werner Heisenberg (1901-1976), central figure in quantum physics, gave lectures in the 1950s that were published a half-century later as a book of essays, Physics and Philosophy: The Revolution in Modern Science. The formulator of the uncertainty principle had some thoughts about what we now call culture wars. Modern physics, he said, is part of "a general historical process that tends toward a unification and a widening of our present world." This process "would in itself lead to a diminution of those cultural and political tensions that create the great danger of our time."

"But it is accompanied," Heisenberg continued, "by another process which acts in the opposite direction. The fact that great masses of people become conscious of this process of unification leads to an instigation of all forces in the existing cultural communities that try to ensure for their traditional values the largest possible role in the final state of unification." He went on to express a hope that ultimately many different cultural traditions may live together, but he'd pointed out why getting there was so difficult: new technologies and communications were putting contrasting worldviews into direct competition for dominance.

That seems even more relevant now. My reading of it, though, is just happenstance. A friend sent the Heisenberg collection to me along with mathematician Edward Frenkel's excellent book Love and Math after she'd borrowed and lost my copy of the latter. Then I started perusing the Heisenberg book after watching Frenkel lecture on YouTube about whether infinity is real, a question also explored in Hannah Fry's Magic Numbers, a series I recently watched. Infinity and Heisenberg's principle are both areas where the search for definite knowledge ran into obstacles in the 20th century.

If you'd asked me, in the 1980s or 1990s, what the 2020s would be like, I'd probably have said something about a Mars colony. But Heisenberg was insightful in recognizing the underlying processes whereby a society with any prospect of settling the red planet would also have plenty of red-faced contretemps on Earth.

Heisenberg also was astute in perceiving, at the Cold War's height, that modern science "penetrates into those large areas of the present world in which new doctrines were established only a few decades ago as foundations for new and powerful societies." He specified that he was talking about Communism, as modern science "is confronted both with the content of the doctrines, which go back to European philosophical ideas of the nineteenth century (Hegel and Marx), and with the phenomenon of uncompromising belief."

Heisenberg: "Since modern physics must play a great role in these countries because of its practical applicability, it can scarcely be avoided that the narrowness of the doctrines is felt by those who have really understood modern physics and its philosophical meaning." He continued that the influence of science "should not be overrated," but it might be that the openness of modern science could make it easier even for larger groups of people to see that the doctrines are "possibly not so important for the society as had been assumed before."

In Love and Math, Frenkel, who came of age in the Soviet Union in the 1980s, notes that communist ideology controlled intellectual pursuit in the spheres of the humanities, economics, and social sciences. He continues: "Many areas of science were also dominated by the party line," for example with genetics having been banned for many years, "because its findings were deemed to contradict the teachings of Marxism."

In this environment, Frenkel writes, mathematics and theoretical physics were "oases of freedom." Though communist apparatchiks wanted to control every aspect of life, these areas were just too abstract and difficult for them to understand. Soviet leaders also realized the importance of these seemingly obscure and esoteric areas for the development of nuclear weapons, and that's why they didn't want to mess with these areas. In some ways, though, politics did intrude on math and physics education in the Soviet Union, as when Frenkel was barred from attending an elite school because of his partly Jewish ancestry.

The United States has long been the world leader in physics and math, but it'd be complacent to assume those subjects are well-shielded from our heated-up culture wars. Over the past decade, Common Core standards in math have been a political target, mostly from the right, with much parental resistance arising from a limited grasp of what the standards entail. Recently, gifted education, especially in math, has become a high-profile target for the left, with opportunities for students to tackle advanced topics seen as unfair and inequitable.

Heisenberg was not an optimist about humanity embracing reason. "We cannot close our eyes," he lectured, "to the fact that the great majority of the people can scarcely have any well-founded judgment concerning the correctness of certain important general ideas or doctrines. Therefore, the word 'belief' can for this majority not mean 'perceiving the truth of something' but can only be understood as 'taking this as the basis for life.' One can easily understand that this second kind of belief is much firmer, is much more fixed than the first one, that it can persist even against immediate contradicting experience and can therefore not be shaken by added scientific knowledge."

Kenneth Silber is author of In DeWitt's Footsteps: Seeing History on the Erie Canal and is on Twitter: @kennethsilber

See the rest here:

Physics, Math, and Culture Wars - Splice Today

Posted in Quantum Physics | Comments Off on Physics, Math, and Culture Wars – Splice Today

They discover how the brain has assimilated quantum concepts – Central Valley Business Journal

Posted: at 1:44 pm

11/07/2021 at 10:30 CET

A study from Carnegie Mellon University has investigated something surprising: how the human brain has managed to handle the most advanced concepts of modern physics, which span the subatomic, quantum and cosmological realms.

Until the emergence of quantum physics and the most advanced discoveries about the universe, scientists' brains dealt with comprehensible and measurable concepts, but those borders were blown apart throughout the 20th century.

These scientific developments have not only been transcendental in giving us an idea of the complexity of the world and the universe; they also revolutionized the traditional way of understanding matter and energy, with concepts such as wave-particle duality or dark matter, among many others.

Related topic: The brain knows how to anticipate the future and simplify the complexity of life

Concept adaptation

The new research has unraveled how the human brain has adapted to this conceptual revolution emanating from scientific knowledge.

Far from trying to locate where the brain stores all this complex information, the new research found out how the brain organizes highly abstract scientific concepts that are incomprehensible to ordinary logic.

Robert Mason and Marcel Just investigated the thought processes of their fellow physics faculty on advanced physics concepts, recording their brain activity using functional magnetic resonance imaging (fMRI).

They examined the activation patterns generated by new scientific concepts and found that when physicists' brains encounter something disruptive, they separate it from what they can already take on.

Separation of concepts

That is to say, concepts the brain can measure are placed in one region of the brain, while those it cannot measure or understand are separated into another neural space.

Another surprise found in this research was the great degree of agreement between physicists in the way their brains represented the disruptive concepts: although they were trained in different universities, languages and cultures, there were similarities in their brain representations.

This similarity in conceptual representations arises because "the brain system that automatically comes into play to process a given type of information is the one that is intrinsically best suited to that processing," the researchers explain in a statement.

The researchers highlight in this regard that the same brain regions are activated in all scientists when processing a new concept.

Predictable patterns

Another significant result of this research refers to the fact that it is possible to predict which neuronal pattern is activated by certain disruptive scientific concepts.

With the data collected in this research, the scientists developed a mathematical model that can accurately predict the brain activation pattern of a new concept, for example, dark matter, with an accuracy of 70%.

This result, the researchers point out, indicates that it is possible to understand the brain organization underlying complex concepts, and even to visualize it.

A brain scan developed in the course of this research shows the regions that the brain uses to identify the dimensions underlying complex concepts in physics, as well as the neuronal zone activated when it analyzes dark matter.

How did it happen

The new research also explains how the neural process developed that allowed the feat of assimilating concepts that had not emerged from perceptual experience.

The researchers explain in this regard that the neurons of the human brain have a large number of computational capabilities with various characteristics, and that experience determines which of these capabilities are used, in various possible ways and in combination with other brain regions, to perform particular thought tasks.

"The genius of civilization has been to use these brain capacities to develop new skills and knowledge that were not anticipated in the previous worldview," the researchers note.

What made all of this possible is brain adaptability: the scientific advances in physics that have marked human history in the last century were built on the new capabilities of human thought.

This realization is projected into the future: the secret of teaching new tricks to ancient brains, as the advancement of civilization has repeatedly done, is to train creative thinkers to develop new knowledge and inventions, by building on or reusing the inherent information-processing capabilities of the human brain.

Most powerful entity

Those new insights and inventions will be explained to others, whose brains will root them in the same information-processing capabilities that were used by the brains of the original developers.

In this way, mass communication and education can spread advances to entire populations, which will have the same capacities as the brains of scientists to take on new knowledge, even if it is disruptive.

That means that the progress of science, technology and civilization continues to be driven by the most powerful entity on Earth, the human brain, the scientists conclude.

Reference

The neuroscience of advanced scientific concepts. Robert A. Mason, et al. NPJ Science of Learning, volume 6, Article number: 29 (2021). DOI: https://doi.org/10.1038/s41539-021-00107-6

Top photo: Josh Gordon, Unsplash.

Originally posted here:

They discover how the brain has assimilated quantum concepts - Central Valley Business Journal

Posted in Quantum Physics | Comments Off on They discover how the brain has assimilated quantum concepts – Central Valley Business Journal

Ask Ethan: Would a false vacuum state of the Universe lead to our destruction? – Big Think

Posted: at 1:44 pm

One of the great existential worries that plagues the minds of theoretical physicists is that the vacuum of space might not be in its true vacuum state, but could instead reside in a false vacuum instead. If you were to remove everything you could imagine from a large region of space, including:

you would be left with purely empty space, or as close as we can come to a physical definition of "nothing." You might expect that if you were to draw an imaginary box around this region of nothing and measured the total amount of energy inside, you would find that it was precisely zero. But that's not what we find; we find that there actually is a positive, non-zero amount of energy inherent to space itself, even if we remove all the identifiable quantum and classical sources of matter and energy. What does this mean for the nature of the quantum vacuum, and in particular for the distinction between "true vacuum" and "false vacuum"? That's what Eric Mars wants to know, asking:

"Could you please explain what false vacuum and true vacuum mean and its implications in the existence of the universe."

It's a great question, and it requires that we start with the idea, specifically for physics, of zero.

In mathematics, zero is simply a number, signifying the absence of either a positive or negative amount of any quantity. In physics, however, there is another way to define zero: the zero-point energy of a system, or the lowest possible energy state that it can achieve while still remaining the same system we were initially talking about. For any physical system we can dream up, there will be at least one configuration for that system that has the lowest total amount of energy in it.

That lowest-energy configuration is known as the zero-point energy of a system. It would make sense (and for many of us, we would simply intuit that it's so) if the zero-point energy of any system were defined as zero. But that is not quite how it works.

Take the hydrogen atom, for example: a single electron orbiting a single proton. If you think classically, you would imagine that the electron could orbit that proton at any radius at all, from a large one down to a small one. Just as a planet can orbit a star at any distance, based on their mutual masses and relative speeds, you would think that a negatively charged electron could orbit a positively charged proton at any distance as well, based simply on the speed of the orbit and the balance of kinetic and potential energy.

But this ignores an extraordinarily important property of nature: the fact that the Universe is fundamentally quantum mechanical, and that the only allowable energy levels for an electron orbiting a proton are quantized. As a result, there is a lowest possible energy state that a physical system such as this can have, and that does not correspond to the electron sitting at rest directly atop the proton (that is, the lowest imaginable energy state). Instead, there is a lowest-energy state that is physically allowable, which corresponds to the electron orbiting the proton in the n=1 energy state.

Even if you cool your system down to absolute zero, there will still be this finite, non-zero energy that your system will have.
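For the hydrogen atom described above, the quantization can be made concrete with the standard Bohr formula for the allowed energy levels (a textbook result, included here only as illustration):

```latex
% Allowed energy levels of the hydrogen atom: only discrete values
% are permitted, indexed by the principal quantum number n.
E_n = -\frac{13.6\ \text{eV}}{n^2}, \qquad n = 1, 2, 3, \ldots
% The n = 1 state, E_1 = -13.6 eV, is the zero-point energy:
% no lower-energy configuration of the electron-proton system exists,
% even though a classical orbit could sit arbitrarily deep.
```

The key point is that the spectrum bottoms out at n = 1 rather than running down without limit, which is exactly what "zero-point energy" means for this system.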

This idea, of a zero-point energy to any quantum mechanical system, goes all the way back to Max Planck in 1911 and was extended to fields by Einstein and his collaborator, Otto Stern (the same Stern who formulated the infamous Stern-Gerlach experiment), in a paper they wrote back in 1913. If we fast-forward to today, more than 100 years later, we now understand that our Universe is governed by a combination of General Relativity, our law of gravitation, and quantum field theory, which describes the other three fundamental forces.

The idea of a zero-point energy to the fabric of space itself shows up in both General Relativity and quantum field theory, but it comes about in vastly different ways. In General Relativity, the curvature of space is what determines the future motion of matter and energy through the Universe, while the presence and distribution and motion of matter and energy in turn determines the curvature of space. Matter and energy tell spacetime how to curve, and that curved spacetime tells matter and energy how to move.

Almost.

Why is this only almost true? Because, as anyone who has ever performed an indefinite integral (from calculus) will recall, you are free to add a constant to your answer: the dreaded "+ c."

In General Relativity, this constant comes into play as a cosmological constant, and it can take on any positive or negative value that we like. When Einstein wanted to construct a static Universe, he threw in a positive constant to keep his toy model of the Universe (one where masses were evenly distributed infinitely throughout space) from collapsing; the cosmological constant would counteract gravitational attraction. There was no reason for this constant to have the positive, non-zero value that he assigned to it. He simply asserted it must be so, otherwise the Universe could not be static. With the discovery of the expanding Universe, the constant was no longer needed, and was discarded for more than 60 years.

On the other hand, there is quantum field theory, too. Quantum field theory encourages you to imagine all the ways that particles can interact with one another, including via the creation/annihilation of particle-antiparticle pairs as intermediate steps, radiative corrections, and any other sets of interactions that aren't forbidden by the laws of quantum physics. It then goes a step farther, however, which most people may not recognize. It says that in addition to these interacting fields in the presence of matter and energy, there are vacuum contributions, which represent how quantum fields in the vacuum of space, with no particles present at all, behave.

Now, here's where things get uncomfortable: we do not know how to calculate the zero-point energy of space from these quantum field theory methods, either. Each individual channel that we know how to calculate can contribute to this zero-point energy, and the way we find an individual contribution is to calculate what we call its vacuum expectation value. The problem is each such channel has an enormous vacuum expectation value: more than 100 orders of magnitude too large to be possible. Some channels have positive contributions and others have negative contributions.

Being unable to make a sensible calculation, we made an ignorant assumption: that all of the contributions would cancel out, summing to zero, and that the zero-point energy of space would, in fact, be precisely equal to zero.

Then, in the 1990s, something changed again. Observations of the Universe began to indicate that there was something causing the Universe's expansion to accelerate, and that thing, whatever it is, was consistent not with any form of matter or radiation, but rather with a positive, non-zero amount of zero-point energy to the fabric of space itself. We had just measured the value of the vacuum energy inherent to space, and it was very small, but very importantly, greater than zero.

This opened up a slew of questions.

Why would we worry about the last one? Because the most important property of the vacuum of space isn't what the precise value of the zero-point energy is; rather, it's vital to our Universe's stability that the vacuum of space has a zero-point energy that doesn't change. And just as a hydrogen atom in any excited state will have the capability of transitioning to a lower-energy state on its way down to the zero-point state, a Universe in a false vacuum will remain capable of transitioning to a true vacuum (or a lower-energy, but still false, vacuum) state.

You can think of this the same way you would think about starting a ball atop a mountain and allowing it to roll down, and down, and down, and down some more until it finally came to rest. If your mountainside is smooth, you can imagine that the ball would easily roll all the way down into the lowest part of the valley beneath the mountain, where it would settle. That's a true vacuum state: the lowest-energy state there is, where it's not physically possible to transition to a lower-energy state. In a true vacuum, you're already as low as you can go.

But if your mountainside is craggy, with pits, divots, moguls, and glacial lakes, you can imagine that your ball might come to rest somewhere other than the lowest possible point. Any other place it can remain for an indefinite period of time is not the true minimum but rather a false one. If we are talking about the vacuum state of the Universe, that means anything other than the lowest possible state is a false vacuum state.

Given that we have a positive, nonzero value for the cosmological constant in our Universe, it's certainly possible that we live in a false vacuum state, and that the true vacuum, whatever it may be, exists at some other, lower-energy state.

Now, it might also not be the case; we may be in the true vacuum state. If so, there is no possibility of transitioning to a lower-energy state, and here we will remain for the remainder of our Universe's existence.

But what if we live in a false vacuum state? Well, in a quantum Universe, no matter how large the distance is between a false and true minimum, how high the barrier is separating the false and true minimum, or how quickly or slowly the quantum mechanical wavefunction describing your state spreads out, there is always a finite, greater-than-zero probability of quantum tunneling from the higher-energy to the lower-energy state.
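The qualitative claim above (that the probability is always finite, however formidable the barrier) can be made quantitative with the standard WKB tunneling estimate from elementary quantum mechanics, shown here as a sketch rather than the full field-theory calculation of vacuum decay:

```latex
% WKB estimate for the probability of tunneling through a potential
% barrier V(x) between the classical turning points x_1 and x_2:
P \sim \exp\!\left( -\frac{2}{\hbar} \int_{x_1}^{x_2} \sqrt{2m \left( V(x) - E \right)} \, dx \right)
% The exponent is finite for any finite barrier, so P > 0 always:
% a taller or wider barrier suppresses the decay exponentially,
% but can never forbid it outright.
```

A taller barrier, a wider barrier, or a heavier "ball" all shrink the probability, but none of them can drive it to zero, which is exactly why a false vacuum can never be permanently safe.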

This is usually referred to as the vacuum catastrophe, because if we do quantum tunnel to a lower energy state, we have no reason to believe that the laws and/or constants that govern the Universe will remain unchanged. Wherever this vacuum decay occurs, things like atoms, planets, stars, and yes, human beings, will all be destroyed. This bubble of destruction will propagate outward at the speed of light, which means if it occurs, right now, anywhere within about 18 billion light-years of us, we will eventually be destroyed by it. This may even be suggested by our best measurements of the properties of the fundamental particles, which indicates that the electroweak force, one of the fundamental forces of nature, may be inherently metastable.

It's a grim thought, especially because we would never see it coming. One day, we would simply awaken to this wave of destruction that comes upon us at the speed of light, and then we would all be gone. In some ways, it is the most painless way to go that we can imagine, but it is also one of the saddest. Our cosmic legacy of all that ever was, is, or will be would instantaneously come to an end. All of the work that 13.8 billion years of cosmic evolution has done to create a Universe teeming with the ingredients for life, and possibly countless realizations of it, would be forever wiped out.

And yet, it is possible that something similar to this has already occurred: with the end of cosmic inflation and the onset of the hot Big Bang. A transition from a presumably very, very high energy vacuum state to a much lower-energy one, albeit a fundamentally different type of transition from quantum tunneling, is what brought inflation to an end and filled our Universe with matter and radiation some 13.8 billion years ago. Nevertheless, the possibility that we live in a false vacuum should remind us of how fleeting and fragile, and dependent upon the stability of the laws of physics, everything in our Universe is. If we live in a false vacuum state, and we could, every moment of existence could be our last.

Send in your Ask Ethan questions to startswithabang at gmail dot com!

Read this article:

Ask Ethan: Would a false vacuum state of the Universe lead to our destruction? - Big Think

Posted in Quantum Physics | Comments Off on Ask Ethan: Would a false vacuum state of the Universe lead to our destruction? – Big Think

Does modern cosmology prove the existence of God? – Big Think

Posted: at 1:44 pm

We know that everything in the Universe, as it exists today, arose from some pre-existing state that was different from how it is at present. Billions of years ago, there were no humans and no planet Earth, as our solar system, along with the ingredients necessary for life, first needed to form. The atoms and molecules essential to Earth also needed a cosmic origin: from the lives and deaths of stars, stellar corpses, and their constituent particles. The very stars themselves needed to form from the primeval atoms left over from the Big Bang. At every step, as we trace our cosmic history back farther and farther, we find that everything that exists or existed had a cause that brought about its existence.

Can we apply this logical structure to the Universe itself? Since the late 1970s, philosophers and religious scholars (along with a few scientists who also dabble in those arenas) have asserted that we can. Known as the Kalam cosmological argument, it asserts that whatever begins to exist has a cause, and that the Universe began to exist; therefore, the Universe has a cause for its existence.

So what, then, is the cause of the Universe's existence? The answer must be God. That's the crux of the argument that modern cosmology proves the existence of God. But how well do the premises hold up to scientific scrutiny? Has science proved them, or are other options possible or even likely? The answer lies neither in logic nor theological philosophy, but in our scientific knowledge of the Universe itself.

If you think about it rationally, it makes intuitive sense that something cannot come from nothing. After all, the idea that anything can come from nothing sounds absurd; if it could, it would completely undercut the notion of cause and effect that we so thoroughly experience in our day-to-day lives. The idea of creation ex nihilo, or "from nothing," violates our very ideas of common sense.

But our day-to-day experiences are not the sum total of all that there is to the Universe. There are plenty of physical, measurable phenomena that do appear to violate these notions of cause and effect, with the most famous examples occurring in the quantum Universe. As a simple example, we can look at a single radioactive atom. If you had a large number of these atoms, you could predict how much time would need to pass for half of them to decay: that's the definition of a half-life. For any single atom, however, if you ask, "When will this atom decay?" or, "What will cause this atom to finally decay?" there is no cause-and-effect answer.
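The contrast between predictable ensemble statistics and unpredictable individual decays can be illustrated with a short simulation. This is a minimal sketch, not drawn from the article; the half-life value and function names are arbitrary choices for illustration:

```python
import random

HALF_LIFE = 5.0  # arbitrary units; an illustrative assumption, not a real isotope


def surviving_fraction(t, half_life=HALF_LIFE):
    """Expected fraction of atoms that have not yet decayed after time t."""
    return 0.5 ** (t / half_life)


def simulate(n_atoms, t, half_life=HALF_LIFE, seed=42):
    """Monte Carlo version: each atom independently survives or decays.

    The ensemble behaviour is precisely predictable, but for any single
    atom the outcome is random: nothing singles out when it decays.
    """
    rng = random.Random(seed)
    p_survive = surviving_fraction(t, half_life)
    return sum(rng.random() < p_survive for _ in range(n_atoms))


# After exactly one half-life, the expected surviving fraction is 0.5,
# and a large simulated sample lands very close to that prediction.
```

Running `simulate(100000, 5.0)` returns a count very near 50,000, even though no individual atom's fate could have been predicted in advance, which is precisely the point the paragraph makes.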

There are ways you can force an atom to split apart: you can get the same effect with a cause. If you were to fire a particle at the atomic nucleus in question, for example, you could trigger its splitting apart and releasing energy. But radioactive decay forces us to reckon with this uncomfortable fact:

The same effect that we can achieve with an instigating cause can also be achieved, naturally, without any such instigating cause at all.

In other words, there is no cause for the phenomenon of when this atom will decay. It is as though the Universe has some sort of random, acausal nature to it that renders certain phenomena fundamentally indeterminate and unknowable. Many other quantum phenomena display this same type of randomness, including entangled spins, the rest masses of unstable particles, the position of a particle that's passed through a double slit, and so on. Indeed, there are many interpretations of quantum mechanics, paramount among them the Copenhagen Interpretation, where acausality is a central feature, not a bug, of nature.

You might argue, and some do, that the Copenhagen Interpretation isn't the only way to make sense of the Universe and that there are other interpretations of quantum mechanics that are completely deterministic. While this is true, it's also not a compelling argument; the viable interpretations of quantum mechanics are all observationally indistinguishable from one another, meaning they all have an equal claim to validity.

There are also many phenomena in the Universe that cannot be explained without inherently acausal quantum ideas, such as quantum fluctuations: particles and fields popping into and out of existence without any triggering cause.

We see evidence of this in deep inelastic scattering experiments that probe the internal structure of protons; we predict that it needs to occur in order to explain black hole decay and Hawking radiation. To assert that "whatever begins to exist must have a cause" ignores the many, many examples from our quantum reality where, to put it generously, such a statement has not been robustly established. It may be possible that this is the case, but it is anything but certain.

The second premise, that the Universe began to exist, is, believe it or not, even more dubious than the first. Whereas we can imagine that there is some fundamentally deterministic, non-random, cause-and-effect reality underlying what we observe as the bizarre and counterintuitive quantum world, it is very difficult to conclude that the Universe itself must have begun to exist at some point.

But what about the Big Bang?

That's what they all say, right? Isn't it true that our Universe began with a hot Big Bang some 13.8 billion years ago?

Kind of. Yes, it is definitely true that we can trace the history of our Universe back to an early, hot, dense, uniform, rapidly expanding state. It is true that we call that state the hot Big Bang. But what's not true, and has been known to be not true for some 40+ years, is the notion that the Big Bang is the beginning of space, time, energy, the laws of physics, and everything that we know and experience. The Big Bang wasn't the beginning but was rather preceded by a completely different state known as cosmic inflation.

An overwhelming body of observational evidence, particularly from the pattern of fluctuations imprinted on the cosmic microwave background, supports this picture.

Cosmic inflation corresponds to a phase of the Universe where it was not filled with matter and radiation, but rather it had a large, positive energy inherent to the fabric of space itself. Instead of getting less dense as the Universe expands, an inflating Universe maintains a constant energy density for as long as inflation persists. That means instead of expanding and cooling and slowing in its expansion, which the Universe has been doing since the start of the hot Big Bang, the Universe was, prior to that, expanding exponentially: rapidly, relentlessly, and at an unchanging rate.

This represents a tremendous change to our picture of what the beginning of things looked like. Whereas a Universe filled with matter or radiation will lead back to a singularity, an inflating spacetime cannot. Not just "may not," but cannot lead to a singularity. Remember, fundamentally, what it means to be an exponential in mathematics: after a certain amount of time, whatever you have will double. Then, when that same amount of time passes again, it doubles again, and so on and so on, without bound.

That same logic can be applied to the past: that same amount of time ago, whatever we had was half of what we have now. Take another, equivalent timestep backward, and it is halved once again. But no matter how many times you halve and halve and halve whatever you had initially, it will never reach zero. That's what inflation teaches us: our Universe, for as long as inflation went on, can only get smaller as we extrapolate backward, but can never reach a size of zero or a time that can be identified as the beginning.
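The halving argument can be made concrete with a few lines of exact arithmetic (a minimal sketch, working in units where one step backward equals one doubling time of the inflating Universe; the step count is arbitrary):

```python
from fractions import Fraction

# During inflation, the scale factor grows exponentially, so stepping
# backward in time by one doubling time halves it. Exact rational
# arithmetic confirms the halving never reaches zero, no matter how
# many steps into the past we extrapolate.
a = Fraction(1)  # scale factor at the end of inflation (arbitrary units)
for _ in range(2000):
    a /= 2  # one doubling time further into the past

print(a > 0)  # True: astonishingly tiny, but still strictly positive
```

Using exact fractions rather than floating-point numbers matters here: floats would eventually underflow to zero and falsely suggest a beginning, whereas the mathematical extrapolation never does.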

In the context of General Relativity and theoretical physics, we say that this means the Universe is past-timelike incomplete.

Unfortunately for us, in scientific terms, we can only measure and observe what the Universe gives us as measurable and observable quantities. For all the successes of cosmic inflation, it does something that we can only consider unfortunate: by its nature, it wipes out any information from the Universe that existed prior to inflation. Not only that, but it eliminates any such information that arose prior to the final tiny fraction-of-a-second just before the end of inflation, which preceded and set up the hot Big Bang. To assert that the Universe began to exist is completely unsupported, both observationally and theoretically.

It's true that, about 20 years ago, there was a theorem published, the Borde-Guth-Vilenkin theorem, which demonstrated that a Universe that always expands cannot have done so infinitely to the past. (It's another way of expressing past-timelike incompleteness.) However, there is nothing that demands that the inflating Universe be preceded by a phase that was also expanding. There are numerous loopholes in this theorem as well: if you reverse the arrow of time, the theorem fails; if you replace the law of gravity with a specific set of quantum gravitational phenomena, the theorem fails; if you construct an eternally inflating steady-state Universe, the theorem fails.

Again, as before, a Universe that came into existence from non-existence is a possibility, but it is neither proven nor does it negate the other viable possibilities.

By now, we have certainly established that the first two premises of the Kalam cosmological argument are, at best, unproven. If we assume that they are, nevertheless, true, does that establish that God is the cause of our Universe's existence? That is only defensible if you define God as "that which caused the Universe to come into existence from a state of non-existence," a definition that quickly leads to absurd conclusions.

Some would likely argue in the affirmative, but a mere "first cause" hardly sounds like the all-powerful, omniscient being that we normally envision when we talk about God. If the first two premises are true, and they have not been established or proven to be true, then all we can say is that the Universe has a cause, not that that cause is God.

The most important takeaway, however, is this: in any scientific endeavor, you absolutely cannot begin from the conclusion you hope to reach and work backward from there. Assuming the answer ahead of time is antithetical to any knowledge-seeking enterprise. You have to formulate your assertions in such a way that they can be scrutinized, tested, and either validated or falsified. In particular, you cannot posit an unprovable assertion and then claim to have proved the existence of something by deductive reasoning. If you cannot prove the premise, all logical reasoning predicated upon it is unsubstantiated.

It remains possible that the Universe does, at all levels, obey the intuitive rule of cause and effect, although the possibility of a fundamentally acausal, indeterminate, random Universe remains in play (and, arguably, preferred) as well. It is possible that the Universe did have a beginning to its existence, although that has by no means been established beyond any sort of reasonable scientific doubt. And if both of those things are true, then the Universe's existence would have a cause, and that cause may be (but isn't necessarily) something we can identify with God. However, "possible" does not equate to "proof." Unless we can firmly establish many things that have yet to be demonstrated, the Kalam cosmological argument will only convince those who already agree with its unproven conclusions.

Read the original:

Does modern cosmology prove the existence of God? - Big Think
