Ninja Sex Party's Brian Wecht '97 talks rings, physics and musical comedy – The Williams Record

When Brian Wecht '97 lost his high school class ring while living in Gladden his junior year, he didn't think much of it. The ring was bulky and ornately carved, with a hefty green gemstone embedded in the center ("definitely not my vibe," Wecht said), and he was content to forget about it.

That was until a little over two months ago, when he received an email from Assistant Director of Alumni Relations Juan Baena '07 with the subject line: "Your high school ring?"

A forgotten class ring had been recovered from Room 46 in Gladden, Baena explained, back in May of 1996. The ring spent the following 24 years in a custodian's drawer, forgotten again, until it ended up on Baena's desk just days before campus closed this March. Tracking down the owner was a matter of a simple search in the records database, as Wecht's initials and the name of his high school are engraved on the ring.

Wecht shared the exchange on Twitter, and the post quickly gained traction, receiving more than 40,000 likes. He was nonchalant about the response.

"I'm kind of a public figure because of this band I'm in," he said. "I don't mean this to come across in a jerk-y way, but it's not the strangest thing in the world for my tweets to get a couple thousand likes."

Actually, Wecht was being modest: His Twitter account has almost half a million followers, and as a founding member of the musical comedy duo Ninja Sex Party, he's no stranger to public attention. In the band, Wecht's stage persona is Ninja Brian, a supernatural psychopath whose silent demeanor and shadowy image contrast sharply with the colorful spandex of his flamboyant bandmate Dan Avidan, known on stage as Danny Sexbang.

Their work ranges from raucous comedy-rock, with lyrics that are raunchy but good-natured, to cover albums that display their affinity for '70s and '80s hair metal and prog rock. Avidan performs the vocals, and Wecht focuses on keyboards; for their first few albums, Wecht handled all of the instruments, but the duo now works with a backing band.

On Ninja Sex Party's first international trip, to a sketch comedy festival in Toronto, Wecht almost lost the only other ring he's ever owned: his wedding ring.

"I try to stay in character in the sense that I take my wedding ring off when I'm in costume, and when I got back to the green room to put back on my normal clothes, it was gone," Wecht said. A group of other performers helped him scour the green room ("luckily, comedians are generally good people," he said) and Avidan eventually retrieved the ring. "I've had some close calls a couple of other times with it," Wecht said, "but I haven't lost it yet."

But before he'd donned Ninja Brian's black balaclava and perfected his steely glare, before he'd dreamed of starring opposite Danny Sexbang in over-the-top music videos that regularly get millions of views, Wecht was a theoretical physicist.

And before that, he was a long-haired, bespectacled kid from northern New Jersey who was trying to figure out what courses to take. His first-year advisor, Professor Emeritus of Mathematics Edward Burger, persuaded Wecht to try an advanced math class, and Wecht ended up carving out a path at the College as a music and math double major.

In his free time, Wecht participated in almost every instrumental ensemble in the music department: He conducted the student symphony, was in the jazz ensemble and Symphonic Winds and occasionally played saxophone in the Berkshire Symphony or joined small jazz groups. He was also in the band for Frosh Revue (the only comedy group he was at all involved with) and conducted and played in the pit for several Cap & Bells musicals.

Math and music were Wecht's passions in the classroom, but he also found space for the first two years of the core physics curriculum, leading him to take quantum mechanics as an elective in his senior year. That class, with Professor of Physics Tiku Majumder, inspired Wecht to consider pursuing physics after graduation, setting him down what he described as "a very curvy path" to graduate school.

Though he was initially set to enroll in a doctoral program in music composition at Duke, Wecht canceled his plans, got a short-term job teaching in Connecticut and spent the summer studying for the physics GREs. After getting "essentially a zero" on his first try, he managed to eke out an adequate score, and he soon began working toward his doctorate at the University of California, San Diego, which he completed in 2004.

Following graduate school, where he concentrated on theoretical particle physics, Wecht took a series of postdoctoral positions at Harvard, MIT, the Institute for Advanced Study and the University of Michigan. While he was at MIT, he indulged his passions for music and comedy as the musical director for the Boston sketch club Improv Asylum.

"My main improv gig for a while was coaching 'create an hour-long musical from a title suggestion' kind of stuff," he said. "It's like all improv: When it's done well, you're like, 'Oh my god, this is literal magic.' And when it's bad, you're like, 'Oh, kill me.'"

When he was at the Institute for Advanced Study in Princeton, N.J., Wecht took advantage of the relative proximity to New York City to get involved with the musical comedy scene there. He was introduced to Avidan by a mutual friend at the Upright Citizens Brigade Theatre in 2009.

"We met because Dan sent me an email and was like, 'I have this idea for a band. It's called Ninja Sex Party. That's everything I know about this idea so far. So, like, let's talk about it,'" Wecht recalled. "I was like, 'That's a cool name. Let's discuss.'"

And so Ninja Brian was born. While Wecht differs from his character in that he is not a homicidal maniac, his natural deadpan and knack for intense stares complement Danny Sexbang's rakish exuberance.

Not long after the pair began collaborating, Wecht secured a position at the Centre for Research in String Theory at Queen Mary University of London. He and his wife, Rachel Wecht, hoped it would be their final move after years of traveling for work. Around the same time, Avidan moved to Los Angeles, on the opposite end of the globe, and ended up joining the YouTube channel Game Grumps, a popular comedic gaming web series.

Suddenly, Ninja Sex Party had a soapbox: YouTube provided an ideal platform for connecting with the kind of audience who would appreciate what they were trying to do, and a video-game-oriented side project, Starbomb, soon cropped up.

After a year, a baby, and a lot of soul searching, the group's popularity continued to grow, and Wecht began to sense that the band was on a trajectory where "maybe it wouldn't be the dumbest idea to do this full-time."

He was faced with an agonizing decision: a stable job in physics after years of drifting through academia, or the unpredictable life of a full-time comedy musician. Having recently turned 40, Wecht said he was aware that the situation carried the whiff of midlife crisis.

But when he received a formal job offer from Game Grumps, he knew that he would never have such a chance again. "I thought, 'If I don't do this now, this is going to be the thing I regret forever,'" he said.

When it came to quitting his job at the university, Wecht said he made "one huge tactical mistake": He broke the news to his colleagues on April Fools' Day. After explaining to some of the older faculty what a YouTube channel was, and making clear that he was serious, he still had to admit, "On paper, that's pretty damning. It really doesn't look good."

With his wife's reluctant approval, Wecht and his family moved to Los Angeles in the summer of 2015, and Ninja Sex Party's third album came out a few weeks later, peaking at No. 1 on Billboard's U.S. Comedy Albums chart.

Ninja Sex Party has since released eight albums and toured around the world. In addition to live shows, in which Ninja Brian often acts on stage as an agent of chaos and an arrogant jerk in order to rile up the audience, the band maintains a massive YouTube presence, with 1.32 million subscribers.

"Every song we write, we think about what the video is going to be," Wecht said, noting that the music videos, which have starred guests such as Stranger Things actor Finn Wolfhard, are a major way that their work gets attention.

Last summer, Wecht teamed up with songwriter and producer Jim Roach to form a children's music group called Go Banana Go! The band was profiled last week on NPR and released its debut album, Hi-YA!, earlier this month.

"It's going well, and I'm grateful every day that I get to actually do this," Wecht said. "It's an unusual, fun career that's easy to explain, even to little kids. I can explain to my five-year-old that I get to play music for a living, and it even seems cool to her."

His daughter (known by fans as Ninja Audrey) contributed lyrics and conceptual inspiration for "Pizza Feet," which is accompanied by an animated music video, and Rachel Wecht is featured on "Queen of No Share."

Although he tries to stay up to date on the world of physics through contact with friends and former colleagues on social media ("half of my Facebook feed is theoretical physicists") and attends the occasional lecture, Wecht said the fast-paced nature of physics research means it would be very difficult for him to jump back in. "I remember taking time off when my wife had a baby, and after just a few months, I was like, 'Wait, I don't understand what's happening in physics,'" he said.

Though he doesn't plan to return to the world of academia, physics will always be a part of Wecht's identity. "I say I'm a musician and YouTuber, sometimes throw comedian in there because that's a big part of it, and then also a former theoretical physicist," he said. "Some people will say 'retired theoretical physicist,' which is accurate, but also makes me sound like I'm 70."

He said he would be open to the idea of teaching in a more relaxed capacity, either on comedic songwriting or topics in science. In 2010, he co-founded The Story Collider, a nonprofit podcast organization aimed at blending science and storytelling, which hosted an event at the College in 2016. "I would absolutely love to do something else academic," he said, "but I think it would probably be a one-off, one-semester, weird-topic class."

What about a Winter Study course on comedy songwriting or storytelling in science?

"I would love to come teach something at Williams," he said. "The idea of spending January in Williamstown is very appealing to me."

Whether or not he'll ever return to the College in a professional role, Wecht's enduring appreciation for his time here shone through in his reaction to receiving the class ring, which arrived in the mail last week.

"To me, it's a testament to what kind of people Williams associates with, more than the actual object itself," he said, adding to his original tweet about the incident, where he wrote, "What a wonderful, considerate gesture, and so typical of the kind of people I knew at @WilliamsCollege."

Physicists Criticize Stephen Wolfram’s ‘Theory of Everything’ – Scientific American

Stephen Wolfram blames himself for not changing the face of physics sooner.

"I do fault myself for not having done this 20 years ago," the physicist turned software entrepreneur says. "To be fair, I also fault some people in the physics community for trying to prevent it happening 20 years ago. They were successful." Back in 2002, after years of labor, Wolfram self-published A New Kind of Science, a 1,200-page magnum opus detailing the general idea that nature runs on ultrasimple computational rules. The book was an instant best seller and received glowing reviews: the New York Times called it "a first-class intellectual thrill." But Wolfram's arguments found few converts among scientists. Their work carried on, and he went back to running his software company, Wolfram Research. And that is where things remained, until last month, when, accompanied by breathless press coverage (and a 448-page preprint paper), Wolfram announced a possible path to the fundamental theory of physics based on his unconventional ideas. Once again, physicists are unconvinced, in no small part, they say, because existing theories do a better job than his model.

At its heart, Wolfram's new approach is a computational picture of the cosmos, one where the fundamental rules that the universe obeys resemble lines of computer code. This code acts on a graph, a network of points with connections between them, that grows and changes as the digital logic of the code clicks forward, one step at a time. According to Wolfram, this graph is the fundamental stuff of the universe. From the humble beginning of a small graph and a short set of rules, fabulously complex structures can rapidly appear. "Even when the underlying rules for a system are extremely simple, the behavior of the system as a whole can be essentially arbitrarily rich and complex," he wrote in a blog post summarizing the idea. "And this got me thinking: Could the universe work this way?" Wolfram and his collaborator Jonathan Gorard, a physics Ph.D. candidate at the University of Cambridge and a consultant at Wolfram Research, found that this kind of model could reproduce some aspects of quantum theory and Einstein's general theory of relativity, the two fundamental pillars of modern physics.
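To make the graph-rewriting picture concrete, here is a minimal sketch in Python. The rewrite rule is invented for illustration (it is not one of Wolfram's actual rules): at each step, every edge (x, y) spawns a fresh node z and a new edge (y, z).

```python
# Minimal sketch of a graph-rewriting system in the spirit of Wolfram's
# models. The rule is a made-up example, not taken from his paper: each
# update step, every edge (x, y) spawns a fresh node z and an edge (y, z).

def step(edges, next_node):
    """Apply the rewrite rule once to every edge in the graph."""
    new_edges = list(edges)
    for (_, y) in edges:
        new_edges.append((y, next_node))
        next_node += 1
    return new_edges, next_node

edges, next_node = [(0, 1)], 2  # humble beginning: two nodes, one edge
for generation in range(4):
    edges, next_node = step(edges, next_node)
    print(f"step {generation + 1}: {len(edges)} edges")
# Prints 2, 4, 8, 16 edges: even a trivial rule grows the graph quickly.
```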

But the ability of Wolfram's model to incorporate currently accepted physics is not necessarily that impressive. "It's this sort of infinitely flexible philosophy where, regardless of what anyone said was true about physics, they could then assert, 'Oh, yeah, you could graft something like that onto our model,'" says Scott Aaronson, a quantum computer scientist at the University of Texas at Austin.

When asked about such criticisms, Gorard agrees, to a point. "We're just kind of fitting things," he says. "But we're only doing that so we can actually go and do a systematized search for specific rules that fit those of our universe."

Wolfram and Gorard have not yet found any computational rules meeting those requirements, however. And without those rules, they cannot make any definite, concrete new predictions that could be experimentally tested. Indeed, according to critics, Wolfram's model has yet to even reproduce the most basic quantitative predictions of conventional physics. "The experimental predictions of [quantum physics and general relativity] have been confirmed to many decimal places, in some cases, to a precision of one part in [10 billion]," says Daniel Harlow, a physicist at the Massachusetts Institute of Technology. "So far I see no indication that this could be done using the simple kinds of [computational rules] advocated by Wolfram. The successes he claims are, at best, qualitative." Further, even that qualitative success is limited: There are crucial features of modern physics missing from the model. And the parts of physics that it can qualitatively reproduce are mostly there because Wolfram and his colleagues put them in to begin with. This arrangement is akin to announcing, "If we suppose that a rabbit was coming out of the hat, then remarkably, this rabbit would be coming out of the hat," Aaronson says. "And then [going] on and on about how remarkable it is."

Unsurprisingly, Wolfram disagrees. He claims that his model has replicated most of fundamental physics already. "From an extremely simple model, we're able to reproduce special relativity, general relativity and the core results of quantum mechanics," he says, "which, of course, are what have led to so many precise quantitative predictions of physics over the past century."

Even Wolfram's critics acknowledge he is right about at least one thing: it is genuinely interesting that simple computational rules can lead to such complex phenomena. But, they hasten to add, that is hardly an original discovery. "The idea goes back long before Wolfram," Harlow says. He cites the work of computing pioneers Alan Turing in the 1930s and John von Neumann in the 1950s, as well as that of mathematician John Conway in the early 1970s. (Conway, a professor at Princeton University, died of COVID-19 last month.) To the contrary, Wolfram insists that he was the first to discover, back in the 1980s, that virtually boundless complexity could arise from simple rules. "John von Neumann, he absolutely didn't see this," Wolfram says. "John Conway, same thing."

Born in London in 1959, Wolfram was a child prodigy who studied at Eton College and the University of Oxford before earning a Ph.D. in theoretical physics at the California Institute of Technology in 1979, at the age of 20. After his Ph.D., Caltech promptly hired Wolfram to work alongside his mentors, including physicist Richard Feynman. "I don't know of any others in this field that have the wide range of understanding of Dr. Wolfram," Feynman wrote in a letter recommending him for the first-ever round of MacArthur "genius" grants in 1981. "He seems to have worked on everything and has some original or careful judgement on any topic." Wolfram won the grant at age 21, making him among the youngest ever to receive the award, and became a faculty member at Caltech and then a long-term member at the Institute for Advanced Study in Princeton, N.J. While at the latter, he became interested in simple computational systems and then moved to the University of Illinois in 1986 to start a research center to study the emergence of complex phenomena. In 1987 he founded Wolfram Research, and shortly after he left academia altogether. The software company's flagship product, Mathematica, is a powerful and impressive piece of mathematics software that has sold millions of copies and is today nearly ubiquitous in physics and mathematics departments worldwide.

Then, in the 1990s, Wolfram decided to go back to scientific research, but without the support and input provided by a traditional research environment. By his own account, he sequestered himself for about a decade, putting together what would eventually become A New Kind of Science with the assistance of a small army of his employees.

Upon the release of the book, the media was ensorcelled by the romantic image of the heroic outsider returning from the wilderness to single-handedly change all of science. Wired dubbed Wolfram "the man who cracked the code to everything" on its cover. "Wolfram has earned some bragging rights," the New York Times proclaimed. "No one has contributed more seminally to this new way of thinking about the world." Yet then, as now, researchers largely ignored and derided his work. "There's a tradition of scientists approaching senility to come up with grand, improbable theories," the late physicist Freeman Dyson told Newsweek back in 2002. "Wolfram is unusual in that he's doing this in his 40s."

Wolfram's story is exactly the sort that many people want to hear, because it matches the familiar beats of dramatic tales from science history that they already know: the lone genius (usually white and male), laboring in obscurity and rejected by the establishment, emerges from isolation, triumphantly grasping a piece of the Truth. But that is rarely, if ever, how scientific discovery actually unfolds. There are examples from the history of science that superficially fit this image: Think of Albert Einstein toiling away on relativity as an obscure Swiss patent clerk at the turn of the 20th century. Or, for a more recent example, consider mathematician Andrew Wiles working in his attic for years to prove Fermat's last theorem before finally announcing his success in 1995. But portraying those discoveries as the work of a solo genius, romantic as it is, belies the real working process of science. Science is a group effort. Einstein was in close contact with researchers of his day, and Wiles's work followed a path laid out by other mathematicians just a few years before he got started. Both of them were active, regular participants in the wider scientific community. And even so, they remain exceptions to the rule. Most major scientific breakthroughs are far more collaborative: quantum physics, for example, was developed slowly over a quarter-century by dozens of physicists around the world.

"I think the popular notion that physicists are all in search of the eureka moment in which they will discover the theory of everything is an unfortunate one," says Katie Mack, a cosmologist at North Carolina State University. "We do want to find better, more complete theories. But the way we go about that is to test and refine our models, look for inconsistencies and incrementally work our way toward better, more complete models."

Most scientists would readily tell you that their discipline is, and always has been, a collaborative, communal process. Nobody can revolutionize a scientific field without first getting the critical appraisal and eventual validation of their peers. Today this requirement is performed through peer review, a process Wolfram's critics say he has circumvented with his announcement. "Certainly there's no reason that Wolfram and his colleagues should be able to bypass formal peer review," Mack says. "And they definitely have a much better chance of getting useful feedback from the physics community if they publish their results in a format we actually have the tools to deal with."

Mack is not alone in her concerns. "It's hard to expect physicists to comb through hundreds of pages of a new theory out of the blue, with no buildup in the form of papers, seminars and conference presentations," says Sean Carroll, a physicist at Caltech. "Personally, I feel it would be more effective to write short papers addressing specific problems with this kind of approach rather than proclaiming a breakthrough without much vetting."

So why did Wolfram announce his ideas this way? Why not go the traditional route? "I don't really believe in anonymous peer review," he says. "I think it's corrupt. It's all a giant story of somewhat corrupt gaming, I would say. I think it's sort of inevitable that happens with these very large systems. It's a pity."

So what are Wolfram's goals? He says he wants the attention and feedback of the physics community. But his unconventional approach, soliciting public comments on an exceedingly long paper, almost ensures it will remain obscure. Wolfram says he wants physicists' respect. The ones consulted for this story said gaining it would require him to recognize and engage with the prior work of others in the scientific community.

And when provided with some of the responses from other physicists regarding his work, Wolfram is singularly unenthused. "I'm disappointed by the naivete of the questions that you're communicating," he grumbles. "I deserve better."

Recent Research Answers the Future of Quantum Machine Learning on COVID-19 – Analytics Insight

We have all seen movies or read books about an apocalyptic world where humankind is fighting against a deadly pathogen, and researchers are in a race against time to find a cure. But COVID-19 is not a fictional chapter; it is real, and scientists all over the world are frantically looking for patterns in data by employing powerful supercomputers in the hopes of a speedier breakthrough in vaccine discovery for COVID-19.

A team of researchers from Penn State University has recently unearthed a solution that has the potential to expedite the discovery of a novel coronavirus treatment: employing an innovative hybrid branch of research known as quantum machine learning, a young field that combines machine learning and quantum physics. The team is led by Swaroop Ghosh, the Joseph R. and Janice M. Monkowski Career Development Assistant Professor of Electrical Engineering and Computer Science and Engineering.

In cases where a computer science-driven approach is used to identify a cure, most methodologies leverage machine learning to screen different compounds one at a time to see if they can bond with the virus' main protease, or protein. The quantum machine learning method could yield quicker results and is more economical than any current method used for drug discovery.

According to Prof. Ghosh, discovering any new drug that can cure a disease is like finding a needle in a haystack. It is also an incredibly expensive, laborious, and time-consuming process. The current conventional pipeline for discovering new drugs can take between five and ten years from the concept stage to release to the market and could cost billions in the process.

He further adds, "High-performance computing such as supercomputers and artificial intelligence can help accelerate this process by screening billions of chemical compounds quickly to find relevant drug candidates."

"This approach works when enough chemical compounds are available in the pipeline, but unfortunately, this is not true for COVID-19. This project will explore quantum machine learning to unlock new capabilities in drug discovery by generating complex compounds quickly," he explains.

This work is supported by funding from the Penn State Institute for Computational and Data Sciences, coordinated through the Penn State Huck Institutes of the Life Sciences as part of its rapid-response seed funding for research across the university to address COVID-19.

Ghosh and his electrical engineering doctoral students Mahabubul Alam and Abdullah Ash Saki and computer science and engineering graduate students Junde Li and Ling Qiu had earlier worked on developing a toolset for solving a particular type of problem, known as combinatorial optimization problems, using quantum computing. Drug discovery falls into a similar category, and that experience made it possible for the researchers to apply the same toolset to the search for a COVID-19 treatment.

Ghosh considers the use of artificial intelligence for drug discovery to be a very new area. "The biggest challenge is finding an unknown solution to the problem by using technologies that are still evolving, that is, quantum computing and quantum machine learning. We are excited about the prospects of quantum computing in addressing a current critical issue and contributing our bit toward resolving this grave challenge," he elaborates.

According to a report by McKinsey & Partner, quantum computing technology is expected to reach a global market value of US$1 trillion by 2035. Quantum machine learning could further boost that economic value while helping the healthcare industry defeat COVID-19.

Is string theory worth it? – Space.com

Paul M. Sutter is an astrophysicist at SUNY Stony Brook and the Flatiron Institute, host of Ask a Spaceman and Space Radio, and author of "Your Place in the Universe." Sutter contributed this article to Space.com's Expert Voices: Op-Ed & Insights.

String theory has had a long and venerable career. Starting in the 1960s as an attempt to explain the strong nuclear force, it has now grown to become a candidate theory of everything: a single unifying framework for understanding just about all the things in and about the universe. Quantum gravity? String theory. Electron mass? String theory. Strength of the forces? String theory. Dark energy? String theory. Speed of light? String theory.

It's such a tempting, beautiful idea. But it's also been 60 years without a result, without a final theory and without predictions to test against experiment in the real universe. Should we keep hanging on to the idea?

There's a reason that string theory has held onto the hearts and minds of so many physicists and mathematicians over the decades, and that has to do with gravity. Folding gravity into our understanding of quantum mechanics has proven fiendishly difficult; not even Albert Einstein himself could figure it out. Despite all our attempts, we have not been able to craft a successful quantum description of gravity. Every time we try, the mathematics just gets tangled in knots of infinities, rendering predictions impossible.

But in the 1970s, theorists discovered something remarkable. Buried inside the mathematics of string theory was a generic prediction for something called a graviton, which is the force carrier of gravity. And since string theory is, by its very construction, a quantum theory, it automatically provides a quantum theory of gravity.

This is indeed quite tantalizing. It's the only theory of fundamental physics that simply includes gravity, and the original string theory wasn't even trying!

And yet, decades later, nobody has been able to come up with a complete description of string theory. All we have are various approximations that we hope describe the ultimate theory (and hints of an overarching framework known as "M-theory"), but none of these approximations are capable of delivering actual predictions for what we might see in our collider experiments or out there in the universe.

Even after all these decades, and the lure of a unified theory of all of physics, string theory isn't "done."

One of the many challenges of string theory is that it predicts the existence of extra dimensions in our universe that are all knotted and curled up on themselves at extremely small scales. Suffice it to say, there are a lot of ways that these dimensions can interfold: somewhere in the ballpark of 10^100,000. And since the particular arrangement of the extra dimensions determines how the strings of string theory vibrate, and the way that the strings vibrate determines how they behave (leading to the variety of forces and particles in the world), only one of those almost uncountable arrangements of extra dimensions can correspond to our universe.

But which one?

Right now it's impossible to say through string theory itself: we lack the sophistication and understanding to pick one of the arrangements, determine how the strings vibrate, and hence pin down the flavor of the universe corresponding to that arrangement.

Since it looks like string theory can't tell us which universe it prefers, lately some theorists have argued that maybe string theory prefers all universes, appealing to something called the landscape.

The landscape is a multiverse, representing all the 10^100,000 possible arrangements of microscopic dimensions, and hence all the 10^100,000 arrangements of physical reality. This is to say, universes. And we're just one amongst that almost-countless number.

So how did we end up with this one, and not one of the others? The argument from here follows something called the Anthropic Principle, reasoning that our universe is the way it is because if it were any different (with, say, a different speed of light or more mass on the electron) then life at least as we understand it would be impossible, and we wouldn't be here to be asking these big important questions.

If that seems to you as filling but unsatisfying as eating an entire bag of chips, you're not alone. An appeal to a philosophical argument as the ultimate, hard-won result of decades of work into string theory leaves many physicists feeling hollow.

The truth is, by and large most string theorists aren't working on the whole unification thing anymore. Instead, what's captured the interest of the community is an intriguing connection called the AdS/CFT correspondence. No, it's not a new accounting technique, but a proposed relationship between a version of string theory living in a 5-dimensional universe with a negative cosmological constant, and a 4-dimensional conformal field theory on the boundary of that universe.

The end result of all that mass of jargon is that some thorny problems in physics can be treated with the mathematics developed in the decades of investigating string theory. So while this doesn't solve any string theory problems itself, it does at least put all that machinery to useful work, lending a helping hand to investigate many problems from the riddle of black hole information to the exotic physics of quark-gluon plasmas.

And that's certainly something, assuming that the correspondence can be proven and the results based on string theory bear fruit.

But if that's all we get (approximations to what we hope is out there, a landscape of universes, and a toolset to solve a few problems) after decades of work on string theory, is it time to work on something else?

Physicist Brian Greene on learning to focus on the here and now – KCRW

The coronavirus pandemic is a reminder that things can change fast and unexpectedly. As much as we look for stability, things come and go, and we live and die. Theoretical physicist and mathematician Brian Greene explains why understanding the science behind the impermanence in our world can lead to a more fulfilling life.

He explains his theories with KCRW's Jonathan Bastian. This interview has been abbreviated and edited for clarity.

In your most recent book, you write about the concept of impermanence. When did that idea become apparent to you?

Brian Greene: I think at various levels of conscious awareness, we know that we are impermanent. And it hits us in different ways at different times, depending upon where we are mentally, spiritually and what's happening in the world around us.

When I was in college and seriously thinking about what I wanted to do, I had a conversation with a mentor of mine who told me he does mathematics because once you prove a theorem in mathematics, it's true forever, it will never not be true.

That just hit me. It was a powerful moment when I recognized that you can't say that about many things in the world. And that's when I started to really think about what's available in this life that does transcend our own impermanence.

How do you then arrive at the concept of impermanence?

There is this sensibility that if you can uncover the deep laws of the universe, you are touching something that was always true. One of the things I do in the book is explore the degree to which that is actually true. Does a law of physics, does quantum mechanics have any meaning or value or purpose in the absence of human beings, or in the absence of another life form that can contemplate it? What does a deep equation mean if there isn't any conscious awareness to contemplate it?

In the far future, as I argue in the book, it's quite likely there won't be any life forms. And without life forms to contemplate Einstein's equations, his theory of relativity, it's hard for me to see that they have any standing in terms of the permanence that we as living creatures aspire to.

How did you come to grips with this? Did you have some kind of existential awakening?

I definitely went through a dark period from immersing myself in the idea that you are transcending human impermanence, whether it's quantum mechanics or relativity or what have you. That was how I lived my life for many decades. And then to recognize that that perspective is probably not right, that was a shift.

But then I had this other moment in, of all places, a Starbucks. A shift happened inside of me, where I felt a change in perspective from grasping for an ephemeral future to just focusing on the here and now.

...Do what we've heard from mindfulness teachers and sages and philosophers across the ages: focus on the here and now, as that is the only place in which value and meaning can actually have an anchor.

Finding the right quantum materials – MIT News

The Gordon and Betty Moore Foundation has awarded MIT Associate Professor of Physics Joseph G. Checkelsky a $1.7 million Emergent Phenomena in Quantum Systems (EPiQS) Initiative grant to pursue his search for new crystalline materials, known as quantum materials, capable of hosting exotic new quantum phenomena.

Quantum materials have the potential to transform current technologies by supporting new types of electronic and magnetic behavior, including dissipationless transmission of electricity and topological protection of information. Designing and synthesizing robust quantum materials is a key goal of modern-day physics, chemistry, and materials science.

However, this task does not have a straightforward recipe, particularly as many of the most exciting quantum systems are also the most complex. The starting point can be viewed as the periodic table of the elements and the geometrically allowed ways to arrange them in a solid. "The path from there to a new quantum material can be circuitous, to say the least," Checkelsky says.

"In our group we are trying to come up with new methods to find our way to these new quantum systems," he says. "This usually requires a fresh perspective on crystalline motifs."

One example of these unique electronic structures is the kagome crystal lattice formed when atoms of iron (Fe) and tin (Sn) combine into a pattern that looks like a Japanese kagome basket, with a repeating pattern of corner-sharing triangles. Checkelsky, together with Class of 1947 Career Development Assistant Professor of Physics Riccardo Comin, graduate students Linda Ye and Min Gu Kang, and their colleagues reported in 2018 that a compound with a 3-to-2 ratio of iron to tin (Fe3Sn2) generates Dirac fermions, a special kind of electronic state supporting exotic electronic behavior protected by the topology, or geometric structure, of atoms within the material.

More recently, the MIT team and colleagues elsewhere reported in Nature Materials that, in a 1-to-1 iron-tin compound, the symmetry of the kagome lattice is special, simultaneously hosting both infinitely light massless particles (the Dirac fermions) and infinitely heavy particles (which manifest experimentally as flat bands in the electronic structure of the material). These unique electronic structures in iron-tin compounds could be the basis for new topological phases and spintronic devices.

For many years, the idea that a metal with atoms arranged in a kagome lattice of corner-sharing triangles could support unusual electronic states, such as combining both massless and infinitely massive electrons, remained a textbook problem, something that could be solved with equations but had not been experimentally shown in a real material. It was, Checkelsky notes, thought of as a toy model, "something so simplified that it might seem unrealistic that a real lattice would do that. But something about it being so simple helps you cut to the heart of the most interesting physics," he says. "By doing our best to force this into an actual crystal, we managed to bridge that gap from the abstract to the real in a quantum material."

"To try to find new quantum materials is a challenge," Checkelsky says. "Typically for our group, we think about different kinds of lattices that might support these interesting states. The generous support of the Gordon and Betty Moore Foundation will help us pursue new methods to stabilize these materials beyond conventional approaches, giving us a chance to find exciting new materials."

"It is also an opportunity to train people how to find new quantum materials," he says. "This is a process that takes time, but is an important skill in the field of quantum materials and one to which I hope we can contribute."

Last year, Checkelsky led an international team to discover a new type of magnetically driven electrical response in a crystal composed of cerium, aluminum, germanium, and silicon. The researchers call this response singular angular magnetoresistance (SAMR).

Like an old-fashioned clock that chimes at 12 o'clock and at no other position of the hands, the newly discovered magnetoresistance only occurs when the direction, or vector, of the magnetic field is pointed straight in line with the high-symmetry axis in the material's crystal structure. Turn the magnetic field more than a degree away from that axis and the resistance drops precipitously. These results were reported in the journal Science.

This unique effect, which can be attributed to the ordering of the cerium atoms' magnetic moments, occurs at temperatures below 5.6 kelvins (-449.6 degrees Fahrenheit). It differs strongly from the response of typical electronic materials, in which electrical resistance and voltage usually vary smoothly as an applied magnetic field is rotated across the material.

In July 2019, Checkelsky won a Presidential Early Career Award for Scientists and Engineers (PECASE), the highest honor bestowed by the U.S. government to science and engineering professionals in the early stages of their independent research careers.

The Gordon and Betty Moore Foundation fosters pathbreaking scientific discovery, environmental conservation, patient-care improvements, and preservation of the special character of the San Francisco Bay Area. Checkelsky's Moore Foundation EPiQS Initiative Grant No. GBMF9070 is administered by the Materials Research Laboratory. The Materials Research Laboratory serves interdisciplinary groups of MIT faculty, staff, and students supported by industry, foundations, and government agencies to carry out fundamental engineering research on materials. Research topics include energy conversion and storage, quantum materials, spintronics, photonics, metals, integrated microsystems, materials sustainability, solid-state ionics, complex oxide electronic properties, biogels, and functional fibers.

Could quantum machine learning hold the key to treating COVID-19? – Tech Wire Asia

Sundar Pichai, CEO of Alphabet, with one of Google's quantum computers. Source: AFP PHOTO / GOOGLE/HANDOUT

Scientific researchers are hard at work around the planet, feverishly crunching data using the world's most powerful supercomputers in the hopes of a speedier breakthrough in finding a vaccine for the novel coronavirus.

Researchers at Penn State University think that they have hit upon a solution that could greatly accelerate the process of discovering a COVID-19 treatment, employing an innovative hybrid branch of research known as quantum machine learning.

When it comes to a computer science-driven approach to identifying a cure, most methodologies harness machine learning to screen different compounds one at a time to see if they might bond with the virus' main protease, or protein.

This process is arduous and time-consuming, despite the fact that the most powerful computers were actually condensing years (maybe decades) of drug testing into less than two years' time. "Discovering any new drug that can cure a disease is like finding a needle in a haystack," said lead researcher Swaroop Ghosh, the Joseph R. and Janice M. Monkowski Career Development Assistant Professor of Electrical Engineering and Computer Science and Engineering at Penn State.

It is also incredibly expensive. Ghosh says the current pipeline for discovering new drugs can take between five and ten years from the concept stage to being released to the market, and could cost billions in the process.

"High-performance computing such as supercomputers and artificial intelligence (AI) can help accelerate this process by screening billions of chemical compounds quickly to find relevant drug candidates," he elaborated.

"This approach works when enough chemical compounds are available in the pipeline, but unfortunately this is not true for COVID-19. This project will explore quantum machine learning to unlock new capabilities in drug discovery by generating complex compounds quickly."

Quantum machine learning is an emerging field that combines elements of machine learning with quantum physics. Ghosh and his doctoral students had in the past developed a toolset for solving a specific set of problems known as combinatorial optimization problems, using quantum computing.

Drug discovery computation aligns with combinatorial optimization problems, allowing the researchers to tap the same toolset in the hopes of speeding up the process of discovering a cure, in a more cost-effective fashion.
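To make "combinatorial optimization" concrete, here is a toy classical sketch in Python of the kind of subset-selection problem drug screening poses. The fragments and scores are hypothetical, and this is not the Penn State toolset; brute force must scan a space that doubles with every added candidate, which is the scaling quantum approaches target.

```python
# Toy combinatorial optimization in the drug-screening spirit: choose the
# subset of candidate fragments with the best combined score under a size
# budget. Fragments and scores are made up; brute force enumerates all
# subsets, a search space that grows as 2^n.
from itertools import combinations

scores = {"A": 3.1, "B": 1.4, "C": 2.7, "D": 0.9, "E": 2.2}
budget = 3  # at most three fragments per candidate compound

best = max(
    (combo for r in range(1, budget + 1)
     for combo in combinations(scores, r)),
    key=lambda combo: sum(scores[f] for f in combo),
)
print(best, round(sum(scores[f] for f in best), 1))  # ('A', 'C', 'E') 8.0
```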

"Artificial intelligence for drug discovery is a very new area," Ghosh said. "The biggest challenge is finding an unknown solution to the problem by using technologies that are still evolving, that is, quantum computing and quantum machine learning. We are excited about the prospects of quantum computing in addressing a current critical issue and contributing our bit in resolving this grave challenge."

Cliff’s Edge — The Past Hypothesis – Adventist Review

May 9, 2020

CLIFFORD GOLDSTEIN

For decades I have been reading popularized books on quantum physics, relativity (special and general), and cosmology by young men brilliant enough to get doctoral degrees in mathematical physics or theoretical physics or theoretical mathematical physics or whatever, and also to write accessible books that sell in numbers I drool over.

However, as the years roll by (or whatever their physics teaches that time does), it's finally dawning on these wunderkinds what the philosophical premises of their science mean for them, their families, their life's work. After all, according to these premises, the universe that they have so deeply studied is (depending on the math in their equations) either going to tear apart, collapse in on itself, or just flat out burn out.

Enough to make even these demigods wonder: What's it all about? Or if it's about anything at all? Or is it all just as meaningless as their premises imply?

Take, for example, Brian Greene, a professor of physics and mathematics at Columbia University, renowned for groundbreaking discoveries in string theory. Greene has also authored such bestsellers as The Elegant Universe (1999), The Fabric of the Cosmos (2004), The Hidden Reality (2011), and his latest, Until the End of Time: Mind, Matter, and our Search for Meaning in an Evolving Universe (2020).

A plug for Until the End of Time says that through "a series of nested stories that explain distinct but interwoven layers of reality, from quantum mechanics to consciousness to black holes," Greene provides us with "a clearer sense of how we came to be, a finer picture of where we are now, and a firmer understanding of where we are headed."

Really?

Sure, Brian Greene has his conjectures, his speculations, some no doubt greatly influenced by his unchallenged expertise in mathematical physics. But that's all that they are, speculations and conjectures, which are also (I'm afraid) exceedingly limited by his unproven philosophical claim that "without intent or design, without forethought or judgment, without planning or deliberation," the cosmos yields meticulously ordered configurations of particles, from atoms to stars to life.

How this happened, of course, is the big question; what it all means, the bigger one. Nevertheless, he claims that entropy and gravity together are at the heart of how a universe heading toward ever-greater disorder can nevertheless yield and support ordered structures like stars, planets, and people. He writes that "by the grace of random chance, funneled through nature's laws," that is, through gravity and entropy, the universe, life, human consciousness all came into existence. (Grace: that's the word he used!)

Everyone's familiar with gravity, and with entropy, too, though it needs a bit of explaining. Entropy is a statistical principle that describes why cars rust, why our bodies fall apart, and why all things, if left alone, move toward disorder. (Don't put thought or energy into keeping up your abode, and see what happens to it.) Entropy, the quantity whose relentless growth the Second Law of Thermodynamics describes, is the measure of that disorder: low entropy, order; high entropy, disorder. And our universe is moving, inexorably, toward higher entropy, higher disorder.
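For readers who want the formula behind this idea, the standard statistical definition of entropy (Boltzmann's; the column itself never spells it out) makes the counting explicit:

$$ S = k_B \ln W $$

Here W is the number of microscopic arrangements consistent with the macroscopic state, and k_B is Boltzmann's constant: the more ways there are to arrange the parts without changing the big picture, the higher the entropy.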

To use an image that Greene uses, imagine 100 pennies all heads up on a table. By comparison, he writes, if we consider even a slightly different outcome, say one in which we have a single tail (and the other 99 pennies are still all heads), there are a hundred different ways this can happen: the lone tail could be the first coin, or it could be the second coin, or the third, and so on up to the hundredth coin. Getting 99 heads is thus a hundred times easier, a hundred times more likely, than getting all heads.

If you keep going, the ways of getting more tails amid heads keep rising. There are 4,950 ways to get two tails; 161,700 ways to get three tails; 3,921,225 ways to get four tails; and so forth, until the numbers peak at 50 heads and 50 tails. Greene writes that at this point there are about a hundred billion billion billion possible combinations (100,891,344,545,564,193,334,812,497,256 combinations, to be exact).
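These counts are binomial coefficients, C(100, k): the number of ways to place k tails among 100 coins. A few lines of Python (illustrative; not from Greene's book) confirm them:

```python
# The coin counts are binomial coefficients C(100, k): the number of ways
# to choose which k of the 100 coins show tails.
from math import comb  # Python 3.8+

for k in (1, 2, 3, 4, 50):
    print(k, comb(100, k))
# 1 100
# 2 4950
# 3 161700
# 4 3921225
# 50 100891344545564193334812497256
```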

Now, lets move from coins to atoms, the stuff of existence (at least as stuff appears to us when we look at it). A bunch of random atoms are much more likely to remain a bunch of random atoms than to form, say, a cat or a copy of The Iliad, just as 100 random coins on a table are more likely to be in disarray than to be all heads (or tails) up, or even to get real close to either configuration. Things go from order to disorder simply because there are a whole lot more ways to be disordered than ordered.

Fine, but how does this law-like tendency of all things toward disorder, toward higher entropy, lead to all the ordered and organized structures that exist, everything from stars to human consciousness? Greene answers: it's gravity. When there's enough gravity (enough sufficiently concentrated stuff), ordered structures can form, he claims, and he then spends a hunk of his book explaining how it happened.

How successfully Greene makes his case, readers of Until the End of Time can decide for themselves. I want, instead, to look at something he wrote about entropy that, I humbly suggest, presents a major flaw in his thinking. It's what's known as The Past Hypothesis.

Let's go back to the 100 coins on the table, but now in a high-entropy state, a state of high disorder. Suppose, as you were studying why the coins were like that, you developed a theory which required that at first these coins were in a low-entropy state, all heads up, say. Fine. But this leaves open the simple question: How did they get that way? The answer's obvious: some intelligence deliberately arranged the coins into that low-entropy state. How else?

But suppose that an unproven philosophical premise behind the science investigating the coins is that their existence, however it began, came about "without intent or design, without forethought or judgment, without planning or deliberation." You, therefore, would need another explanation for this hypothetical low-entropy, highly ordered state of 100 heads-up coins as an initial condition. (In fact, you probably would have never theorized an intelligence behind it because your philosophical presupposition, from the start, forbade it.)

Let's again move from coins to atoms, the atoms in our universe, which are in a high-entropy state, and getting higher. The problem comes from The Past Hypothesis, which teaches that the universe started out in a state of low entropy.

"A hundred pennies with all heads," writes Greene, "has low entropy and yet admits an immediate explanation: instead of dumping the coins on the table, someone carefully arranged them. But what or who arranged the special low-entropy configuration of the early universe? Without a complete theory of cosmic origins, science can't provide an answer."

"Who" (perhaps a Freudian slip of the computer keys?) or what arranged the special low-entropy configuration of the universe? If 100 coins heads up, a fairly simple configuration no matter how unlikely, needed someone to arrange them, then what about the early conditions of our universe, which must have been much more complex than a mere 100 heads-up coins? To paraphrase Greene: Who or what arranged it that way?

In a line from his book (the line that prompted this column), Greene just shrugged his shoulders at this question and said: "For now, we will simply assume that one way or another, the early universe transitioned into this low-entropy, highly ordered configuration, sparking the bang and allowing us to declare that the rest is history."

One way or another the early universe just happened to be highly ordered? If, in seeking to understand the origins and nature of the 100 coins on the table, you just shrugged off their low-entropy beginnings with, "Well, let's just assume that, somehow, the 100 coins all got heads up," you'd be sneered at. Yet Greene does that with something astronomically more complicated than 100 heads-up coins, the low-entropy state of the early universe.

Too bad Greene, echoing Galileo, Copernicus, Kepler, and Newton, can't say something like: "Look, I am a scientist. I study only natural phenomena, which means that even though, obviously, some intelligence must have created the low-entropy state of the early universe, I don't deal with that but only with what comes after," or the like. Of course, even if inclined to say that, he would be derided, ridiculed, and tarred and feathered as the intellectual equivalent of a flat-earther or Holocaust-denier.

There's a tragic irony, however, in not acknowledging the obvious. Until the End of Time reflects Greene's attempt to come to terms with the fact that, according to his science, every memory of him and of everything that he accomplished, along with the memory of everyone else and of everything that they accomplished, are all going to vanish into eternal oblivion as if never existing or happening to begin with. Yet he wrote about how, in a Starbucks, it hit him that "when you realize the universe will be bereft of stars and planets and things that think, your regard for our era can appreciate toward reverence."

It can? For most people, every conscious moment in our era is overshadowed by the certainty that, because they unfold in a universe that one day will be bereft of stars and planets and things that think, these moments ultimately mean nothing. So how much reverence does nothing deserve? The Hebrew Scripture says that God has put olam (eternity) in our hearts (Eccl. 3:11), and as long as we can envision an olam that steamrolls every memory of us into the dirt as it moves on without us, we are left to flail about in a search for meaning amid a universe that, according to Greene's unproven presuppositions, offers none.

It's painful, because the low-entropy state of the early cosmos points to the only logical past hypothesis: a Creator. This Creator's grace (not "the grace of random chance, funneled through nature's laws," which, after supposedly creating us, destroys us; some grace) promises, for those who accept it, eternal life (John 17:3) in the same olam that the Creator has, yes, put in our hearts.

Clifford Goldstein is editor of the Adult Sabbath School Bible Study Guide. His latest book, Baptizing the Devil: Evolution and the Seduction of Christianity, is available from Pacific Press.

Researchers Have Found a New Way to Convert Waste Heat Into Electricity to Power Small Devices – SciTechDaily

This diagram shows researchers how electrical energy exists in a sample of Fe3Ga. Credit: 2020 Sakai et al

A thin, iron-based generator uses waste heat to provide small amounts of power.

Researchers have found a way to convert heat energy into electricity with a nontoxic material. The material is mostly iron, which is extremely cheap given its relative abundance. A generator based on this material could power small devices such as remote sensors or wearable devices. The material can be made thin, so it could be shaped into various forms.

There's no such thing as a free lunch, or free energy. But if your energy demands are low enough, say for example in the case of a small sensor of some kind, then there is a way to harness heat energy to supply your power without wires or batteries. Research Associate Akito Sakai and group members from his laboratory at the University of Tokyo Institute for Solid State Physics and Department of Physics, led by Professor Satoru Nakatsuji, and from the Department of Applied Physics, led by Professor Ryotaro Arita, have taken steps toward this goal with their innovative iron-based thermoelectric material.

Thermoelectric devices based on the anomalous Nernst effect (left) and the Seebeck effect (right). (V) represents the direction of current, (T) the temperature gradient and (M) the magnetic field. Credit: 2020 Sakai et al

"So far, all the study on thermoelectric generation has focused on the established but limited Seebeck effect," said Nakatsuji. "In contrast, we focused on a relatively less familiar phenomenon called the anomalous Nernst effect (ANE)."

ANE produces a voltage perpendicular to the direction of a temperature gradient across the surface of a suitable material. The phenomenon could help simplify the design of thermoelectric generators and enhance their conversion efficiency if the right materials become more readily available.
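
As a rough, back-of-the-envelope illustration of that geometry (not a calculation from the study; the Nernst coefficient and dimensions below are placeholder values), the transverse voltage scales with the coefficient, the gradient, and the electrode separation:

# V = S_ane * (dT/dx) * L_y: the voltage appears along y for a thermal
# gradient along x. All numbers here are assumed, for illustration only.
S_ane = 4e-6      # V/K, hypothetical anomalous Nernst coefficient
dT_dx = 10.0      # K/m, temperature gradient along x
L_y = 0.05        # m, electrode separation perpendicular to the gradient
V = S_ane * dT_dx * L_y
print(f"Estimated transverse voltage: {V * 1e6:.1f} microvolts")  # 2.0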

A diagram to show the nodal web structure responsible for the anomalous Nernst effect. Credit: 2020 Sakai et al

"We made a material that is 75 percent iron and 25 percent aluminum (Fe3Al) or gallium (Fe3Ga) by a process called doping," said Sakai. "This significantly boosted ANE. We saw a twentyfold jump in voltage compared to undoped samples, which was exciting to see."

This is not the first time the team has demonstrated ANE, but previous experiments used materials less readily available and more expensive than iron. The attraction of this device is partly its low-cost and nontoxic constituents, but also the fact that it can be made in a thin-film form so that it can be molded to suit various applications.

"The thin and flexible structures we can now create could harvest energy more efficiently than generators based on the Seebeck effect," explained Sakai. "I hope our discovery can lead to thermoelectric technologies to power wearable devices, remote sensors in inaccessible places where batteries are impractical, and more."

Until recently, this kind of development in materials science came about mainly through repeated iterations and refinements of experiments, which were both time-consuming and expensive. But the team relied heavily on computational methods for numerical calculations, effectively reducing the time between the initial idea and the proof of success.

"Numerical calculations contributed greatly to our discovery; for example, high-speed automatic calculations helped us find suitable materials to test," said Nakatsuji. "And first-principles calculations based on quantum mechanics shortcut the process of analyzing electronic structures we call nodal webs, which are crucial for our experiments."

"Up until now this kind of numerical calculation was prohibitively difficult," said Arita. "So we hope that not only our materials, but also our computational techniques, can be useful tools for others. We are all keen to one day see devices based on our discovery."


Reference: "Iron-based binary ferromagnets for transverse thermoelectric conversion" by Akito Sakai, Susumu Minami, Takashi Koretsune, Taishi Chen, Tomoya Higo, Yangming Wang, Takuya Nomoto, Motoaki Hirayama, Shinji Miwa, Daisuke Nishio-Hamane, Fumiyuki Ishii, Ryotaro Arita and Satoru Nakatsuji, 27 April 2020, Nature. DOI: 10.1038/s41586-020-2230-z

This work is partially supported by CREST (JPMJCR18T3), PRESTO (JPMJPR15N5), Japan Science and Technology Agency, by Grants-in-Aid for Scientific Research on Innovative Areas (JP15H05882 and JP15H05883) from the Ministry of Education, Culture, Sports, Science, and Technology of Japan, and by Grants-in-Aid for Scientific Research (JP16H02209, JP16H06345, JP19H00650) from the Japan Society for the Promotion of Science (JSPS). The work for first-principles calculation was supported in part by the JSPS Grant-in-Aid for Scientific Research on Innovative Areas (JP18H04481 and JP19H05825) and by MEXT as a social and scientific priority issue (Creation of new functional devices and high-performance materials to support next-generation industries) to be tackled by using the post-K computer (hp180206 and hp190169).


Why Self-Awareness and Communication Are Key for Self-Taught Players and Luthiers – Premier Guitar

With his signature guitar, built by our columnist, at the ready, Japanese artist Jinmo publicly celebrates each time he completes a deadline with a different pipe and the words, "Banzai! I'm free!"

It's hard to believe, but this is my 100th column for Premier Guitar. So, this month, I'd like to allow myself to get a bit more personal and talk a little about what it means to be on this side of the desk. When I first started writing this column, it had a huge impact on my workflow by adding two additional deadlines to my already busy monthly schedule: an early one to decide on the topic for the month, and the submission deadline for PG. I'm sure every colleague at PG knows the feeling of panic when searching for a subject and then collecting all the needed information with a deadline looming. I was certain I couldn't manage it for more than six months before needing a break. Well, here we are, approaching nine years.

It's no secret that I'm not an expert when it comes to vintage stuff, but often, historical contexts play an important role in why things have developed in a specific direction. The amount of information out there is vast, and it's easy to overlook or misinterpret certain details when researching decades of developments and products. I feel pretty safe when it comes to physics, but I'm also aware of the massive amount of collective expertise among PG readers regarding many topics. Luckily, I haven't caused (or don't know of) any remarkable shit storms so far!

We're all learning. Autodidacticism is self-learning: self-taught education without the guidance of masters such as teachers and professors, or institutions like schools and universities. Interestingly, the number of autodidacts among musicians and luthiers is huge. But what does this mean for our expertise and skills?

Luckily, making and hearing music has such a high emotional value that relatively modest self-taught playing skills can create rock-star fame. Similarly, simply knowing how to work with wood can result in a good instrument, but, in both cases, it's more by accident than on purpose.


Some argue that self-teaching is the ideal and only way of keeping a free mind, and that it often results in outsider art. However, self-learning can easily turn into cherry-picking while quietly skipping all the difficult, unpleasant, and toilsome parts. It's worth reminding self-learners about the dangers of knowledge gaps and the resulting risk of failing to correctly connect the dots.

It's like a friend who wants to study quantum mechanics but insists on skipping all classical physics. (As if there is any sort of real understanding in quantum mechanics anyway!) Or the one who likes to study astrophysics without the basic ballistics and equations of motion in gravity fields. It's pretty obvious that this kind of learning will end in dilettantism. As applied to music, this is exactly what created the outsider genre, synonymous with self-taught, untrained, naive, and primitive.

Somehow, we are all self-teaching in certain areas of our lives, but there is a line where it becomes involuntarily comical, due to a lack of self-awareness, an inability to judge your own standing, and a lack of communication. Communicating with others is like getting your knowledge tested. A good example would be a luthier and marketing expert talking about physics and the acoustical outcome of their instruments, or me writing columns about vintage instruments.

Nobody can reach an expert level in all areas, so at least be aware of that, especially once you have professional ambitions as a musician or a luthier. Otherwise, proclamations like "we use roasted maple for the neck, as the resonances are hardened" in a marketing video, or "there is no F# on a bass" from a self-taught bassist, can easily backfire.

I'm here in hopes of helping to raise your knowledge about all things bass, and I look forward to continuing to do so. Thank you for your continued reading and commenting!


What Is Quantum Mechanics? Quantum Physics Defined …

Quantum mechanics is the branch of physics relating to the very small.

It results in what may appear to be some very strange conclusions about the physical world. At the scale of atoms and electrons, many of the equations of classical mechanics, which describe how things move at everyday sizes and speeds, cease to be useful. In classical mechanics, objects exist in a specific place at a specific time. However, in quantum mechanics, objects instead exist in a haze of probability; they have a certain chance of being at point A, another chance of being at point B and so on.

Quantum mechanics (QM) developed over many decades, beginning as a set of controversial mathematical explanations of experiments that the math of classical mechanics could not explain. It began at the turn of the 20th century, around the same time that Albert Einstein published his theory of relativity, a separate mathematical revolution in physics that describes the motion of things at high speeds. Unlike relativity, however, the origins of QM cannot be attributed to any one scientist. Rather, multiple scientists contributed to a foundation of three revolutionary principles that gradually gained acceptance and experimental verification between 1900 and 1930. They are:

Quantized properties: Certain properties, such as position, speed and color, can sometimes only occur in specific, set amounts, much like a dial that "clicks" from number to number. This challenged a fundamental assumption of classical mechanics, which said that such properties should exist on a smooth, continuous spectrum. To describe the idea that some properties "clicked" like a dial with specific settings, scientists coined the word "quantized."

Particles of light: Light can sometimes behave as a particle. This was initially met with harsh criticism, as it ran contrary to 200 years of experiments showing that light behaved as a wave, much like ripples on the surface of a calm lake. Light behaves similarly in that it bounces off walls and bends around corners, and that the crests and troughs of the wave can add up or cancel out. Added wave crests result in brighter light, while waves that cancel out produce darkness. A light source can be thought of as a ball on a stick being rhythmically dipped in the center of a lake. The color emitted corresponds to the distance between the crests, which is determined by the speed of the ball's rhythm.
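
A small Python sketch (illustrative only) shows the crest-and-trough bookkeeping described above: aligned waves double in amplitude, while a half-wavelength offset cancels them:

import numpy as np

x = np.linspace(0, 4 * np.pi, 1000)
wave = np.sin(x)
in_phase = wave + np.sin(x)              # crests align: brighter light
out_of_phase = wave + np.sin(x + np.pi)  # crest meets trough: darkness
print(np.max(np.abs(in_phase)))          # ~2.0, doubled amplitude
print(np.max(np.abs(out_of_phase)))      # ~0.0, complete cancellation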

Waves of matter: Matter can also behave as a wave. This ran counter to the roughly 30 years of experiments showing that matter (such as electrons) exists as particles.

In 1900, German physicist Max Planck sought to explain the distribution of colors emitted over the spectrum in the glow of red-hot and white-hot objects, such as light-bulb filaments. When making physical sense of the equation he had derived to describe this distribution, Planck realized it implied that combinations of only certain colors (albeit a great number of them) were emitted, specifically those that were whole-number multiples of some base value. Somehow, colors were quantized! This was unexpected because light was understood to act as a wave, meaning that values of color should be a continuous spectrum. What could be forbidding atoms from producing the colors between these whole-number multiples? This seemed so strange that Planck regarded quantization as nothing more than a mathematical trick. According to Helge Kragh in his 2000 article in Physics World magazine, "Max Planck, the Reluctant Revolutionary," "If a revolution occurred in physics in December 1900, nobody seemed to notice it. Planck was no exception …"
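
Planck's "whole-number multiples of some base value" can be made concrete in a few lines of Python (the frequency chosen here is an arbitrary visible-light value, used only for illustration):

h = 6.63e-34          # J*s, Planck's constant
f = 5.0e14            # Hz, an arbitrary visible-light frequency
for n in range(1, 5): # allowed energies are whole multiples of h*f
    print(f"n = {n}: E = {n * h * f:.2e} J")
# Intermediate values such as 1.5*h*f are forbidden in Planck's scheme.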

Planck's equation also contained a number that would later become very important to future development of QM; today, it's known as "Planck's Constant."

Quantization helped to explain other mysteries of physics. In 1907, Einstein used Planck's hypothesis of quantization to explain why the temperature of a solid changed by different amounts if you put the same amount of heat into the material but changed the starting temperature.

Since the early 1800s, the science of spectroscopy had shown that different elements emit and absorb specific colors of light called "spectral lines." Though spectroscopy was a reliable method for determining the elements contained in objects such as distant stars, scientists were puzzled about why each element gave off those specific lines in the first place. In 1888, Johannes Rydberg derived an equation that described the spectral lines emitted by hydrogen, though nobody could explain why the equation worked. This changed in 1913 when Niels Bohr applied Planck's hypothesis of quantization to Ernest Rutherford's 1911 "planetary" model of the atom, which postulated that electrons orbited the nucleus the same way that planets orbit the sun. According to Physics 2000 (a site from the University of Colorado), Bohr proposed that electrons were restricted to "special" orbits around an atom's nucleus. They could "jump" between special orbits, and the energy produced by the jump caused specific colors of light, observed as spectral lines. Though quantized properties were invented as a mere mathematical trick, they explained so much that they became the founding principle of QM.

In 1905, Einstein published a paper, "Concerning an Heuristic Point of View Toward the Emission and Transformation of Light," in which he envisioned light traveling not as a wave, but as some manner of "energy quanta." This packet of energy, Einstein suggested, could "be absorbed or generated only as a whole," specifically when an atom "jumps" between quantized vibration rates. This would also apply, as would be shown a few years later, when an electron "jumps" between quantized orbits. Under this model, Einstein's "energy quanta" contained the energy difference of the jump; when divided by Planck's constant, that energy difference determined the color of light carried by those quanta.

With this new way to envision light, Einstein offered insights into the behavior of nine different phenomena, including the specific colors that Planck described being emitted from a light-bulb filament. It also explained how certain colors of light could eject electrons off metal surfaces, a phenomenon known as the "photoelectric effect." However, Einstein wasn't wholly justified in taking this leap, said Stephen Klassen, an associate professor of physics at the University of Winnipeg. In a 2008 paper, "The Photoelectric Effect: Rehabilitating the Story for the Physics Classroom," Klassen states that Einstein's energy quanta aren't necessary for explaining all of those nine phenomena. Certain mathematical treatments of light as a wave are still capable of describing both the specific colors that Planck described being emitted from a light-bulb filament and the photoelectric effect. Indeed, in Einstein's controversial winning of the 1921 Nobel Prize, the Nobel committee only acknowledged "his discovery of the law of the photoelectric effect," which specifically did not rely on the notion of energy quanta.

Roughly two decades after Einstein's paper, the term "photon" was popularized for describing energy quanta, thanks to the 1923 work of Arthur Compton, who showed that light scattered by an electron beam changed in color. This showed that particles of light (photons) were indeed colliding with particles of matter (electrons), thus confirming Einstein's hypothesis. By now, it was clear that light could behave both as a wave and a particle, placing light's "wave-particle duality" into the foundation of QM.

Since the discovery of the electron in 1896, evidence that all matter existed in the form of particles was slowly building. Still, the demonstration of light's wave-particle duality made scientists question whether matter was limited to acting only as particles. Perhaps wave-particle duality could ring true for matter as well? The first scientist to make substantial headway with this reasoning was a French physicist named Louis de Broglie. In 1924, de Broglie used the equations of Einstein's theory of special relativity to show that particles can exhibit wave-like characteristics, and that waves can exhibit particle-like characteristics. Then in 1925, two scientists, working independently and using separate lines of mathematical thinking, applied de Broglie's reasoning to explain how electrons whizzed around in atoms (a phenomenon that was unexplainable using the equations of classical mechanics). In Germany, physicist Werner Heisenberg (teaming with Max Born and Pascual Jordan) accomplished this by developing "matrix mechanics." Austrian physicist Erwin Schrödinger developed a similar theory called "wave mechanics." Schrödinger showed in 1926 that these two approaches were equivalent (though Swiss physicist Wolfgang Pauli sent an unpublished result to Jordan showing that matrix mechanics was more complete).

The Heisenberg–Schrödinger model of the atom, in which each electron acts as a wave (sometimes referred to as a "cloud") around the nucleus of an atom, replaced the Rutherford–Bohr model. One stipulation of the new model was that the ends of the wave that forms an electron must meet. In "Quantum Mechanics in Chemistry, 3rd Ed." (W.A. Benjamin, 1981), Melvin Hanna writes, "The imposition of the boundary conditions has restricted the energy to discrete values." A consequence of this stipulation is that only whole numbers of crests and troughs are allowed, which explains why some properties are quantized. In the Heisenberg–Schrödinger model of the atom, electrons obey a "wave function" and occupy "orbitals" rather than orbits. Unlike the circular orbits of the Rutherford–Bohr model, atomic orbitals have a variety of shapes ranging from spheres to dumbbells to daisies.

In 1927, Walter Heitler and Fritz London further developed wave mechanics to show how atomic orbitals could combine to form molecular orbitals, effectively showing why atoms bond to one another to form molecules. This was yet another problem that had been unsolvable using the math of classical mechanics. These insights gave rise to the field of "quantum chemistry."

Also in 1927, Heisenberg made another major contribution to quantum physics. He reasoned that since matter acts as waves, some properties, such as an electron's position and speed, are "complementary," meaning there's a limit (related to Planck's constant) to how well the precision of each property can be known. Under what would come to be called "Heisenberg's uncertainty principle," it was reasoned that the more precisely an electron's position is known, the less precisely its speed can be known, and vice versa. This uncertainty principle applies to everyday-size objects as well, but is not noticeable because the lack of precision is extraordinarily tiny. According to Dave Slaven of Morningside College (Sioux City, IA), if a baseball's speed is known to within a precision of 0.1 mph, the maximum precision to which it is possible to know the ball's position is 0.000000000000000000000000000008 millimeters.
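
That figure can be checked directly from the uncertainty relation Δx·Δp ≥ ħ/2 (the baseball mass below is an assumed typical value):

hbar = 1.055e-34            # J*s, reduced Planck constant
m = 0.145                   # kg, assumed mass of a baseball
dv = 0.1 * 0.44704          # 0.1 mph converted to m/s
dx = hbar / (2 * m * dv)    # minimum position uncertainty
print(f"dx >= {dx * 1e3:.0e} mm")   # about 8e-30 mm, matching the figure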

The principles of quantization, wave-particle duality and the uncertainty principle ushered in a new era for QM. In 1927, Paul Dirac applied a quantum understanding of electric and magnetic fields to give rise to the study of "quantum field theory" (QFT), which treated particles (such as photons and electrons) as excited states of an underlying physical field. Work in QFT continued for a decade until scientists hit a roadblock: Many equations in QFT stopped making physical sense because they produced results of infinity. After a decade of stagnation, Hans Bethe made a breakthrough in 1947 using a technique called "renormalization." Here, Bethe realized that all of the infinite results traced back to two phenomena (specifically "electron self-energy" and "vacuum polarization"), and that the observed values of electron mass and electron charge could be used to make all the infinities disappear.

Since the breakthrough of renormalization, QFT has served as the foundation for developing quantum theories about the four fundamental forces of nature: 1) electromagnetism, 2) the weak nuclear force, 3) the strong nuclear force and 4) gravity. The first insight provided by QFT was a quantum description of electromagnetism through "quantum electrodynamics" (QED), which made strides in the late 1940s and early 1950s. Next was a quantum description of the weak nuclear force, which was unified with electromagnetism to build "electroweak theory" (EWT) throughout the 1960s. Finally came a quantum treatment of the strong nuclear force using "quantum chromodynamics" (QCD) in the 1960s and 1970s. The theories of QED, EWT and QCD together form the basis of the Standard Model of particle physics. Unfortunately, QFT has yet to produce a quantum theory of gravity. That quest continues today in the studies of string theory and loop quantum gravity.

Robert Coolman is a graduate researcher at the University of Wisconsin-Madison, finishing up his Ph.D. in chemical engineering. He writes about math, science and how they interact with history.


Introduction to quantum mechanics – Wikipedia

Non-technical introduction to quantum physics

Quantum mechanics is the science of the very small. It explains the behavior of matter and its interactions with energy on the scale of atomic and subatomic particles. By contrast, classical physics explains matter and energy only on a scale familiar to human experience, including the behavior of astronomical bodies such as the Moon. Classical physics is still used in much of modern science and technology. However, towards the end of the 19th century, scientists discovered phenomena in both the large (macro) and the small (micro) worlds that classical physics could not explain.[1] The desire to resolve inconsistencies between observed phenomena and classical theory led to two major revolutions in physics that created a shift in the original scientific paradigm: the theory of relativity and the development of quantum mechanics.[2] This article describes how physicists discovered the limitations of classical physics and developed the main concepts of the quantum theory that replaced it in the early decades of the 20th century. It describes these concepts in roughly the order in which they were first discovered. For a more complete history of the subject, see History of quantum mechanics.

Light behaves in some aspects like particles and in other aspects like waves. Matter, the "stuff" of the universe consisting of particles such as electrons and atoms, exhibits wavelike behavior too. Some light sources, such as neon lights, give off only certain specific frequencies of light, a small set of distinct pure colors determined by neon's atomic structure. Quantum mechanics shows that light, along with all other forms of electromagnetic radiation, comes in discrete units, called photons, and predicts its spectral energies (corresponding to pure colors), and the intensities of its light beams. A single photon is a quantum, or smallest observable particle, of the electromagnetic field. A partial photon is never experimentally observed. More broadly, quantum mechanics shows that many properties of objects, such as position, speed, and angular momentum, that appeared continuous in the zoomed-out view of classical mechanics, turn out to be (in the very tiny, zoomed-in scale of quantum mechanics) quantized. Such properties of elementary particles are required to take on one of a set of small, discrete allowable values, and since the gap between these values is also small, the discontinuities are only apparent at very tiny (atomic) scales.

Many aspects of quantum mechanics are counterintuitive[3] and can seem paradoxical because they describe behavior quite different from that seen at larger scales. In the words of quantum physicist Richard Feynman, quantum mechanics deals with "nature as She is – absurd".[4]

For example, the uncertainty principle of quantum mechanics means that the more closely one pins down one measurement (such as the position of a particle), the less accurate another complementary measurement pertaining to the same particle (such as its speed) must become.

Another example is entanglement, in which a measurement of any two-valued state of a particle (such as light polarized up or down) made on either of two "entangled" particles that are very far apart causes a subsequent measurement on the other particle to always be the other of the two values (such as polarized in the opposite direction).

A final example is superfluidity, in which a container of liquid helium, cooled down to near absolute zero in temperature, spontaneously flows (slowly) up and over the opening of its container, against the force of gravity.

Thermal radiation is electromagnetic radiation emitted from the surface of an object due to the object's internal energy. If an object is heated sufficiently, it starts to emit light at the red end of the spectrum, as it becomes red hot.

Heating it further causes the color to change from red to yellow, white, and blue, as it emits light at increasingly shorter wavelengths (higher frequencies). A perfect emitter is also a perfect absorber: when it is cold, such an object looks perfectly black, because it absorbs all the light that falls on it and emits none. Consequently, an ideal thermal emitter is known as a black body, and the radiation it emits is called black-body radiation.

In the late 19th century, thermal radiation had been fairly well characterized experimentally.[note 1] However, classical physics led to the Rayleigh–Jeans law, which agrees with experimental results well at low frequencies, but strongly disagrees at high frequencies. Physicists searched for a single theory that explained all the experimental results.

The first model that was able to explain the full spectrum of thermal radiation was put forward by Max Planck in 1900.[5] He proposed a mathematical model in which the thermal radiation was in equilibrium with a set of harmonic oscillators. To reproduce the experimental results, he had to assume that each oscillator emitted an integer number of units of energy at its single characteristic frequency, rather than being able to emit any arbitrary amount of energy. In other words, the energy emitted by an oscillator was quantized.[note 2] The quantum of energy for each oscillator, according to Planck, was proportional to the frequency of the oscillator; the constant of proportionality is now known as the Planck constant. The Planck constant, usually written as h, has the value of 6.63×10⁻³⁴ J·s. So, the energy E of an oscillator of frequency f is given by E = nhf, where n is an integer.

To change the color of such a radiating body, it is necessary to change its temperature. Planck's law explains why: increasing the temperature of a body allows it to emit more energy overall, and means that a larger proportion of the energy is towards the violet end of the spectrum.
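
Wien's displacement law, a consequence of Planck's law, captures this color shift: the peak emission wavelength is λ_peak = b/T. A short numerical sketch (the temperatures are arbitrary example values):

b = 2.898e-3                  # m*K, Wien's displacement constant
for T in (3000, 6000, 10000): # kelvin; arbitrary example temperatures
    print(f"T = {T:5d} K -> peak emission at {b / T * 1e9:6.0f} nm")
# 3000 K peaks in the infrared; 10000 K peaks in the ultraviolet.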

Planck's law was the first quantum theory in physics, and Planck won the Nobel Prize in 1918 "in recognition of the services he rendered to the advancement of Physics by his discovery of energy quanta".[7] At the time, however, Planck's view was that quantization was purely a heuristic mathematical construct, rather than (as is now believed) a fundamental change in our understanding of the world.[8]

In 1905, Albert Einstein took an extra step. He suggested that quantization was not just a mathematical construct, but that the energy in a beam of light actually occurs in individual packets, which are now called photons.[9] The energy of a single photon of light of frequency f is given by the frequency multiplied by Planck's constant h (an extremely tiny positive number): E = hf.

For centuries, scientists had debated between two possible theories of light: was it a wave or did it instead comprise a stream of tiny particles? By the 19th century, the debate was generally considered to have been settled in favor of the wave theory, as it was able to explain observed effects such as refraction, diffraction, interference, and polarization.[10] James Clerk Maxwell had shown that electricity, magnetism and light are all manifestations of the same phenomenon: the electromagnetic field. Maxwell's equations, which are the complete set of laws of classical electromagnetism, describe light as waves: a combination of oscillating electric and magnetic fields. Because of the preponderance of evidence in favor of the wave theory, Einstein's ideas were met initially with great skepticism. Eventually, however, the photon model became favored. One of the most significant pieces of evidence in its favor was its ability to explain several puzzling properties of the photoelectric effect, described in the following section. Nonetheless, the wave analogy remained indispensable for helping to understand other characteristics of light: diffraction, refraction, and interference.

In 1887, Heinrich Hertz observed that when light with sufficient frequency hits a metallic surface, the surface emits electrons.[11] In 1902, Philipp Lenard discovered that the maximum possible energy of an ejected electron is related to the frequency of the light, not to its intensity: if the frequency is too low, no electrons are ejected regardless of the intensity. Strong beams of light toward the red end of the spectrum might produce no electrical potential at all, while weak beams of light toward the violet end of the spectrum would produce higher and higher voltages. The lowest frequency of light that can cause electrons to be emitted, called the threshold frequency, is different for different metals. This observation is at odds with classical electromagnetism, which predicts that the electron's energy should be proportional to the intensity of the incident radiation.[12]:24 So when physicists first discovered devices exhibiting the photoelectric effect, they initially expected that a higher intensity of light would produce a higher voltage from the photoelectric device.

Einstein explained the effect by postulating that a beam of light is a stream of particles ("photons") and that, if the beam is of frequency f, then each photon has an energy equal to hf.[11] An electron is likely to be struck only by a single photon, which imparts at most an energy hf to the electron.[11] Therefore, the intensity of the beam has no effect[note 3] and only its frequency determines the maximum energy that can be imparted to the electron.[11]

To explain the threshold effect, Einstein argued that it takes a certain amount of energy, called the work function and denoted by Φ, to remove an electron from the metal.[11] This amount of energy is different for each metal. If the energy of the photon is less than the work function, then it does not carry sufficient energy to remove the electron from the metal. The threshold frequency, f0, is the frequency of a photon whose energy is equal to the work function: hf0 = Φ.

If f is greater than f0, the energy hf is enough to remove an electron. The ejected electron has a kinetic energy, EK, which is, at most, equal to the photon's energy minus the energy needed to dislodge the electron from the metal: EK = hf − Φ.
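
Putting numbers to these equations (the 2.1 eV work function below is an assumed, roughly cesium-like value, used only for illustration):

h = 6.626e-34          # J*s, Planck's constant
eV = 1.602e-19         # joules per electronvolt
phi = 2.1 * eV         # assumed work function of the metal
f0 = phi / h           # threshold frequency: h*f0 = phi
print(f"f0 = {f0:.2e} Hz")            # ~5.1e14 Hz

f = 7.5e14             # Hz, violet light, above threshold
EK = h * f - phi       # maximum kinetic energy of the ejected electron
print(f"EK = {EK / eV:.2f} eV")       # ~1.0 eV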

Einstein's description of light as being composed of particles extended Planck's notion of quantized energy, which is that a single photon of a given frequency, f, delivers an invariant amount of energy, hf. In other words, individual photons can deliver more or less energy, but only depending on their frequencies. In nature, single photons are rarely encountered. The Sun and emission sources available in the 19th century emit vast numbers of photons every second, and so the importance of the energy carried by each individual photon was not obvious. Einstein's idea that the energy contained in individual units of light depends on their frequency made it possible to explain experimental results that had seemed counterintuitive. However, although the photon is a particle, it was still being described as having the wave-like property of frequency. Effectively, the account of light as a particle is insufficient, and its wave-like nature is still required.[13][note 4]

The relationship between the frequency of electromagnetic radiation and the energy of each individual photon is why ultraviolet light can cause sunburn, but visible or infrared light cannot. A photon of ultraviolet light delivers a high amount of energyenough to contribute to cellular damage such as occurs in a sunburn. A photon of infrared light delivers less energyonly enough to warm one's skin. So, an infrared lamp can warm a large surface, perhaps large enough to keep people comfortable in a cold room, but it cannot give anyone a sunburn.[15]
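
The per-photon arithmetic behind the sunburn example, using E = hc/λ (the wavelengths are representative values for each band):

h, c = 6.626e-34, 3.0e8   # Planck's constant (J*s), speed of light (m/s)
for name, lam in (("ultraviolet", 300e-9), ("infrared", 1000e-9)):
    print(f"{name}: {h * c / lam:.1e} J per photon")
# ~6.6e-19 J (UV) vs ~2.0e-19 J (IR): each UV photon packs ~3.3x the energy.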

All photons of the same frequency have identical energy, and all photons of different frequencies have proportionally different energies (Ephoton = hf).[16] However, although the energy imparted by photons is invariant at any given frequency, the initial energy state of the electrons in a photoelectric device prior to absorption of light is not necessarily uniform. Anomalous results may occur in the case of individual electrons. For instance, an electron that was already excited above the equilibrium level of the photoelectric device might be ejected when it absorbed uncharacteristically low-frequency illumination. Statistically, however, the characteristic behavior of a photoelectric device reflects the behavior of the vast majority of its electrons, which are at their equilibrium level. This point is helpful in comprehending the distinction between the study of individual particles in quantum dynamics and the study of massive particles in classical physics.[citation needed]

By the dawn of the 20th century, evidence required a model of the atom with a diffuse cloud of negatively charged electrons surrounding a small, dense, positively charged nucleus. These properties suggested a model in which electrons circle around the nucleus like planets orbiting a sun.[note 5] However, it was also known that the atom in this model would be unstable: according to classical theory, orbiting electrons are undergoing centripetal acceleration, and should therefore give off electromagnetic radiation, the loss of energy also causing them to spiral toward the nucleus, colliding with it in a fraction of a second.

A second, related puzzle was the emission spectrum of atoms. When a gas is heated, it gives off light only at discrete frequencies. For example, the visible light given off by hydrogen consists of four different colors, as shown in the picture below. The intensity of the light at different frequencies is also different. By contrast, white light consists of a continuous emission across the whole range of visible frequencies. By the end of the nineteenth century, a simple rule known as Balmer's formula showed how the frequencies of the different lines related to each other, though without explaining why this was, or making any prediction about the intensities. The formula also predicted some additional spectral lines in ultraviolet and infrared light that had not been observed at the time. These lines were later observed experimentally, raising confidence in the value of the formula.

The mathematical formula describing hydrogen's emission spectrum

In 1885 the Swiss mathematician Johann Balmer discovered that each wavelength λ (lambda) in the visible spectrum of hydrogen is related to some integer n by the equation λ = B·n²/(n² − 4), n = 3, 4, 5, 6, …,

where B is a constant that Balmer determined to be equal to 364.56 nm.

In 1888 Johannes Rydberg generalized and greatly increased the explanatory utility of Balmer's formula. He predicted that λ is related to two integers n and m according to what is now known as the Rydberg formula:[17] 1/λ = R(1/m² − 1/n²),

where R is the Rydberg constant, equal to 0.0110 nm⁻¹, and n must be greater than m.

Rydberg's formula accounts for the four visible wavelengths of hydrogen by setting m = 2 and n = 3, 4, 5, 6. It also predicts additional wavelengths in the emission spectrum: for m = 1 and for n > 1, the emission spectrum should contain certain ultraviolet wavelengths, and for m = 3 and n > 3, it should also contain certain infrared wavelengths. Experimental observation of these wavelengths came two decades later: in 1908 Friedrich Paschen found some of the predicted infrared wavelengths, and in 1914 Theodore Lyman found some of the predicted ultraviolet wavelengths.[17]
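
The visible hydrogen lines drop straight out of the Rydberg formula; a short check using the rounded constant quoted above (small offsets from the textbook values reflect that rounding):

R = 0.0110                    # nm^-1, Rydberg constant as quoted above
m = 2                         # the Balmer (visible) series
for n in (3, 4, 5, 6):
    lam = 1 / (R * (1 / m**2 - 1 / n**2))
    print(f"n = {n}: {lam:.0f} nm")
# Roughly 656 (red), 486, 434 and 410 nm (violet).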

Both Balmer's and Rydberg's formulas involve integers: in modern terms, they imply that some property of the atom is quantized. Understanding exactly what this property was, and why it was quantized, was a major part of the development of quantum mechanics, as shown in the rest of this article.

In 1913 Niels Bohr proposed a new model of the atom that included quantized electron orbits: electrons still orbit the nucleus much as planets orbit around the sun, but they are permitted to inhabit only certain orbits, not to orbit at any arbitrary distance.[18] When an atom emitted (or absorbed) energy, the electron did not move in a continuous trajectory from one orbit around the nucleus to another, as might be expected classically. Instead, the electron would jump instantaneously from one orbit to another, giving off the emitted light in the form of a photon.[19] The possible energies of photons given off by each element were determined by the differences in energy between the orbits, and so the emission spectrum for each element would contain a number of lines.[20]

Starting from only one simple assumption about the rule that the orbits must obey, the Bohr model was able to relate the observed spectral lines in the emission spectrum of hydrogen to previously known constants. In Bohr's model the electron was not allowed to emit energy continuously and crash into the nucleus: once it was in the closest permitted orbit, it was stable forever. Bohr's model didn't explain why the orbits should be quantized in that way, nor was it able to make accurate predictions for atoms with more than one electron, or to explain why some spectral lines are brighter than others.

Some fundamental assumptions of the Bohr model were soon proven wrong, but the key result, that the discrete lines in emission spectra are due to some property of the electrons in atoms being quantized, is correct. The way that the electrons actually behave is strikingly different from Bohr's atom, and from what we see in the world of our everyday experience; this modern quantum mechanical model of the atom is discussed below.

A more detailed explanation of the Bohr model

Bohr theorized that the angular momentum, L, of an electron is quantized: L = nh/2π,

where n is an integer and h is the Planck constant. Starting from this assumption, Coulomb's law and the equations of circular motion show that an electron with n units of angular momentum orbits a proton at a distance r given by r = n²h²/(4π² ke m e²),

where ke is the Coulomb constant, m is the mass of an electron, and e is the charge on an electron. For simplicity this is written as r = n²a0,

where a0, called the Bohr radius, is equal to 0.0529 nm. The Bohr radius is the radius of the smallest allowed orbit.

The energy of the electron[note 6] can also be calculated, and is given by En = −ke e²/(2a0 n²).

Thus Bohr's assumption that angular momentum is quantized means that an electron can inhabit only certain orbits around the nucleus, and that it can have only certain energies. A consequence of these constraints is that the electron does not crash into the nucleus: it cannot continuously emit energy, and it cannot come closer to the nucleus than a0 (the Bohr radius).

An electron loses energy by jumping instantaneously from its original orbit to a lower orbit; the extra energy is emitted in the form of a photon. Conversely, an electron that absorbs a photon gains energy, hence it jumps to an orbit that is farther from the nucleus.

Each photon from glowing atomic hydrogen is due to an electron moving from a higher orbit, with radius rn, to a lower orbit, rm. The energy E of this photon is the difference in the energies En and Em of the electron: E = En − Em.

Since Planck's equation shows that the photon's energy is related to its wavelength by E = hc/λ, the wavelengths of light that can be emitted are given by 1/λ = (En − Em)/hc.

This equation has the same form as the Rydberg formula, and predicts that the constant R should be given by R = ke e²/(2a0 hc).

Therefore, the Bohr model of the atom can predict the emission spectrum of hydrogen in terms of fundamental constants.[note 7] However, it was not able to make accurate predictions for multi-electron atoms, or to explain why some spectral lines are brighter than others.
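
Those fundamental-constant predictions are easy to verify numerically; here is a sketch using standard values for the constants:

import math

h = 6.626e-34    # J*s, Planck constant
ke = 8.988e9     # N*m^2/C^2, Coulomb constant
m = 9.109e-31    # kg, electron mass
e = 1.602e-19    # C, elementary charge
c = 2.998e8      # m/s, speed of light

a0 = h**2 / (4 * math.pi**2 * ke * m * e**2)      # Bohr radius
E1 = -ke * e**2 / (2 * a0)                        # ground-state energy
R = ke * e**2 / (2 * a0 * h * c)                  # Rydberg constant
print(f"a0 = {a0 * 1e9:.4f} nm")                  # ~0.0529 nm
print(f"E1 = {E1 / e:.1f} eV")                    # ~-13.6 eV
print(f"R  = {R * 1e-9:.4f} nm^-1")               # ~0.0110 nm^-1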

Just as light has both wave-like and particle-like properties, matter also has wave-like properties.[21]

Matter behaving as a wave was first demonstrated experimentally for electrons: a beam of electrons can exhibit diffraction, just like a beam of light or a water wave.[note 8] Similar wave-like phenomena were later shown for atoms and even molecules.

The wavelength, λ, associated with any object is related to its momentum, p, through the Planck constant, h:[22][23] λ = h/p.

The relationship, called the de Broglie hypothesis, holds for all types of matter: all matter exhibits properties of both particles and waves.
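
A two-line comparison shows why the wave nature of matter goes unnoticed at everyday scales (the electron speed and the baseball mass and speed are assumed example values):

h = 6.626e-34                                        # J*s, Planck constant
print(f"electron: {h / (9.109e-31 * 2.0e6):.1e} m")  # ~3.6e-10 m, atom-sized
print(f"baseball: {h / (0.145 * 40.0):.1e} m")       # ~1.1e-34 m, undetectable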

The concept of wave–particle duality says that neither the classical concept of "particle" nor of "wave" can fully describe the behavior of quantum-scale objects, either photons or matter. Wave–particle duality is an example of the principle of complementarity in quantum physics.[24][25][26][27][28] An elegant example of wave–particle duality, the double-slit experiment, is discussed in the section below.

In the double-slit experiment, as originally performed by Thomas Young in 1803[29] and then Augustin Fresnel a decade later,[29] a beam of light is directed through two narrow, closely spaced slits, producing an interference pattern of light and dark bands on a screen. If one of the slits is covered up, one might naïvely expect that the intensity of the fringes due to interference would be halved everywhere. In fact, a much simpler pattern is seen: a diffraction pattern diametrically opposite the open slit. Exactly the same behavior can be demonstrated in water waves, and so the double-slit experiment was seen as a demonstration of the wave nature of light.

Variations of the double-slit experiment have been performed using electrons, atoms, and even large molecules,[30][31] and the same type of interference pattern is seen. Thus it has been demonstrated that all matter possesses both particle and wave characteristics.

Even if the source intensity is turned down, so that only one particle (e.g. photon or electron) is passing through the apparatus at a time, the same interference pattern develops over time. The quantum particle acts as a wave when passing through the double slits, but as a particle when it is detected. This is a typical feature of quantum complementarity: a quantum particle acts as a wave in an experiment to measure its wave-like properties, and like a particle in an experiment to measure its particle-like properties. The point on the detector screen where any individual particle shows up is the result of a random process. However, the distribution pattern of many individual particles mimics the diffraction pattern produced by waves.
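
This one-particle-at-a-time buildup is easy to mimic numerically: draw individual detection positions from a wave-like fringe distribution and watch the pattern emerge only in aggregate (an idealized cos² pattern in arbitrary units, not a model of any specific apparatus):

import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 201)   # detector positions, arbitrary units
p = np.cos(10 * x) ** 2       # idealized interference fringes
p /= p.sum()                  # normalize into a probability distribution

for n_particles in (10, 100, 10000):
    hits = rng.choice(x, size=n_particles, p=p)  # one random dot per particle
    counts, _ = np.histogram(hits, bins=20, range=(-1, 1))
    print(n_particles, counts)  # fringes appear only as counts accumulate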

De Broglie expanded the Bohr model of the atom by showing that an electron in orbit around a nucleus could be thought of as having wave-like properties. In particular, an electron is observed only in situations that permit a standing wave around a nucleus. An example of a standing wave is a violin string, which is fixed at both ends and can be made to vibrate. The waves created by a stringed instrument appear to oscillate in place, moving from crest to trough in an up-and-down motion. The wavelength of a standing wave is related to the length of the vibrating object and the boundary conditions. For example, because the violin string is fixed at both ends, it can carry standing waves of wavelengths 2l/n, where l is the length and n is a positive integer. De Broglie suggested that the allowed electron orbits were those for which the circumference of the orbit would be an integer number of wavelengths. The electron's wavelength therefore determines that only Bohr orbits of certain distances from the nucleus are possible. In turn, at any distance from the nucleus smaller than a certain value it would be impossible to establish an orbit. The minimum possible distance from the nucleus is called the Bohr radius.[32]

De Broglie's treatment of quantum events served as a starting point for Schrödinger when he set out to construct a wave equation to describe quantum theoretical events.

In 1922, Otto Stern and Walther Gerlach shot silver atoms through an inhomogeneous magnetic field. In classical mechanics, a magnet thrown through a magnetic field may be deflected a small or large distance upwards or downwards, depending on its orientation (whether its northern pole points up, down, or somewhere in between). The atoms that Stern and Gerlach shot through the magnetic field acted in a similar way. However, while the magnets could be deflected variable distances, the atoms would always be deflected a constant distance either up or down. This implied that the property of the atom that corresponds to the magnet's orientation must be quantized, taking one of two values (either up or down), as opposed to being chosen freely from any angle.

Ralph Kronig originated the theory that particles such as atoms or electrons behave as if they rotate, or "spin", about an axis. Spin would account for the missing magnetic moment,[clarification needed] and allow two electrons in the same orbital to occupy distinct quantum states if they "spun" in opposite directions, thus satisfying the exclusion principle. The quantum number represented the sense (positive or negative) of spin.

The choice of orientation of the magnetic field used in the Stern–Gerlach experiment is arbitrary. In the setup described here, the field is vertical and so the atoms are deflected either up or down. If the magnet is rotated a quarter turn, the atoms are deflected either left or right. Using a vertical field shows that the spin along the vertical axis is quantized, and using a horizontal field shows that the spin along the horizontal axis is quantized.

If, instead of hitting a detector screen, one of the beams of atoms coming out of the Stern–Gerlach apparatus is passed into another (inhomogeneous) magnetic field oriented in the same direction, all of the atoms are deflected the same way in this second field. However, if the second field is oriented at 90° to the first, then half of the atoms are deflected one way and half the other, so that the atom's spin about the horizontal and vertical axes are independent of each other. However, if one of these beams (e.g. the atoms that were deflected up then left) is passed into a third magnetic field, oriented the same way as the first, half of the atoms go one way and half the other, even though they all went in the same direction originally. The action of measuring the atoms' spin with respect to a horizontal field has changed their spin with respect to a vertical field.
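
A toy simulation captures this bookkeeping: a measurement along one axis collapses the atom into an eigenstate of that axis and erases what was known about the perpendicular axis (a deliberately simplified two-state sketch, not a full quantum treatment):

import random

def measure(atom, axis):
    if atom.get(axis) is None:   # not in an eigenstate of this axis:
        atom.clear()             # perpendicular spin info is lost
        atom[axis] = random.choice(["up", "down"])   # 50/50 collapse
    return atom[axis]

random.seed(1)
results = {"up": 0, "down": 0}
for _ in range(10000):
    atom = {}
    if measure(atom, "vertical") != "up":
        continue                 # keep only the upward-deflected beam
    measure(atom, "horizontal")  # the intervening 90-degree measurement
    results[measure(atom, "vertical")] += 1
print(results)                   # ~50/50, although every atom started "up"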

The Stern–Gerlach experiment demonstrates a number of important features of quantum mechanics: a property of the natural world (here, spin along a given axis) is quantized, able to take only certain discrete values; and the act of measurement changes the system being measured, collapsing it into an eigenstate of the quantity measured.

In 1925, Werner Heisenberg attempted to solve one of the problems that the Bohr model left unanswered, explaining the intensities of the different lines in the hydrogen emission spectrum. Through a series of mathematical analogies, he wrote out the quantum-mechanical analog for the classical computation of intensities.[33] Shortly afterwards, Heisenberg's colleague Max Born realised that Heisenberg's method of calculating the probabilities for transitions between the different energy levels could best be expressed by using the mathematical concept of matrices.[note 9]

In the same year, building on de Broglie's hypothesis, Erwin Schrödinger developed the equation that describes the behavior of a quantum-mechanical wave.[34] The mathematical model, called the Schrödinger equation after its creator, is central to quantum mechanics, defines the permitted stationary states of a quantum system, and describes how the quantum state of a physical system changes in time.[35] The wave itself is described by a mathematical function known as a "wave function". Schrödinger said that the wave function provides the "means for predicting probability of measurement results".[36]

Schrödinger was able to calculate the energy levels of hydrogen by treating a hydrogen atom's electron as a classical wave, moving in a well of electrical potential created by the proton. This calculation accurately reproduced the energy levels of the Bohr model.

In May 1926, Schrödinger proved that Heisenberg's matrix mechanics and his own wave mechanics made the same predictions about the properties and behavior of the electron; mathematically, the two theories had an underlying common form. Yet the two men disagreed on the interpretation of their mutual theory. For instance, Heisenberg accepted the theoretical prediction of jumps of electrons between orbitals in an atom,[37] but Schrödinger hoped that a theory based on continuous wave-like properties could avoid what he called (as paraphrased by Wilhelm Wien) "this nonsense about quantum jumps".[38] In the end, Heisenberg's approach won out, and quantum jumps were confirmed.[39]

Bohr, Heisenberg, and others tried to explain what these experimental results and mathematical models really mean. Their description, known as the Copenhagen interpretation of quantum mechanics, aimed to describe the nature of reality that was being probed by the measurements and described by the mathematical formulations of quantum mechanics.

The main principles of the Copenhagen interpretation include the probabilistic interpretation of the wave function, the uncertainty principle, and the collapse of the wave function upon measurement.

Various consequences of these principles are discussed in more detail in the following subsections.

Suppose it is desired to measure the position and speed of an object, for example, a car going through a radar speed trap. It can be assumed that the car has a definite position and speed at a particular moment in time. How accurately these values can be measured depends on the quality of the measuring equipment. If the precision of the measuring equipment is improved, it provides a result closer to the true value. It might be assumed that the speed of the car and its position could be operationally defined and measured simultaneously, as precisely as might be desired.

In 1927, Heisenberg proved that this last assumption is not correct.[41] Quantum mechanics shows that certain pairs of physical properties, for example position and speed, cannot be simultaneously measured, nor defined in operational terms, to arbitrary precision: the more precisely one property is measured, or defined in operational terms, the less precisely can the other. This statement is known as the uncertainty principle. The uncertainty principle is not only a statement about the accuracy of our measuring equipment, but, more deeply, is about the conceptual nature of the measured quantities: the assumption that the car had simultaneously defined position and speed does not work in quantum mechanics. On a scale of cars and people, these uncertainties are negligible, but when dealing with atoms and electrons they become critical.[42]

Heisenberg gave, as an illustration, the measurement of the position and momentum of an electron using a photon of light. In measuring the electron's position, the higher the frequency of the photon, the more accurate is the measurement of the position of the impact of the photon with the electron, but the greater is the disturbance of the electron. This is because from the impact with the photon, the electron absorbs a random amount of energy, rendering the measurement obtained of its momentum increasingly uncertain (momentum is velocity multiplied by mass), for one is necessarily measuring its post-impact disturbed momentum from the collision products and not its original momentum. With a photon of lower frequency, the disturbance (and hence uncertainty) in the momentum is less, but so is the accuracy of the measurement of the position of the impact.[43]

At the heart of the uncertainty principle is not a mystery, but the simple fact that for any mathematical analysis in the position and velocity domains (Fourier analysis), achieving a sharper (more precise) curve in the position domain can only be done at the expense of a more gradual (less precise) curve in the speed domain, and vice versa. More sharpness in the position domain requires contributions from more frequencies in the speed domain to create the narrower curve, and vice versa. It is a fundamental tradeoff inherent in any such related or complementary measurements, but is only really noticeable at the smallest (Planck) scale, near the size of elementary particles.

The uncertainty principle shows mathematically that the product of the uncertainty in the position and momentum of a particle (momentum is velocity multiplied by mass) can never be less than a certain value, and that this value is related to Planck's constant: Δx·Δp ≥ h/4π.
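
The Fourier tradeoff can be demonstrated numerically: squeeze a Gaussian wave packet in position and its spread of spatial frequencies widens so that the product of the two widths stays fixed (illustrative units, with ħ set to 1):

import numpy as np

x = np.linspace(-50, 50, 4096)
dx = x[1] - x[0]
for sigma in (0.5, 1.0, 2.0):                     # position width
    psi = np.exp(-x**2 / (4 * sigma**2))          # Gaussian wave packet
    spec = np.abs(np.fft.fftshift(np.fft.fft(psi))) ** 2
    k = np.fft.fftshift(np.fft.fftfreq(x.size, d=dx)) * 2 * np.pi
    sigma_k = np.sqrt((k**2 * spec).sum() / spec.sum())
    print(f"sigma_x = {sigma}, sigma_k = {sigma_k:.3f}, "
          f"product = {sigma * sigma_k:.3f}")     # stays pinned near 0.5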

Wave function collapse means that a measurement has forced or converted a quantum (probabilistic or potential) state into a definite measured value. This phenomenon is only seen in quantum mechanics rather than classical mechanics.

For example, before a photon actually "shows up" on a detection screen it can be described only with a set of probabilities for where it might show up. When it does appear, for instance in the CCD of an electronic camera, the time and the space where it interacted with the device are known within very tight limits. However, the photon has disappeared in the process of being captured (measured), and its quantum wave function has disappeared with it. In its place some macroscopic physical change in the detection screen has appeared, e.g., an exposed spot in a sheet of photographic film, or a change in electric potential in some cell of a CCD.

Because of the uncertainty principle, statements about both the position and momentum of particles can assign only a probability that the position or momentum has some numerical value. Therefore, it is necessary to formulate clearly the difference between the state of something that is indeterminate, such as an electron in a probability cloud, and the state of something having a definite value. When an object can definitely be "pinned-down" in some respect, it is said to possess an eigenstate.

In the Stern–Gerlach experiment discussed above, the spin of the atom about the vertical axis has two eigenstates: up and down. Before measuring it, we can only say that any individual atom has equal probability of being found to have spin up or spin down. The measurement process causes the wavefunction to collapse into one of the two states.

The eigenstates of spin about the vertical axis are not simultaneously eigenstates of spin about the horizontal axis, so this atom has equal probability of being found to have either value of spin about the horizontal axis. As described in the section above, measuring the spin about the horizontal axis can allow an atom that was measured as spin up to later be found spin down: measuring its spin about the horizontal axis collapses its wave function into one of the eigenstates of this measurement, which means it is no longer in an eigenstate of spin about the vertical axis, so it can take either value.

In 1924, Wolfgang Pauli proposed a new quantum degree of freedom (or quantum number), with two possible values, to resolve inconsistencies between observed molecular spectra and the predictions of quantum mechanics. In particular, the spectrum of atomic hydrogen had a doublet, or pair of lines differing by a small amount, where only one line was expected. Pauli formulated his exclusion principle, stating, "There cannot exist an atom in such a quantum state that two electrons within [it] have the same set of quantum numbers."[44]

A year later, Uhlenbeck and Goudsmit identified Pauli's new degree of freedom with the property called spin, whose effects were observed in the Stern-Gerlach experiment.

Bohr's model of the atom was essentially a planetary one, with the electrons orbiting around the nuclear "sun". However, the uncertainty principle states that an electron cannot simultaneously have an exact location and velocity in the way that a planet does. Instead of classical orbits, electrons are said to inhabit atomic orbitals. An orbital is the "cloud" of possible locations in which an electron might be found, a distribution of probabilities rather than a precise location.[44] Each orbital is three-dimensional, unlike the two-dimensional orbits of the Bohr model, and is often depicted as a three-dimensional region within which there is a 95 percent probability of finding the electron.[45]

Schrödinger was able to calculate the energy levels of hydrogen by treating a hydrogen atom's electron as a wave, represented by the wave function ψ, in an electric potential well, V, created by the proton. The solutions to Schrödinger's equation are probability distributions for electron positions. Orbitals have a range of different shapes in three dimensions. The energies of the different orbitals can be calculated, and they accurately match the energy levels of the Bohr model.
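
For reference, the time-independent equation Schrödinger solved for hydrogen has the standard textbook form (the article does not write it out):

    -\frac{\hbar^2}{2m}\nabla^2\psi(\mathbf{r}) + V(r)\,\psi(\mathbf{r}) = E\,\psi(\mathbf{r}),
    \qquad V(r) = -\frac{e^2}{4\pi\varepsilon_0 r},

where m is the electron mass, E is the energy of the state, and V(r) is the Coulomb potential well created by the proton.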

Within Schrödinger's picture, each electron has four properties, described in turn below:

The collective name for these properties is the quantum state of the electron. The quantum state can be described by giving a number to each of these properties; these are known as the electron's quantum numbers. The quantum state of the electron is described by its wave function. The Pauli exclusion principle demands that no two electrons within an atom may have the same values of all four numbers.

The first property describing the orbital is the principal quantum number, n, which is the same as in Bohr's model. n denotes the energy level of each orbital. The possible values for n are the positive integers: n = 1, 2, 3, ...

The next quantum number, the azimuthal quantum number, denoted l, describes the shape of the orbital. The shape is a consequence of the angular momentum of the orbital. The angular momentum represents the resistance of a spinning object to speeding up or slowing down under the influence of external force. The azimuthal quantum number represents the orbital angular momentum of an electron around its nucleus. The possible values for l are the integers from 0 to n − 1 (where n is the principal quantum number of the electron): l = 0, 1, 2, ..., n − 1.

The shape of each orbital is usually referred to by a letter, rather than by its azimuthal quantum number. The first shape (l=0) is denoted by the letter s (a mnemonic being "sphere"). The next shape is denoted by the letter p and has the form of a dumbbell. The other orbitals have more complicated shapes (see atomic orbital), and are denoted by the letters d, f, g, etc.

The third quantum number, the magnetic quantum number, describes the magnetic moment of the electron, and is denoted by ml (or simply m). The possible values for ml are the integers from −l to l (where l is the azimuthal quantum number of the electron): ml = −l, −(l − 1), ..., 0, ..., l − 1, l.

The magnetic quantum number measures the component of the angular momentum in a particular direction. The choice of direction is arbitrary; conventionally the z-direction is chosen.

The fourth quantum number, the spin quantum number (pertaining to the "orientation" of the electron's spin), is denoted ms, with values +1/2 or −1/2.
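
The nesting of these four numbers is mechanical enough to enumerate directly. Here is a small illustrative Python sketch (my own, not from the article) that generates every allowed (n, l, ml, ms) combination under the rules above and counts the states in each shell, reproducing the familiar shell capacities 2, 8, 18, ...:

    from fractions import Fraction

    def quantum_states(n_max):
        # Yield every allowed (n, l, m_l, m_s) combination up to n_max.
        for n in range(1, n_max + 1):            # principal: n = 1, 2, 3, ...
            for l in range(0, n):                # azimuthal: 0 .. n - 1
                for m_l in range(-l, l + 1):     # magnetic: -l .. l
                    for m_s in (Fraction(1, 2), Fraction(-1, 2)):  # spin
                        yield (n, l, m_l, m_s)

    for shell in range(1, 4):
        capacity = sum(1 for s in quantum_states(shell) if s[0] == shell)
        print(f"shell n={shell}: {capacity} states")   # 2, 8, 18

By the Pauli exclusion principle, each of these combinations can be occupied by at most one electron, which is exactly the counting Pauling applies to helium below.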

The chemist Linus Pauling wrote, by way of example:

"In the case of a helium atom with two electrons in the 1s orbital, the Pauli Exclusion Principle requires that the two electrons differ in the value of one quantum number. Their values of n, l, and ml are the same. Accordingly they must differ in the value of ms, which can have the value of +1/2 for one electron and −1/2 for the other."[44]

It is the underlying structure and symmetry of atomic orbitals, and the way that electrons fill them, that leads to the organisation of the periodic table. The way the atomic orbitals on different atoms combine to form molecular orbitals determines the structure and strength of chemical bonds between atoms.

In 1928, Paul Dirac extended the Pauli equation, which described spinning electrons, to account for special relativity. The result was a theory that dealt properly with events, such as the speed at which an electron orbits the nucleus, occurring at a substantial fraction of the speed of light. By using the simplest electromagnetic interaction, Dirac was able to predict the value of the magnetic moment associated with the electron's spin, and found the experimentally observed value, which was too large to be that of a spinning charged sphere governed by classical physics. He was able to solve for the spectral lines of the hydrogen atom, and to reproduce from physical first principles Sommerfeld's successful formula for the fine structure of the hydrogen spectrum.

Dirac's equations sometimes yielded a negative value for energy, for which he proposed a novel solution: he posited the existence of an antielectron and of a dynamical vacuum. This led to the many-particle quantum field theory.

The Pauli exclusion principle says that two electrons in one system cannot be in the same state. Nature leaves open the possibility, however, that two electrons can have both states "superimposed" over each of them. Recall that the wave functions that emerge simultaneously from the double slits arrive at the detection screen in a state of superposition. Nothing is certain until the superimposed waveforms "collapse". At that instant an electron shows up somewhere in accordance with the probability that is the square of the absolute value of the sum of the complex-valued amplitudes of the two superimposed waveforms. The situation there is already very abstract. A concrete way of thinking about entangled photons, photons in which two contrary states are superimposed on each of them in the same event, is as follows:

Imagine that we have two color-coded states of photons: one state labeled blue and another state labeled red. Let the superposition of the red and the blue state appear (in imagination) as a purple state. We consider a case in which two photons are produced as the result of one single atomic event. Perhaps they are produced by the excitation of a crystal that characteristically absorbs a photon of a certain frequency and emits two photons of half the original frequency. In this case, the photons are connected with each other via their shared origin in a single atomic event. This setup results in superimposed states of the photons. So the two photons come out purple. If the experimenter now performs some experiment that determines whether one of the photons is either blue or red, then that experiment changes the photon involved from one having a superposition of blue and red characteristics to a photon that has only one of those characteristics. The problem that Einstein had with such an imagined situation was that if one of these photons had been kept bouncing between mirrors in a laboratory on earth, and the other one had traveled halfway to the nearest star, when its twin was made to reveal itself as either blue or red, that meant that the distant photon now had to lose its purple status too. So whenever it might be investigated after its twin had been measured, it would necessarily show up in the opposite state to whatever its twin had revealed.

In trying to show that quantum mechanics was not a complete theory, Einstein started with the theory's prediction that two or more particles that have interacted in the past can appear strongly correlated when their various properties are later measured. He sought to explain this seeming interaction in a classical way, through their common past, and preferably not by some "spooky action at a distance". The argument is worked out in a famous paper, Einstein, Podolsky, and Rosen (1935; abbreviated EPR), setting out what is now called the EPR paradox. Assuming what is now usually called local realism, EPR attempted to show from quantum theory that a particle has both position and momentum simultaneously, while according to the Copenhagen interpretation, only one of those two properties actually exists and only at the moment that it is being measured. EPR concluded that quantum theory is incomplete in that it refuses to consider physical properties that objectively exist in nature. (Einstein, Podolsky, & Rosen 1935 is currently Einstein's most cited publication in physics journals.) In the same year, Erwin Schrödinger used the word "entanglement" and declared: "I would not call that one but rather the characteristic trait of quantum mechanics."[46] Ever since Irish physicist John Stewart Bell showed theoretically that such local "hidden variables" theories make predictions that differ from quantum mechanics, and experiments confirmed the quantum predictions, most physicists have accepted entanglement as a real phenomenon.[47] However, there is some minority dispute.[48] The Bell inequalities are the most powerful challenge to Einstein's claims.
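
The simplest textbook example of such an entangled state (not written out in the article) is a two-particle spin state that cannot be factored into separate states for particles A and B:

    |\psi\rangle \;=\; \frac{1}{\sqrt{2}}\Big(\, |{\uparrow}\rangle_A |{\downarrow}\rangle_B \;-\; |{\downarrow}\rangle_A |{\uparrow}\rangle_B \,\Big)

Measuring A to be "up" leaves B "down", and vice versa, no matter how far apart the particles are; this is the correlation EPR sought to explain classically and that the Bell inequalities test.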

The idea of quantum field theory began in the late 1920s with British physicist Paul Dirac, when he attempted to quantize the electromagnetic field, a procedure for constructing a quantum theory starting from a classical theory.

Merriam-Webster defines a field in physics as "a region or space in which a given effect (such as magnetism) exists".[49] Other effects that manifest themselves as fields are gravitation and static electricity.[50] In 2008, physicist Richard Hammond wrote:

Read more here:

Introduction to quantum mechanics - Wikipedia

Unified Field Theory: Einstein Failed, but What’s the Future? – The Great Courses Daily News

By Dan Hooper, Ph.D., University of Chicago

(Image caption: String theory is considered one of the candidate future unified field theories. Image: Natali Art collections/Shutterstock)

Einstein's First Attempt at Unified Field Theory

In 1923, Einstein published a series of papers that built upon and expanded Eddington's work on the affine connection. Later in the same year, he wrote another paper, in which he argued that this theory might make it possible to restore determinism to quantum physics.

These papers of Einstein's were covered enthusiastically by the press, since he was the only living scientist who was a household name. Although few journalists really understood the theory that Einstein was putting forth, they did understand that he was proposing something potentially very important.

But unfortunately, the promise did not hold up. Few of Einstein's colleagues were impressed by this work, and within a couple of years even Einstein accepted that his approach was deeply flawed. If he was going to find a viable unified field theory, he would have to find another way of approaching the problem.

Einstein's next major effort in this direction came in the late 1920s. This new approach was based on an idea known as distant parallelism, and it was mathematically very complex: Einstein treated both the metric tensor and the affine connection as fundamental quantities, trying to take full advantage of both.

Once again, the press responded enthusiastically. But again, Einstein's colleagues did not. One reason for this was that Einstein was trying to build a theory that would unify general relativity with Maxwell's theory of electromagnetism. But over the course of the 1920s, Maxwell's classical theory had been replaced by the new quantum theory. Although Maxwell's equations are still useful today, they are really only an approximation to the true quantum nature of the universe.

For this reason, many physicists saw Einstein's efforts to unify classical electromagnetism with general relativity as old-fashioned. Einstein seems to have been hoping that quantum mechanics was just a fad. But he was dead wrong. Quantum mechanics was here to stay.

In the years that followed, Einstein continued to explore different approaches to a unified field theory. He worked extensively with five-dimensional theories throughout much of the 1930s, then moved on to a number of other ideas during the 1940s and 50s. But none of these approaches ever attempted to incorporate quantum mechanics.

In his thirty-year search for a unified field theory, Einstein never found anything that could reasonably be called a success. Over these three decades, Einstein's fixation on classical field theories, and his rejection of quantum mechanics, increasingly isolated him from the larger physics community.

There were fewer and fewer thought experiments, and Einstein's physical intuition, once so famous, was pushed aside and replaced by endless pages of complicated, intertwining equations. Even during the last days of his life, Einstein continued his search for the unified field theory, but nothing of consequence ever came of it.

When Einstein died in 1955, he was really no closer to a unified field theory than he was thirty years before.

In recent decades, physicists have once again become interested in theories that could potentially combine and unify multiple facets of nature. In spirit, these theories have a lot in common with Einstein's dream of a unified field theory. But, in other ways, they are very different. For one thing, many important discoveries have been made since Einstein's death. And these discoveries have significantly changed how physicists view the prospect of building a unified field theory.

Einstein was entirely focused on electromagnetism and gravity, but physicists have since discovered two new forces in nature: the weak and strong nuclear forces. The strong nuclear force holds protons and neutrons together within the nuclei of atoms, and the weak nuclear force is responsible for certain radioactive decays and for the process of nuclear fission.

Electromagnetism has a lot in common with the strong and weak nuclear forces, and it is not particularly hard, at least in principle, to construct theories in which these phenomena are unified into a single framework. Such theories are known as grand unified theories, or GUTs for short. Since their inception in the 1970s, a number of different grand unified theories have been proposed.

Grand unified theories are incredibly powerful, and in principle they can predict and explain a huge range of phenomena. But they are also very hard to test and explore experimentally. It's not that these theories are untestable in principle: if one could build a big enough particle accelerator, one could almost certainly find out exactly how these three forces fit together into a grand unified theory.

But with the kinds of experiments we currently know how to build, and the kinds of experiments that we can afford to build, it's just not possible to test most grand unified theories. There are, however, possible exceptions. One is that most of these theories predict that protons should occasionally decay, which is the kind of phenomenon that can be tested. So far, the limited searches have not observed proton decay, but larger experiments are planned that could put these theories to the test.

But even grand unified theories are not as far-reaching as the kinds of unified field theories that Einstein spent so much of his life searching for. Grand unified theories bring together electromagnetism with the strong and weak forces, but they don't connect these phenomena with general relativity. Modern physicists are therefore also looking for theories that can combine general relativity with the other forces of nature.

We hope that such a theory could unify all four of the known forces, including gravity. And since the aim of such a theory is to describe all of the laws of physics that govern our universe, we call it a theory of everything.

The focus today, though, is on how to merge the geometric effects of general relativity with the quantum mechanical nature of our world. What we are really searching for is a quantum theory of gravity.

The most promising theories of quantum gravity explored so far have been found within the context of string theory. In string theory, fundamental objects are not point-like particles, but instead are extended objects, including one-dimensional strings.

Research into string theory has revealed a number of strange things. For example, it was discovered in the 1980s that string theories are only mathematically consistent if the universe contains extra spatial dimensions: dimensions that are similar in many respects to those originally proposed by Theodor Kaluza.

Although string theory remains a major area of research in modern physics, there is still much we don't understand about it. And we don't know for sure whether it will ever lead to a viable theory of everything.

In many ways, these modern unified theories have very little in common with those explored by Einstein. But in spirit, they are trying to answer the same kinds of questions. They are each trying to explain as much about our world as possible, as simply as they possibly can.

Einstein's unified field theory was an attempt to unify the fundamental theories of electromagnetism and general relativity into a single theoretical framework.

Superstring theory requires ten dimensions in total: nine of space plus one of time. M-theory extends this to eleven dimensions, and some physicists have considered theories with even more.

Gravity is not a dimension. It's a fundamental force, visualized in general relativity as a bending of space and time.

In everyday life, we encounter three dimensions of space: height, width, and depth, which have been recognized for centuries.

Read the original:

Unified Field Theory: Einstein Failed, but What's the Future? - The Great Courses Daily News

Free Will Astrology: May 6, 2020 – River Cities Reader

ARIES (March 21-April 19): According to Aries author and mythologist Joseph Campbell, "The quest for fire occurred not because anyone knew what the practical uses for fire would be, but because it was fascinating." He was referring to our early human ancestors, and how they stumbled upon a valuable addition to their culture because they were curious about a powerful phenomenon, not because they knew it would ultimately be so valuable. I invite you to be guided by a similar principle in the coming weeks, Aries. Unforeseen benefits may emerge during your investigation into flows and bursts that captivate your imagination.

TAURUS (April 20-May 20): "The future belongs to those who see possibilities before they become obvious," says businessperson and entrepreneur John Sculley. You Tauruses aren't renowned for such foresight. It's more likely to belong to Aries and Sagittarius people. Your tribe is more likely to specialize in doing the good work that turns others' bright visions into practical realities. But this Year of the Coronavirus could be an exception to the general rule. In the past three months as well as in the next six months, many of you Bulls have been and will continue to be catching glimpses of interesting possibilities before they become obvious. Give yourself credit for this knack. Be alert for what it reveals.

GEMINI (May 21-June 20): For 148 uninterrupted years, American militias and the American army waged a series of wars against the native peoples who lived on the continent before Europeans came. There were more than 70 conflicts that lasted from 1776 until 1924. If there is any long-term struggle or strife that even mildly resembles that situation in your own personal life, our Global Healing Crisis is a favorable time to call a truce and cultivate peace. Start now! It's a ripe and propitious time to end hostilities that have gone on too long.

CANCER (June 21-July 22): Novelist Marcel Proust was a sensitive, dreamy, emotional, self-protective, creative Cancerian. That may explain why he wasn't a good soldier. During his service in the French army, he was ranked 73rd in a squad of 74. On the other hand, his majestically intricate seven-volume novel In Search of Lost Time is a masterpiece, one of the 20th century's most influential literary works. In evaluating his success as a human being, should we emphasize his poor military performance and downplay his literary output? Of course not! Likewise, Cancerian, in the coming weeks I'd like to see you devote vigorous energy to appreciating what you do best and no energy at all to worrying about your inadequacies.

LEO (July 23-August 22): "Fortune resists half-hearted prayers," wrote the poet Ovid more than 2,000 years ago. I will add that Fortune also resists poorly formulated intentions, feeble vows, and sketchy plans, especially now, during a historical turning point when the world is undergoing massive transformations. Luckily, I don't see those lapses being problems for you in the coming weeks, Leo. According to my analysis, you're primed to be clear and precise. Your willpower should be working with lucid grace. You'll have an enhanced ability to assess your assets and make smart plans for how to use them.

VIRGO (August 23-September 22): Last year the Baltimore Museum of Art announced it would acquire works exclusively from women artists in 2020. A male art critic complained, "That's unfair to male artists." Here's my reply: Among major permanent art collections in the U.S. and Europe, the work of women makes up five percent of the total. So what the Baltimore Museum did is a righteous attempt to rectify the existing excess. It's a just and fair way to address an unhealthy imbalance. In accordance with current omens and necessities, Virgo, I encourage you to perform a comparable correction in your personal sphere.

LIBRA (September 23-October 22): In the course of my life, I've met many sharp thinkers with advanced degrees from fine universities who are nonetheless stunted in their emotional intelligence. They may quote Shakespeare and discourse on quantum physics and explain the difference between the philosophies of Kant and Hegel, and yet have less skill in understanding the inner workings of human beings or in creating vibrant intimate relationships. Yet most of these folks are not extreme outliers. I've found that virtually all of us are smarter in our heads than we are in our hearts. The good news, Libra, is that our current Global Healing Crisis is an excellent time for you to play catch up. Do what poet Lawrence Ferlinghetti suggests: "Make your mind learn its way around the heart."

SCORPIO (October 23-November 21): Aphorist Aaron Haspel writes, "The less you are contradicted, the stupider you become. The more powerful you become, the less you are contradicted." Let's discuss how this counsel might be useful to you in the coming weeks. First of all, I suspect you will be countered and challenged more than usual, which will offer you rich opportunities to become smarter. Secondly, I believe you will become more powerful as long as you don't try to stop or discourage the influences that contradict you. In other words, you'll grow your personal authority and influence to the degree that you welcome opinions and perspectives that are not identical to yours.

SAGITTARIUS (November 22-December 21): "It's always too early to quit," wrote author Norman Vincent Peale. We should put his words into perspective, though. He preached "the power of positive thinking." He was relentless in his insistence that we can and should transcend discouragement and disappointment. So we should consider the possibility that he was overly enthusiastic in his implication that we should never give up. What do you think, Sagittarius? I'm guessing this will be an important question for you to consider in the coming weeks. It may be time to re-evaluate your previous thoughts on the matter and come up with a fresh perspective. For example, maybe it's right to give up on one project if it enables you to persevere in another.

CAPRICORN (December 22-January 19): The 16th-Century mystic nun Saint Teresa of Avila was renowned for being overcome with rapture during her spiritual devotions. At times she experienced such profound bliss through her union with God that she levitated off the ground. "Any real ecstasy is a sign you are moving in the right direction," she wrote. I hope that you will be periodically moving in that direction yourself during the coming weeks, Capricorn. Although it may seem odd advice to receive during our Global Healing Crisis, I really believe you should make appointments with euphoria, delight, and enchantment.

AQUARIUS (January 20-February 18): Grammy-winning musician and composer Pharrell Williams has expertise in the creative process. "If someone asks me what inspires me," he testifies, "I always say, 'That which is missing.'" According to my understanding of the astrological omens, you would benefit from making that your motto in the coming weeks. Our Global Healing Crisis is a favorable time to discover what's absent or empty or blank about your life, and then learn all you can from exploring it. I think you'll be glad to be shown what you didn't consciously realize was lost, omitted, or lacking.

PISCES (February 19-March 20): "I am doing my best to not become a museum of myself," declares poet Natalie Diaz. I think she means that she wants to avoid defining herself entirely by her past. She is exploring tricks that will help her keep from relying so much on her old accomplishments that she neglects to keep growing. Her goal is to be free of her history, not to be weighed down and limited by it. These would be worthy goals for you to work on in the coming weeks, Pisces. What would your first step be?

Experiment: To begin the next momentous healing, tell the simple, brave, and humble truth about yourself. Testify at FreeWillAstrology.com.

Original post:

Free Will Astrology: May 6, 2020 - River Cities Reader

Elon Musk and Grimes Named Their Baby X Æ A-12, Which Must Mean Something, Right? – Esquire

UPDATE 2: A few hours after Grimes provided a detailed explanation for the name of her baby, the father, Elon Musk, chimed in to correct a slight error in her tweet. Grimes said the A-12 part of the baby's name, X Æ A-12, came from the precursor to their favorite aircraft, which she mistakenly typed as the "SR-17". Musk responded to her tweet, offering a correction: "SR-71, but yes," he wrote. To which Grimes responded, "I am recovering from surgery and barely alive so may my typos b forgiven but, damnit. That was meant to be profound."

UPDATE: Grimes has explained the meaning behind her and Elon Musk's baby's name, X Æ A-12. As she wrote on Twitter:

Original post below:

As every new parent knows, there is no greater feeling than looking into a newborn child's eyes, and then assigning it a name that reads like the phonetic spelling of the sound a dial-up modem makes while connecting to the World Wide Web. Potentially, that is what Grimes and Elon Musk did in the wee hours of the morning when the couple welcomed a new child into the world. While Grimes has opted to let the child choose its gender, Elon Musk is a bit more ... well...

When asked what the child's name is, Musk also dropped this little nugget: the newest Musk heir is named X Æ A-12 Musk. The baby looks more like an X Æ A-10 than an X Æ A-12, but this is their choice to make as parents. The internet immediately began dunking on the couple, but Grimes and Elon? These are smart people. If you can send a car to space, you deserve a little more credit. One very complex Reddit theory explains that the symbol (Æ) could represent "Ash," the A-12 could represent the Archangel design effort by Lockheed Martin, and the X is a placeholder, meaning the actual name would be [placeholder] Ash Archangel Musk, which is quite possibly the most Elon Musk x Grimes collaboration that could ever happen.

But there could be any number of other explanations for this baby name. What if it really was inspired by the noise of a dial-up modem connecting to CompuServe circa 1999? What if, in a real Muskian twist, that's not the baby's name, but just its make and model? Perhaps it's a quantum physics equation the child solved while in the womb?

What if this isn't actually the name of their child, but just a public joke they're playing to keep the actual name private? We may never know. Welcome to the world, X Æ A-12. If that is your real name, kindergarten is going to be rough.

Originally posted here:

Elon Musk and Grimes Named Their Baby X Æ A-12, Which Must Mean Something, Right? - Esquire

What Is Einstein’s Unified Field Theory? – The Great Courses Daily News

By Dan Hooper, Ph.D., University of Chicago

(Image caption: Newton's law of universal gravitation is a unified theory of gravitation and elliptical orbits. Image: Maksym Bondarenko/Shutterstock)

A unified field theory, Einstein hoped, would combine and merge the theory of general relativity with the theory of electromagnetism, fusing them together into a singular physical and mathematical framework. The theory that Einstein had hoped to discover would be far more powerful, and more far-reaching, than either of these individual theories could ever be alone.

Unification has played a very important role in the history of physics; arguably, many of the greatest accomplishments in physics are examples of unification. Unification in this context means uniting two or more ideas that were thought to be completely distinct, by showing that they are different aspects of the same underlying phenomenon.

Consider, for example, what we call gravity. Prior to Isaac Newton, gravity was seen as a force that pulls things downward, toward the Earth. Independently, Johannes Kepler and others had shown that planets follow elliptical orbits around the Sun. But at the time, no one knew why planets followed these orbits. It was just known that they did.

But Isaac Newton changed all of that. With his theory of universal gravitation, Newton combined or unified these seemingly very different phenomena with a single overarching principle. His proposal was that all kinds of mass attract one another with a strength proportional to their masses, and inversely proportional to the square of the distance separating them. With this simple relationship, Newton found an idea that could explain both why massive objects are pulled toward the Earth, and why planets, and moons and comets, for that matter, follow the trajectories that they do as they move through the solar system.
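
In symbols, the relationship Newton proposed is the familiar inverse-square law:

    F = G\,\frac{m_1 m_2}{r^2}

where m1 and m2 are the two masses, r is the distance between them, and G is the gravitational constant.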

Another important example of unification in physics took place almost two hundred years later, in the 19th century. Before then, electricity and magnetism were conceived of as unrelated phenomena. Electricity, on the one hand, was responsible for things like lightning and static charge; on the other hand, magnetism caused compass needles to point north. They were seen as entirely different forces, which acted in different ways, and acted on different things.

But by the mid-1800s, the work of physicists such as Michael Faraday and James Clerk Maxwell showed that electricity and magnetism were related. In fact, they discovered that a magnetic field is itself nothing more than a moving or changing electric field. In other words, magnetism is just electricity in motion. Its effects might seem different; but beneath it all, it's really just another aspect of the same underlying thing. Today, physicists talk about electromagnetism as a singular aspect of nature, and from a modern perspective that's what it is.

But before the unifying work by Faraday and Maxwell, the phrase "electromagnetism" wouldn't have made any sense at all. Only after the unified theory of electromagnetism was discovered could one see any reason to think about electricity and magnetism as being connected to one another in any meaningful way.

Sometimes, when two ideas are found to be deeply connected, they reveal new and surprising things in the process. In the case of electromagnetism, the equations that were discovered to relate electricity and magnetism to one another also described the nature and behavior of light waves. We found that light is an electromagnetic wave: a combination of oscillating electric and magnetic fields that move together through space.

Every time that physicists manage to successfully unify a set of seemingly unrelated phenomena, they are left with a more powerful theory. A unified theory can explain more, and do it with less. Newton's unified theory of gravity explains why objects fall down and why planets move in elliptical orbits. It explains all of this more simply than the theories that preceded it could. But in addition, Newton's theory can be used to understand and predict many things that the preceding theories simply couldn't.

Kepler's equations alone cannot tell you how heavy a bowling ball will be on the Moon, or predict the trajectory of a satellite around the Earth. One can predict these things using Newton's theory: because it unifies multiple ideas into one, it can explain more. It's more powerful.
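
To make the bowling-ball example concrete, here is a short worked calculation in Python (my own numbers, using standard values for the Moon) of the kind of prediction Newton's law permits and Kepler's laws alone do not:

    # Weight of a 7.26 kg bowling ball on the Moon, from F = G * M * m / r^2.
    G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
    M_MOON = 7.342e22    # mass of the Moon, kg
    R_MOON = 1.7371e6    # mean radius of the Moon, m
    m_ball = 7.26        # mass of a regulation bowling ball, kg

    g_moon = G * M_MOON / R_MOON**2
    print(f"lunar surface gravity: {g_moon:.2f} m/s^2")        # about 1.62 m/s^2
    print(f"ball weight on the Moon: {g_moon * m_ball:.1f} N")  # about 11.8 N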

Einstein was also looking for a similarly powerful theory. His quest for a unified field theory started only a couple of years after he completed his general theory of relativity, around 1918. At the time, there were two fundamental theories that were central to how physicists understood their universe. One of these was Einstein's new general theory of relativity, which explained the phenomenon of gravity and its relationship to space and time.

Then, there was the theory of electromagnetism, usually written as a set of four equations known as Maxwell's equations. Maxwell's equations can be used to describe a wide range of phenomena associated with electricity and magnetism, including light. So from Einstein's perspective, there were two different facets of nature before him, and both of these theories were individually powerful and mathematically elegant.

Einstein admired James Clerk Maxwell a great deal, and he held Maxwell's equations of electromagnetism in very high regard. Einstein also, of course, was quite fond of his own theory of general relativity. These two theories were among the greatest accomplishments in all of physics. As far as anyone could tell, the theories of general relativity and electromagnetism seemed to be basically unrelated to one another. But Einstein wasn't so sure that this was really the case.

After all, electricity and magnetism had seemed unrelated until Faraday and Maxwell showed otherwise. Einstein wanted to do something similar with general relativity and electromagnetism. Like many of the greatest physicists before him, he wanted to build a more powerful and widely applicable theory that could predict and explain more than its predecessors could.

A unified theory tries to unite, in a single mathematical framework, the electromagnetic and weak forces with the strong force, and ultimately with gravity as well.

The four fundamental forces that unified theories attempt to combine are gravitation, electromagnetism, the weak interaction, and the strong interaction. In the Standard Model of quantum field theory, the non-gravitational forces are mediated by fields and result from the exchange of gauge bosons.

The first successful classical unified field theory was developed by James Clerk Maxwell, who combined the theories of electricity and magnetism into electromagnetic field theory and led the way for later unified theories.

Einstein, in the latter part of his career, wanted to unify the theories of general relativity and the electromagnetic field into one unified theory. He wasn't able to achieve any significant success in this goal, though.

Originally posted here:

What Is Einstein's Unified Field Theory? - The Great Courses Daily News

Early Research on Unified Field Theory – The Great Courses Daily News

By Dan Hooper, Ph.D., University of Chicago

(Image caption: General relativity's equations use a mathematical structure called the metric tensor, which Hermann Weyl tried to extend in his unified field theory. Image: Photomontage/Shutterstock)

Weyl's Metric Tensor and Unified Field Theory

The first attempt at a unified field theory wasn't made directly by Einstein himself. Instead, it was made by the German physicist and mathematician Hermann Weyl. However, Einstein and Weyl were in communication during this time, and they discussed some aspects of the problem together. So, at least to some extent, Einstein was involved.

From Weyl's perspective, there was one central challenge that made it so hard to combine general relativity and electromagnetism into one unified field theory: general relativity is a theory of geometry, while electromagnetism is not. Maxwell's equations describe the forces that act on electrically charged particles; they don't involve any changes to the geometry of space or time.

Weyl felt that if he wanted to merge these two theories together into a common framework, he would need to find a new geometrical way to formulate the theory of electromagnetism. In general relativity, the geometry of space and time is described by a mathematical object called the metric tensor. A tensor is essentially a special kind of matrix or array of numbers.

In general relativity, the metric tensor is a 4 × 4 array of numbers, so it contains a total of sixteen entries. But of these sixteen quantities, six are redundant, so there are really only 10 independent numbers described by the metric tensor. And we need all 10 of these numbers just to describe the effects of gravity.

The problem in combining general relativity with electromagnetism is that incorporating electromagnetism requires at least four more numbers at every point in space. This made it hard to see how one could explain both gravity and electromagnetism in terms of geometry: there just aren't enough numbers in the metric tensor to describe both at the same time.

To try to get around this problem, Weyl proposed a version of non-Euclidean geometry. In doing so, he argued that it was possible to construct a geometrical system that wasn't limited to the 10 independent numbers. In addition to those 10 numbers, Weyl's version of the metric tensor contained other additional quantities. And Weyl hoped that these additional numbers could somehow encode the effects of electromagnetism.

The theory that Weyl ultimately came up with was very complicated. Although it was mathematically sound, physically it just didn't make much sense. After a series of exchanges with Einstein, even Weyl became convinced that his work hadn't gotten them any closer to a viable unified field theory.

Only about a year later, another idea in this direction was proposed, this time by the mathematician Theodor Kaluza. Most people find Kaluza's idea pretty strange and surprising: he proposed a unified field theory in which the space and time of our universe have not four dimensions, but five.

To see why a fifth dimension might be helpful in building a unified field theory, we need to recall the metric tensor. It is a 4 × 4 array of numbers, for a total of sixteen entries, 10 of which are independent of each other. But the metric tensor is a 4 × 4 array only because it was formulated in four-dimensional spacetime. If spacetime is five-dimensional, then the metric tensor will be a 5 × 5 array of numbers, for a total of twenty-five entries.

After removing all of the redundant entries, the five-dimensional metric tensor contains fifteen independent quantities. Ten of these fifteen numbers are needed to describe gravity, and this leaves five others, which is more than enough to potentially encode the phenomena of electromagnetism.
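
The bookkeeping behind these counts is simple: a symmetric n × n tensor has n(n + 1)/2 independent entries. A tiny Python sketch (my own illustration, not from the article) reproduces the numbers used above:

    # Independent entries of a symmetric n-by-n metric tensor: n(n + 1) / 2.
    def independent_components(n):
        return n * (n + 1) // 2

    print(independent_components(4))   # 10 -> exactly what gravity needs
    print(independent_components(5))   # 15 -> 10 for gravity, 5 left over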

There is, though, one immediate and obvious objection that one might raise to Kaluza's five-dimensional theory: as far as we can tell, our universe doesn't have a fifth dimension.

Fortunately, there is a way that a fifth dimension might be able to remain hidden in a system like Kaluza's. In this geometrical system, the fifth dimension isn't like the others. The three dimensions of space that we are familiar with are large, and as far as we know, they go on forever in any given direction. If there were an extra dimension like this, it would be impossible for us not to notice it.

But the fifth dimension being imagined by Kaluza doesn't go on forever. Instead, it's wrapped up, or curled up, into a tiny circle. If something moved even a short distance along the direction of this fifth dimension, it would simply return to where it started. If the circumference of the fifth dimension is small enough, it would be almost impossible for us to perceive it.

It was in 1919 that Kaluza first described his idea to Einstein. And despite the fact that there were significant problems with the five-dimensional theory, Einstein liked it a great deal.

With Einstein's help, Kaluza managed to publish his theory a couple of years later, in 1921. Only a few weeks after that, Einstein himself wrote and published an article investigating some aspects of similar five-dimensional unified field theories. But despite the enthusiasm, it was pretty clear that there were serious problems with Kaluza's theory. Einstein, though, continued to work on it, not because he thought it was a viable unified field theory, but because he thought it might lead to something more promising.

After all, while Einstein was developing general relativity, he went through several incorrect versions of the gravitational field equations before he found the right answer.

Another scientist who worked on unified field theories during this period was the famous astronomer and physicist Arthur Eddington. However, Eddington didn't focus on expanding the metric tensor; in fact, he didn't focus on the metric tensor at all. Instead, he focused on a different mathematical structure, known as the affine connection. In the end, Eddington didn't really get any closer than Weyl or Kaluza to building a viable unified field theory. But his work was important because his approach was quite different, and along with Kaluza, Eddington probably had the most influence on Einstein's later efforts to develop such a theory.

Einstein himself began to focus on unified field theories in the early 1920s. During this period, he remained enthusiastic about the work that had been done earlier by both Kaluza and Eddington. In fact, a lot of Einstein's early work in this area consisted of extending and building upon those earlier ideas.

Einstein was deeply enthusiastic about this program of exploration, although in this respect he was relatively isolated, since most physicists didn't share his excitement. Quantum physics was developing rapidly, and it occupied the bulk of the field's attention during this time.

Einstein was deeply unhappy with the developments in quantum theory as it moved away from predictive determinism, and his views about quantum mechanics also served to bolster his interest in unified field theories.

In addition to unifying general relativity with electromagnetism, Einstein hoped that a unified field theory might also somehow be able to restore determinism and scientific realism to the quantum world.

Yes, it's possible to have a unified field theory. James Clerk Maxwell built one when he combined electric and magnetic fields into electromagnetic theory.

Unified field theory is an attempt to unify the different fundamental forces and their relationships into a single theoretical framework. There have been many attempts at unified theories; some were successful, some failed.

James Clerk Maxwell was the first to create a unified field theory, combining electric and magnetic fields into electromagnetic theory.

The founding fathers of quantum theory are Niels Bohr, Max Planck, and, to a certain extent, Albert Einstein.

Excerpt from:

Early Research on Unified Field Theory - The Great Courses Daily News

Wolfram Physics Project Seeks Theory Of Everything; Is It Revelation Or Overstatement? – Hackaday

Stephen Wolfram, inventor of the Wolfram computational language and the Mathematica software, announced that he may have found a path to the holy grail of physics: a fundamental theory of everything. Even with the subjunctive "may," this is certainly a powerful statement that should be met with some skepticism.

What is considered a fundamental theory of physics? In our current understanding, there are four fundamental forces in nature: the electromagnetic force, the weak force, the strong force, and gravity. Currently, the description of these forces is divided into two parts: General Relativity (GR) describes the nature of gravity, which dominates physics on astronomical scales, while Quantum Field Theory (QFT) describes the other three forces and explains all of particle physics.

(Image caption: An overview of particle physics, by Headbomb [CC-BY-SA 3.0])

Up to now, it has not been possible to unify General Relativity and Quantum Field Theory, since they are formulated within different mathematical frameworks. In particular, treating gravity within the formalism of QFT leads to infinite terms that cannot be canceled out within the generally accepted framework of renormalization. The two most popular attempts to deliver a quantum mechanical description of gravity are String Theory and the lesser-known Loop Quantum Gravity. The former would be considered a fundamental theory that describes all forces in nature, while the latter limits itself to the description of gravity.

Apart from the incompatibility of QFT and GR, there are still several unsolved problems in particle physics, like the nature of dark matter and dark energy and the origin of neutrino masses. While these phenomena tell us that the current Standard Model of particle physics is incomplete, they might still be explainable within the current frameworks of QFT and GR. Of course, a fundamental theory also has to come up with a natural explanation for these outstanding issues.

Stephen Wolfram is best known for his work in computer science, but he actually started his career in physics. He received his PhD in theoretical particle physics at the age of 20 and was the youngest person in history to receive the prestigious MacArthur grant. However, he soon left physics to pursue his research into cellular automata, which led to the development of the Wolfram code. After founding his company Wolfram Research, he continued to develop the Wolfram computational language, which is the basis for the Wolfram Mathematica software. On the one hand, it is obvious that Wolfram is a very gifted man; on the other hand, people have sometimes criticized him for being an egomaniac, as his brand naming convention subtly suggests.

In 2002, Stephen Wolfram published his 1200-page mammoth book A New Kind of Science, in which he applied his research on cellular automata to physics. The main thesis of the book is that simple programs, in particular the Rule 110 cellular automaton, can generate very complex systems through the repetitive application of a simple rule. It further claims that these systems can describe all of the physical world and that the Universe itself is computational. The book received mixed reviews: while some found that it contains a cornucopia of ideas, others criticized it as arrogant and overstated. Among the most famous critics were Ray Kurzweil and Nobel laureate Steven Weinberg, the latter of whom wrote:

Wolfram [...] can't resist trying to apply his experience with digital computer programs to the laws of nature. [...] he concludes that the universe itself would then be an automaton, like a giant computer. It's possible, but I can't see any motivation for these speculations, except that this is the sort of system that Wolfram and others have become used to in their work on computers. So might a carpenter, looking at the moon, suppose that it is made of wood.

The Wolfram Physics Project is a continuation of the ideas formulated in A New Kind of Science and was born out of a collaboration with two young physicists who attended Wolfram's summer school. The main idea has not changed, i.e., that the Universe in all its complexity can be described by a computer algorithm that works by iteratively applying a simple rule. But Wolfram now recognizes that cellular automata may have been too simple to produce this kind of complexity; instead, he focuses on hypergraphs.

In mathematics, a graph consists of a set of elements that are related in pairs. When the order of the elements is taken into account, this is called a directed graph. The simplest example of a (directed) graph can be represented as a diagram, and one can then apply a rule to this graph as follows:

The rule states that wherever a relation matching {x,y} appears, it should be replaced by {{x,y},{y,z}}, where z is a new element. Applying this rule to the graph yields a new, larger graph (shown as a diagram in the original article).

By applying this rule iteratively, one ends up with more and more complicated graphs, as shown in the example here. One can also add complexity by allowing self-loops, rules involving copies of the same relation, or rules depending on multiple relations. Allowing relations between more than two elements moves us from graphs to hypergraphs.
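
The flavor of this iterative rewriting is easy to reproduce. Below is a minimal Python sketch (my own illustration, not Wolfram's code) of the rule described above, in which every directed relation (x, y) is kept and a new relation (y, z) to a fresh node z is added; the number of relations doubles with each step:

    from itertools import count

    fresh = count(start=100)  # assumed convention: fresh node labels from 100 up

    def step(relations):
        # Apply the rewrite rule {x,y} -> {{x,y},{y,z}} to every relation.
        out = []
        for (x, y) in relations:
            z = next(fresh)               # brand-new element for this match
            out.extend([(x, y), (y, z)])
        return out

    graph = [(1, 2)]
    for i in range(4):
        graph = step(graph)
        print(f"iteration {i + 1}: {len(graph)} relations")  # 2, 4, 8, 16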

How is this related to physics? Wolfram surmises that the Universe can be represented by an evolving hypergraph in which a position in space is defined by a node and time corresponds to the progressive updates. This introduces new physical concepts, e.g., that space and time are discrete rather than continuous. In this model, the quest for a fundamental theory corresponds to finding the right initial condition and underlying rule. Wolfram and his colleagues think they have already identified the right class of rules and have constructed models that reproduce some basic principles of general relativity and quantum mechanics.

A fundamental problem of the model is what Wolfram calls computational irreducibility, meaning that to calculate any state of the hypergraph, one has to go through all iterations starting from the initial condition. This would make it virtually impossible to run the computation long enough to test a model by comparing it to our current physical Universe.

Wolfram thinks that some basic principles, e.g., the dimensionality of space, can be deduced from the rules themselves. He also points out that although the generated model universes can be tested against observations, the framework itself is not amenable to experimental falsification. It is generally true that fundamental physics has long been decoupled from the scientific method of postulating hypotheses based on experimental observations; string theory has likewise been criticized for not making any testable predictions. But string theory historically developed from nuclear physics, while Wolfram does not give any motivation for choosing evolving hypergraphs as his framework. Some physicists, however, are thinking in similar directions, like Nobel laureate Gerard 't Hooft, who has recently published a cellular automaton interpretation of quantum mechanics. In addition, Wolfram's colleague Jonathan Gorard points out that their approach is a generalization of the spin networks used in Loop Quantum Gravity.

On his website, Wolfram invites other people to participate in the project, although it is somewhat vague how this will work. In general, they need people to work out the potential observable predictions of their model and its relation to other fundamental theories. If you want to dive into the topic in depth, there is a 448-page technical introduction on the website, and they have also recently started a series of livestreams in which they plan to release 400 hours of video material.

Wolfram's model certainly contains many valuable ideas and cannot simply be disregarded as crackpottery. Still, most mainstream physicists will probably be skeptical about the general idea of a discrete, computational Universe. The fact that Wolfram tends to overstate his findings and publishes through his own media channels instead of going through peer-reviewed physics journals does not earn him any extra credibility.

Go here to see the original:

Wolfram Physics Project Seeks Theory Of Everything; Is It Revelation Or Overstatement? - Hackaday

New Theory of Everything Unites Quantum Mechanics with Relativity … and Much More – Discover Magazine

One of the goals of modern physics is to determine the underlying rules that govern our reality. Indeed, one of the wonders of the universe is that just a few rules seem to describe many aspects of our world. What's more, scientists have found ways to combine these rules into simpler, more powerful ones.

That has tempted many thinkers to suggest there might be a single rule, or set of rules, from which all else emerges. This pursuit of a theory of everything has driven much of the thinking behind modern physics. We have built multibillion-dollar machines and observatories to test these ideas, generally with huge success.

Despite this success, one outstanding challenge is to unite two entirely different but fundamental pillars of modern science: the theory of relativity, which describes the universe on a large scale; and the theory of quantum mechanics, which describes it on the smallest scale.

Both theories almost perfectly explain the results of almost every experiment ever performed. And yet they are entirely at odds with each other. Numerous theorists have attempted a unification, but progress has been slow.

That sets the scene for the work of Stephen Wolfram, a physicist and computer scientist who has spent much of his career categorizing simple algorithms, called cellular automatons, and studying their properties. His main finding is that the simplest algorithms can produce huge complexity; some even generate randomness. And his main hypothesis is that the universe is governed by some subset of these algorithms.

In 2002, he published his results in a weighty tome called A New Kind of Science, which garnered mixed reviews and generally failed to make the impact Wolfram seemingly hoped for. Now he's back with another, similar idea and an even more ambitious claim.

Once again, Wolfram has studied the properties of simple algorithms, this time ones that are a little different from cellular automatons, but which he says are as minimal and structureless as possible. And, once again, he says that applying these simple algorithms repeatedly leads to models (toy universes, if you like) of huge complexity. But his new, sensational claim is that the laws of physics emerge from this complexity: that they are an emergent property of these toy universes.

Wolfram, who works with a couple of collaborators, describes how relativity and space-time curvature are an emergent property in these universes. He then describes how quantum mechanics is an emergent property of these same universes, when they are studied in a different way. By this way of thinking, relativity and quantum mechanics are different sides of the same coin. He goes on to show how they are intimately connected with another, increasingly influential and important idea in modern physics: computational complexity.

So his new theory of everything is that three pillars of modern physics (relativity, quantum mechanics, and computational complexity) are essentially the same thing viewed in different ways. "At this point I am certain that the basic framework we have is telling us fundamentally how physics works," says Wolfram. It's a jaw-dropping claim.

The first thing to acknowledge is that it is hard to develop any coherent theory that unites relativity with quantum mechanics, so if Wolfram's framework passes muster under peer review, it will be a tremendous achievement.

But there are also reasons to be cautious. First, it is not clear that Wolfram is submitting the work for formal peer review. If not, why not?

Second, the measure of any new theory is the testable predictions it makes that distinguish it from other theories. Numerous interesting ideas have fallen by the wayside because their predictions are the same as conventional or better-known theories.

Wolfram certainly says his approach leads to new predictions. "We've already got some good hints of bizarre new things that might be out there to look for," he says.

But whether they are testable is another matter, since he leaves out the details of how this could be done. For example, his theory suggests there is an elementary length in the universe of about 10^-93 meters, which is much smaller than the Planck length (10^-35 m), currently thought of as the smallest possible length.

Wolfram says this implies that the radius of an electron is about 10^-81 m. The current experimental evidence is that the radius is less than 10^-22 m.

His theory also predicts that mass is quantized into units about 10^36 times smaller than the mass of an electron.

Another prediction is that particles like electrons are not elementary at all, but conglomerations of much simpler elements. By his calculations, an electron should be composed of about 10^35 of these elements.

But much simpler particles made of fewer elements should exist, too. He calls these "oligons," and because they ought to exert a gravitational force, Wolfram suggests they make up the dark matter that astronomers think fills our universe but can't see.

Just how physicists can test these predictions isn't clear. But perhaps it's unfair to expect that level of detail at such an early stage. (Wolfram said he started working in earnest on this idea only in October of last year.)

One final point worth noting is Wolfram's place in the physics community. He is an outsider. That shouldn't matter, but it does.

A persistent criticism of A New Kind of Science was that it failed to adequately acknowledge the contributions of others working in the same field. This impression undoubtedly had a detrimental effect on the way Wolfram's ideas were received and how they have spread.

Will things be different this time? Much will depend on his interactions with the community. Formal peer review would be a good start. Wolfram has made some effort to acknowledge useful discussions he has had with other physicists, and he includes a long list of references (although roughly a quarter are to his own work or to his company, Wolfram Research). In particular, Wolfram acknowledges the work of Roger Penrose on combinatorial space-time in the early 1970s, which anticipated Wolfram's approach.

Like it or not, science is a social endeavor. Ideas spread through a network whose nodes are people. And if you're not part of the community and actively flout its norms, then it should not be a surprise if your work is ignored, collaborations do not flourish, or funding is hard to come by. And while theoretical work like Wolfram's can flourish with minimal funding, experimental work cannot.

Wolfram's work would certainly benefit from broad collaboration and development. Whether he will get it is in large part up to him.

Ref: A Class of Models with the Potential to Represent Fundamental Physics, arxiv.org/abs/2004.08210. For an informal introduction: Finally We May Have a Path to the Fundamental Theory of Physics and It's Beautiful.

Visit link:

New Theory of Everything Unites Quantum Mechanics with Relativity ... and Much More - Discover Magazine

Creator of Wolfram Alpha Has a Bold Plan to Find a New Fundamental Theory of Physics – ScienceAlert

Stephen Wolfram is a cult figure in programming and mathematics. He is the brains behind Wolfram Alpha, a website that tries to answer questions by using algorithms to sift through a massive database of information. He is also responsible for Mathematica, a computer system used by scientists the world over.

Last week, Wolfram launched a new venture: the Wolfram Physics Project, an ambitious attempt to develop a new physics of our Universe.

The new physics, he declares, is computational. The guiding idea is that everything can be boiled down to the application of simple rules to fundamental building blocks.

Why do we need such a theory? After all, we already have two extraordinarily successful physical theories.

These are general relativity (a theory of gravity and the large-scale structure of the Universe) and quantum mechanics (a theory of the basic constituents of matter, sub-atomic particles, and their interactions). Haven't we got physics licked?

Not quite. While we have an excellent theory of how gravity works for large objects, such as stars and planets and even people, we don't understand gravity at extremely high energies or for extremely small things.

General relativity "breaks down" when we try to extend it into the miniature realm where quantum mechanics rules. This has led to a quest for the holy grail of physics: a theory of quantum gravity, which would combine what we know from general relativity with what we know from quantum mechanics to produce an entirely new physical theory.

The current best approach we have to quantum gravity is string theory. This theory has been a work in progress for 50 years or so, and while it has achieved some success, there is growing dissatisfaction with it as an approach.

Wolfram is attempting to provide an alternative to string theory. He does so via a branch of mathematics called graph theory, which studies groups of points or nodes connected by lines or edges.

Think of a social networking platform. Start with one person: Betty. Next, add a simple rule: every person adds three friends. Apply the rule to Betty: now she has three friends. Apply the rule again to every person (including the one you started with, namely Betty). Keep applying the rule and, pretty soon, the network of friends forms a complex graph.

A simple rule applied multiple times creates a complex network of points and connections. (Author provided)
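The growth rule just described is simple enough to sketch in a few lines of code (the names and friend counts below are illustrative, following the article's example rather than any of Wolfram's actual rules):

```python
from itertools import count

ids = count(1)  # supplies names for new people

def grow(friends, per_person=3):
    """One round of the rule: every existing person adds three new friends."""
    for person in list(friends):  # snapshot, so newcomers join in next round
        for _ in range(per_person):
            newcomer = f"person-{next(ids)}"
            friends[newcomer] = {person}
            friends[person].add(newcomer)

network = {"Betty": set()}
for round_no in range(1, 4):
    grow(network)
    print(f"after round {round_no}: {len(network)} people")
```

Three rounds take the network from one person to 4, then 16, then 64: a single iterated rule quickly yields a large branching graph.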

Wolfram's proposal is that the universe can be modelled in much the same way. The goal of physics, he suggests, is to work out the rules that the universal graph obeys.

Key to his suggestion is that a suitably complicated graph looks like a geometry. For instance, imagine a cube and a graph that resembles it.

Above: In the same way that a collection of points and lines can approximate a solid cube, Wolfram argues that space itself may be a mesh that knits together a series of nodes. (Author provided)

Wolfram argues that extremely complex graphs resemble surfaces and volumes: add enough nodes and connect them with enough lines and you form a kind of mesh. He maintains that space itself can be thought of as a mesh that knits together a series of nodes in this fashion.
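One standard way to make "a graph resembles a space" precise (sketched here with an ordinary square mesh rather than one of Wolfram's hypergraphs) is to count how many nodes lie within graph distance r of a point: if that count grows like r^d, the mesh behaves like a d-dimensional space.

```python
import math
from collections import deque

def grid_graph(n):
    """Adjacency list for an n-by-n square mesh."""
    nodes = {(i, j) for i in range(n) for j in range(n)}
    return {
        (i, j): [(i + di, j + dj)
                 for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                 if (i + di, j + dj) in nodes]
        for (i, j) in nodes
    }

def ball_size(adj, start, r):
    """Count nodes within graph distance r of `start` (breadth-first search)."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if dist[node] == r:
            continue
        for nb in adj[node]:
            if nb not in dist:
                dist[nb] = dist[node] + 1
                queue.append(nb)
    return len(dist)

adj = grid_graph(61)
v1, v2 = ball_size(adj, (30, 30), 4), ball_size(adj, (30, 30), 12)
# Effective dimension = slope of log V(r) against log r; -> 2 for a flat mesh.
print(math.log(v2 / v1) / math.log(12 / 4))  # about 1.85 here, approaching 2
```

The estimate tends toward 2 as the mesh and radius grow, which is the sense in which a purely combinatorial object can "be" a two-dimensional surface.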

How can complicated meshes of nodes help with the project of reconciling general relativity and quantum mechanics? Well, quantum theory deals with discrete objects with discrete properties. General relativity, on the other hand, treats the universe as a continuum and gravity as a continuous force.

If we can build a theory that can do what general relativity does but that starts from discrete structures like graphs, then the prospects for reconciling general relativity and quantum mechanics start to look more promising.

If we can build a geometry that resembles the one given to us by general relativity using a discrete structure, then the prospects look even better.

Space may be a complex mesh of points connected by a simple rule that is iterated many times. (Wolfram Physics Project)

While Wolfram's project is promising, it does contain more than a hint of hubris. Wolfram is going up against the Einsteins and Hawkings of the world, and he's doing it without a life spent publishing in physics journals.

(He did publish several physics papers as a teenage prodigy, but that was 40 years ago. He also wrote the book A New Kind of Science, the spiritual predecessor of the Wolfram Physics Project.)

Moreover, his approach is not wholly original. It is similar to two existing approaches to quantum gravity, causal set theory and loop quantum gravity, neither of which gets much of a mention in Wolfram's grand designs.

Nonetheless, the project is notable for three reasons.

First, Wolfram has a broad audience and he will do a lot to popularise the approach that he advocates. Proponents of loop quantum gravity in particular lament the predominance of string theory within the physics community. Wolfram may help to underwrite a paradigm shift in physics.

Second, Wolfram provides a very careful overview of the project from the basic principles of graph theory up to general relativity. This will make it easier for individuals to get up to speed with the general approach and potentially make contributions of their own.

Third, the project is "open source", inviting contributions from citizen scientists.

If nothing else, this gives us all something to do at the moment (in between baking sourdough and playing Animal Crossing, that is).

Sam Baron, Associate Professor, Australian Catholic University.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Read the original:

Creator of Wolfram Alpha Has a Bold Plan to Find a New Fundamental Theory of Physics - ScienceAlert