Monthly Archives: February 2022

Black hole asymmetry puts quantum gravity to the test – Advanced Science News

Posted: February 1, 2022 at 2:25 am

Physicists hope to detect an asymmetry in spinning black holes using the planned space-based gravitational wave observatory LISA, which could finally provide evidence of quantum gravity.

Detecting gravitational waves using Earth-based observatories has become a powerful tool for studying the properties of black holes.

Recently, a group of theoretical physicists analyzed the process of gravitational wave emission and showed that the proposed space-based gravitational wave detector, LISA, will have enough sensitivity not only to detect more black hole mergers, but also to measure a particular feature of spinning black holes, an asymmetry between their northern and southern hemispheres, that could help elucidate a longstanding mystery.

The asymmetry would be imprinted in the precise waveform of the gravitational radiation; it is absent in general relativity, the most widely accepted theory of gravity, but is predicted by its quantum extension.

Quantum mechanics has been successfully used to study and explain the behavior of atoms, nuclei, and subatomic particles, while general relativity, which describes gravity as a curvature of spacetime, explains and predicts the dynamics of stars, galaxies, and the universe as a whole.

The unification of quantum mechanics and general relativity has been a formidable task, made difficult by the fact that the usual rules of quantization required to convert a classical theory into a quantum one don't appear to work for gravity.

Since the typical scales of physical systems described by these two theories differ by many orders of magnitude, there is usually no need to use both simultaneously to describe a given event or behavior. But sometimes this is necessary. For example, a quantum mechanical description of gravity is needed to understand what happens in the vicinity of black holes' surfaces and centers, as well as to describe the behavior of the universe during the first moments of its life.

So far, theoretical physicists have proposed a number of theories of quantum gravity, but experimental studies are extremely complicated. The problem is that the typical energy scale at which quantum gravity effects become important in the interactions of elementary particles, which scientists usually study to understand fundamental physics, is many orders of magnitude higher than the energies that can be achieved in colliders. For this reason, researchers have been seeking other ways to explore it.

The most promising systems in which these effects might be measurable are black holes, and physicists have already begun probing quantum effects in their physics by studying their membranes, which should exist above a black hole's surface according to some theories of quantum gravity.

This is done by analyzing gravitational waves emitted during black hole merger events, which physicists currently detect with the Earth-based gravitational wave observatories LIGO and Virgo. Gravitational waves, which were first theorized by Albert Einstein, are ripples in the fabric of spacetime, and black hole mergers are so important in this realm because they generate the most powerful gravitational radiation in the universe.

Recently, two scientists from the Institute for Theoretical Physics at KU Leuven have proposed that these waves can also be used to measure the asymmetry between the northern and southern hemispheres of spinning black holes, giving us another chance to study quantum gravity with gravitational waves.

The best way to identify this, the researchers say, is to analyze the amplitude and spectrum of gravitational waves emitted when a relatively light black hole, with a mass just a few times larger than the mass of our Sun, is consumed by a supermassive neighbor, a process that physicists call an extreme mass-ratio inspiral. The gravitational radiation generated in such events should carry an imprint of the aforementioned black hole asymmetry.

After analyzing the expected gravitational waves from such events, the research team unfortunately had to conclude that the change in the gravitational wave signal due to the black hole asymmetry is too small to be detectable by currently operational gravitational wave detectors on Earth.

This problem could be solved with the launch of the space-based gravitational wave observatory LISA, led by ESA with NASA participation, which will have a much greater sensitivity to tiny changes in the spacetime geometry caused by passing gravitational waves.

LISA consists of three spacecraft arranged in an equilateral triangle with sides that are 2.5 million km long (compared with LIGO's 4 km arms), and it moves along an Earth-like orbit around the Sun. When spacetime is distorted by celestial bodies in our Solar System, the distances between the spacecraft stay the same. But when a gravitational wave passes through LISA's orbit, it causes small oscillations in the lengths of the triangle's sides.

In order to detect these oscillations, laser beams are set up to travel between each pair of spacecraft. When the distances the light beams travel change, a pattern in the combined beam signal changes as well, signaling the detection of the wave. The amplitude of the distance change between the spacecraft is proportional to the distance itself, so the giant size of the space observatory will allow it to detect very small oscillations in the spacetime geometry, making it extremely sensitive. To put this into perspective, LISA is expected to be able to measure relative shifts in the position of each spacecraft that are smaller than the diameter of a hydrogen atom.
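To see why arm length matters, here is a minimal back-of-the-envelope sketch in Python (the strain value h below is an assumed, illustrative number, not one taken from the article): a gravitational wave of strain h changes an arm of length L by roughly h·L/2, so the same wave produces a far larger, more measurable displacement across LISA's 2.5-million-km arms than across LIGO's 4 km arms.

```python
# Back-of-the-envelope sketch; the strain h is an assumed value.
L_LISA = 2.5e9   # LISA arm length in meters (2.5 million km)
L_LIGO = 4.0e3   # LIGO arm length in meters
h = 1e-20        # illustrative gravitational wave strain

dL_LISA = h * L_LISA / 2   # arm-length change, roughly h * L / 2
dL_LIGO = h * L_LIGO / 2

print(f"LISA arm-length change: {dL_LISA:.1e} m")   # ~1.3e-11 m
print(f"LIGO arm-length change: {dL_LIGO:.1e} m")   # ~2.0e-17 m
print("hydrogen atom diameter:  ~1.1e-10 m")
```

Even with this optimistic strain, the displacement LISA must resolve is a fraction of a hydrogen atom's diameter, consistent with the sensitivity claim above.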

Its launch is scheduled for the mid-2030s, and hopefully it will allow us to put our theories of quantum gravity to the test.

Reference: Kwinten Fransen et al., "On Detecting Equatorial Symmetry Breaking with LISA," arXiv:2201.03569 (preprint).


Analysis of the effects of nonextensivity for a generalized dissipative system in the SU(1,1) coherent states | Scientific Reports – Nature.com

Posted: at 2:25 am

Basics of the general CK oscillator with nonextensivity

Various physical systems subjected to a friction-like force that is a linear function of velocity can be modeled by the formal Caldirola-Kanai (CK) oscillator. The Hamiltonian of the CK oscillator is given by [34,35]

$$\begin{aligned} \hat{H} = e^{-\gamma t}\, \frac{\hat{p}^2}{2m} + \frac{1}{2}\, e^{\gamma t} m \omega^2 \hat{x}^2, \end{aligned} \tag{1}$$

where \(\gamma\) is a damping constant. This Hamiltonian can be generalized by replacing the ordinary exponential function with a deformed one that is defined by [1,37]

$$\begin{aligned} \exp_q(y) = [1+(1-q)y]^{1/(1-q)}, \end{aligned} \tag{2}$$

with an auxiliary condition

$$\begin{aligned} 1+(1-q)y \ge 0, \end{aligned} \tag{3}$$

where q is a parameter indicating the degree of nonextensivity. This generalized function is known as the q-exponential and has its own merit in describing non-idealized dynamical systems. The characteristic behavior of the q-exponential function is shown in Fig. 1. In the field of thermostatistics, a generalization of the Gaussian distribution through the q-exponential is known as the Tsallis distribution, which fits well many physical systems whose behavior does not follow the usual Boltzmann-Gibbs (BG) statistical mechanics [38].

Figure 1: The q-exponential function for several different values of q.
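As a quick aside (ours, not the paper's), Eqs. (2) and (3) translate directly into a few lines of Python, and the q → 1 limit should recover the ordinary exponential:

```python
import numpy as np

def exp_q(y, q):
    """q-exponential of Eq. (2): [1 + (1-q)y]^(1/(1-q)), defined where
    1 + (1-q)y >= 0 (Eq. (3)); it reduces to exp(y) as q -> 1."""
    if np.isclose(q, 1.0):
        return np.exp(y)
    base = np.maximum(1.0 + (1.0 - q) * y, 0.0)  # enforce the domain of Eq. (3)
    return base ** (1.0 / (1.0 - q))

y = np.linspace(0.0, 2.0, 5)
print(exp_q(y, 0.999))  # nearly identical to np.exp(y)
print(np.exp(y))
```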

In terms of Eq. (2), we can express the generalized CK Hamiltonian in the form [1]

$$\begin{aligned} \hat{H}_q = \frac{\hat{p}^2}{2m \exp_q(\gamma t)} + \frac{1}{2} \exp_q(\gamma t)\, m \omega^2 \hat{x}^2. \end{aligned} \tag{4}$$

This Hamiltonian is Hermitian and, in the limit \(q \rightarrow 1\), it reduces to the ordinary CK Hamiltonian given in Eq. (1). Using Hamilton's equations in one dimension, we can derive the classical equation of motion that corresponds to Eq. (4) as

$$\begin{aligned} \ddot{x} + \frac{\gamma}{1+(1-q)\gamma t}\,\dot{x} + \omega^2 x = 0. \end{aligned} \tag{5}$$

In the extreme case where \(q \rightarrow 0\), Eq. (2) reduces to the linear function \(1+y\). Along with this, Eq. (5) reduces to

$$\begin{aligned} \ddot{x} + \frac{\gamma}{1+\gamma t}\,\dot{x} + \omega^2 x = 0. \end{aligned} \tag{6}$$

From a purely mathematical point of view, it is also possible to consider even the case where q is smaller than zero, based on the condition given in Eq. (3). However, in most actual nonextensive systems along this line, the value of q does not deviate much from unity, its standard value. So we refrain from treating such extreme cases throughout this research.
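For intuition, here is a short numerical sketch (our illustration, with assumed parameter values) that integrates the classical equation of motion, Eq. (5), and compares the ordinary CK case q = 1 against smaller q:

```python
import numpy as np
from scipy.integrate import solve_ivp

omega, gamma = 1.0, 0.1   # assumed, illustrative parameters

def eom(t, y, q):
    x, v = y
    # Damping coefficient gamma / (1 + (1-q) gamma t) from Eq. (5);
    # at q = 1 it is the constant gamma of the ordinary CK oscillator.
    damp = gamma if np.isclose(q, 1.0) else gamma / (1.0 + (1.0 - q) * gamma * t)
    return [v, -damp * v - omega**2 * x]

for q in (1.0, 0.5, 0.0):
    sol = solve_ivp(eom, (0.0, 60.0), [1.0, 0.0], args=(q,),
                    rtol=1e-9, atol=1e-12)
    print(f"q = {q}: x(60) = {sol.y[0, -1]:+.4f}")
```

Since the q < 1 damping coefficient weakens as t grows, the oscillation decays more slowly than in the ordinary CK case.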

In general, for time-dependent Hamiltonian systems, the energy operator is not always the same as the given Hamiltonian. The role of the Hamiltonian in this case is restricted: it plays only the role of a generator for the related classical equation of motion. From fundamental Hamiltonian dynamics, we can see that the energy operator of the generalized damped harmonic oscillator is given by [26,39]

$$\begin{aligned} \hat{E}_q = \hat{H}_q / \exp_q(\gamma t). \end{aligned} \tag{7}$$

Let us denote two linearly independent homogeneous real solutions of Eq. (5) as \(s_1(t)\) and \(s_2(t)\). Then, from a minor mathematical evaluation, we have [40,41]

$$\begin{aligned} s_1(t) = s_{0,1}\sqrt{\frac{\pi \omega}{2\gamma (1-q)}}\, [\exp_q(\gamma t)]^{-q/2}\, J_\nu\!\left( \frac{\omega}{(1-q)\gamma} + \omega t \right), \end{aligned} \tag{8}$$

$$\begin{aligned} s_2(t) = s_{0,2}\sqrt{\frac{\pi \omega}{2\gamma (1-q)}}\, [\exp_q(\gamma t)]^{-q/2}\, N_\nu\!\left( \frac{\omega}{(1-q)\gamma} + \omega t \right), \end{aligned} \tag{9}$$

where \(J_\nu\) and \(N_\nu\) are the Bessel functions of the first and second kind, \(s_{0,1}\) and \(s_{0,2}\) are constants with the dimension of position, and \(\nu = q/[2(1-q)]\). From Fig. 2, we see that the phases in the time evolutions of \(s_1(t)\) and \(s_2(t)\) differ depending on the value of q. Now we can represent the general solution of Eq. (5) in the form

$$\begin{aligned} x(t) = c_1 s_1(t) + c_2 s_2(t), \end{aligned} \tag{10}$$

where \(c_1\) and \(c_2\) are arbitrary real constants.

Figure 2: Time evolution of \(s_1(t)\) (A) and \(s_2(t)\) (B) for several different values of q. We used \(\omega = 1\), \(\gamma = 0.1\), and \(s_{0,1} = s_{0,2} = 0.1\).
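As a sanity check (ours, with assumed parameters), the closed form of Eq. (8) can be compared against a direct numerical integration of Eq. (5) started from the same initial conditions:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import jv

omega, gamma, q, s01 = 1.0, 0.1, 0.5, 0.1   # assumed parameters
nu = q / (2.0 * (1.0 - q))

def exp_q(y):
    return (1.0 + (1.0 - q) * y) ** (1.0 / (1.0 - q))

def s1(t):
    """Closed-form solution of Eq. (8)."""
    pref = s01 * np.sqrt(np.pi * omega / (2.0 * gamma * (1.0 - q)))
    arg = omega / ((1.0 - q) * gamma) + omega * t
    return pref * exp_q(gamma * t) ** (-q / 2.0) * jv(nu, arg)

def eom(t, y):
    x, v = y
    return [v, -gamma / (1.0 + (1.0 - q) * gamma * t) * v - omega**2 * x]

h = 1e-6
y0 = [s1(0.0), (s1(h) - s1(-h)) / (2 * h)]   # ds1/dt at t = 0, numerically
t_eval = np.linspace(0.0, 40.0, 9)
sol = solve_ivp(eom, (0.0, 40.0), y0, t_eval=t_eval, rtol=1e-11, atol=1e-13)
print(np.max(np.abs(sol.y[0] - s1(t_eval))))   # tiny residual
```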

We introduce another time function s(t) that will be used later as

$$\begin{aligned} s(t) = \sqrt{s_1^2(t)+s_2^2(t)}. \end{aligned} \tag{11}$$

This satisfies the differential equation [42]

$$\begin{aligned} \ddot{s}(t) + \frac{\gamma}{1+(1-q)\gamma t}\,\dot{s}(t) + \omega^2 s(t) - \frac{\Omega^2}{[m\exp_q(\gamma t)]^2}\, \frac{1}{s^3(t)} = 0, \end{aligned} \tag{12}$$

where \(\Omega\) is a time-independent constant of the form

$$\begin{aligned} \Omega = m \exp_q(\gamma t)\, [s_1 \dot{s}_2 - \dot{s}_1 s_2]. \end{aligned} \tag{13}$$

By differentiating Eq. (13) with respect to time directly, we can readily confirm that \(\Omega\) does not vary in time.
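This invariance is easy to confirm numerically as well. The sketch below (ours, with assumed parameters) builds \(s_1\) and \(s_2\) from Eqs. (8) and (9) and evaluates Eq. (13) at several times:

```python
import numpy as np
from scipy.special import jv, yv

m, omega, gamma, q = 1.0, 1.0, 0.1, 0.5   # assumed parameters
s01 = s02 = 0.1
nu = q / (2.0 * (1.0 - q))
a = omega / ((1.0 - q) * gamma)

def exp_q(y):
    return (1.0 + (1.0 - q) * y) ** (1.0 / (1.0 - q))

def sol(t, s0, bessel):
    pref = s0 * np.sqrt(np.pi * omega / (2.0 * gamma * (1.0 - q)))
    return pref * exp_q(gamma * t) ** (-q / 2.0) * bessel(nu, a + omega * t)

s1 = lambda t: sol(t, s01, jv)   # Eq. (8)
s2 = lambda t: sol(t, s02, yv)   # Eq. (9)

def ddt(f, t, h=1e-6):
    return (f(t + h) - f(t - h)) / (2.0 * h)

for t in (0.0, 5.0, 20.0, 50.0):
    Omega = m * exp_q(gamma * t) * (s1(t) * ddt(s2, t) - ddt(s1, t) * s2(t))
    print(f"t = {t:5.1f}: Omega = {Omega:.8f}")   # same value at every t
```

For these inputs the printed constant works out to \(m\, s_{0,1} s_{0,2}\, \omega \approx 0.01\), the Wronskian-like invariant of Eq. (13).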

In accordance with the invariant operator theory, the invariant operator must satisfy the Liouville-von Neumann equation which is

$$\begin{aligned} \frac{d\hat{I}}{dt} = \frac{\partial \hat{I}}{\partial t} + \frac{1}{i\hbar}\, [\hat{I}, \hat{H}_q] = 0. \end{aligned} \tag{14}$$

A straightforward evaluation after substituting Eq. (4) into the above equation leads to [24,40]

$$\begin{aligned} \hat{I} = \hbar \Omega \left( \hat{b}^\dagger \hat{b} + \frac{1}{2}\right), \end{aligned} \tag{15}$$

where \(\hat{b}\) is a destruction operator defined as

$$\begin{aligned} \hat{b} = \sqrt{\frac{1}{2\hbar \Omega}} \left[ \left( \frac{\Omega}{s(t)} - i m \exp_q(\gamma t)\, \dot{s}(t) \right) \hat{x} + i s(t)\, \hat{p} \right], \end{aligned} \tag{16}$$

whereas its Hermitian adjoint \(\hat{b}^\dagger\) is a creation operator. If we take the limit \(\gamma \rightarrow 0\), Eq. (16) reduces to that of the simple harmonic oscillator. One can easily check that the boson commutation relation for ladder operators holds in this case: \([\hat{b}, \hat{b}^\dagger] = 1\). This consequence enables us to derive the eigenstates of \(\hat{I}\) in a conventional way.

The zero-point eigenstate \(|0\rangle\) is obtained from \(\hat{b}|0\rangle = 0\). The excited eigenstates \(|n\rangle\) are evaluated by acting with \(\hat{b}^\dagger\) on \(|0\rangle\) n times. The Fock state wave functions \(|\psi_n\rangle\) that satisfy the Schrödinger equation differ from the eigenstates of \(\hat{I}\) by only minor phase factors, which can be obtained from basic relations [24]. However, we are interested in the SU(1,1) coherent states rather than the Fock states in the present work.

The SU(1,1) generators are defined in terms of ladder operators, such that

$$\begin{aligned} \hat{\mathcal{K}}_0 = \frac{1}{2} \left( \hat{b}^\dagger \hat{b} + \frac{1}{2}\right), \end{aligned} \tag{17}$$

$$\begin{aligned} \hat{\mathcal{K}}_+ = \frac{1}{2}\, (\hat{b}^\dagger)^2, \end{aligned} \tag{18}$$

$$\begin{aligned} \hat{\mathcal{K}}_- = \frac{1}{2}\, \hat{b}^2. \end{aligned} \tag{19}$$

From the inverse representation of Eq. (16) together with its Hermitian adjoint \(\hat{b}^\dagger\), we can express \(\hat{x}\) and \(\hat{p}\) in terms of \(\hat{b}\) and \(\hat{b}^\dagger\). By combining the resultant expressions with Eqs. (17)-(19), we can also represent the canonical variables in terms of SU(1,1) generators as

$$\begin{aligned} \hat{x}^2 = \frac{\hbar s^2}{\Omega}\, (2\hat{\mathcal{K}}_0 + \hat{\mathcal{K}}_+ + \hat{\mathcal{K}}_-), \end{aligned} \tag{20}$$

$$\begin{aligned} \hat{p}^2 = \frac{\hbar}{s^2} \Bigg[ 2 \left( \Omega + \frac{[m\exp_q(\gamma t)]^2}{\Omega}\, s^2\dot{s}^2 \right) \hat{\mathcal{K}}_0 - \left( \sqrt{\Omega} - \frac{i m\exp_q(\gamma t)}{\sqrt{\Omega}}\, s\dot{s} \right)^2 \hat{\mathcal{K}}_+ - \left( \sqrt{\Omega} + \frac{i m\exp_q(\gamma t)}{\sqrt{\Omega}}\, s\dot{s} \right)^2 \hat{\mathcal{K}}_- \Bigg]. \end{aligned} \tag{21}$$

The substitution of the above equations into Eq.(4) leads to

$$\begin{aligned} \hat{H}_q = \delta_0(t)\, \hat{\mathcal{K}}_0 + \delta(t)\, \hat{\mathcal{K}}_+ + \delta^*(t)\, \hat{\mathcal{K}}_-, \end{aligned} \tag{22}$$

where

$$\begin{aligned} \delta_0(t) = \frac{\hbar}{s^2} \left( \frac{\Omega}{m\exp_q(\gamma t)} + \frac{1}{\Omega}\, m\exp_q(\gamma t)\, s^2 \dot{s}^2 \right) + \frac{\hbar}{\Omega}\, m\exp_q(\gamma t)\, \omega^2 s^2, \end{aligned} \tag{23}$$

$$\begin{aligned} \delta(t) = -\frac{\hbar}{2 m\exp_q(\gamma t)\, s^2} \left( \sqrt{\Omega} - i\, \frac{m\exp_q(\gamma t)\, s\dot{s}}{\sqrt{\Omega}} \right)^2 + \frac{\hbar}{2\Omega}\, m\exp_q(\gamma t)\, \omega^2 s^2. \end{aligned} \tag{24}$$

In accordance with Gerry's work (see Ref. 43), Eq. (22) belongs to a class of general Hamiltonians that preserve an arbitrary initial coherent state. In the next section, we will analyze the properties of nonextensivity associated with the SU(1,1) coherent states using the Hamiltonian in Eq. (22).

The SU(1,1) coherent states for the quantum harmonic oscillator belong to a dynamical group whose description is based on the SU(1,1) Lie algebraic formulation. The analytical representation of the SU(1,1) coherent states provides a natural description of the quantum-classical correspondence, which has an important meaning in theoretical physics. On the experimental side, optical interferometers, like radio interferometers that use four-wave mixers as a protocol for improving measurement accuracy, are characterized through the SU(1,1) Lie algebra [44,45].

According to the development of Perelomov [46], the SU(1,1) coherent states are defined by

$$\begin{aligned} |\tilde{\xi}; k\rangle = \hat{\mathcal{D}}(\beta)\, |\tilde{0}\rangle_k, \end{aligned} \tag{25}$$

where \(\hat{\mathcal{D}}(\beta)\) is the displacement operator, \(|\tilde{0}\rangle_k\) is the vacuum state in the damped harmonic oscillator, and k is the Bargmann index, whose allowed values are 1/4 and 3/4. The basis for the unitary space is the set of even boson-number states for \(k=1/4\), whereas it is the set of odd boson-number states for \(k=3/4\). Here, the displacement operator is given by

$$\begin{aligned} \hat{\mathcal{D}}(\beta) = \exp\left[ \frac{1}{2}\, (\beta^2 \hat{\mathcal{K}}_+ - \beta^{*2} \hat{\mathcal{K}}_-) \right] = e^{\tilde{\xi} \hat{\mathcal{K}}_+}\, \exp\{-2\ln[\cosh(|\beta|^2/2)]\, \hat{\mathcal{K}}_0\}\, e^{-\tilde{\xi}^* \hat{\mathcal{K}}_-}, \end{aligned} \tag{26}$$

where \(\beta\) is the eigenvalue of \(\hat{b}\) and \(\tilde{\xi}\) is an SU(1,1) coherent state parameter of the form

$$\begin{aligned} \tilde{\xi} = \frac{\beta^2}{|\beta|^2}\, \tanh(|\beta|^2/2). \end{aligned} \tag{27}$$

Equation (27) means that \(|\tilde{\xi}| < 1\). For \(k=3/4\), among the two allowed values, the resolution of the identity in Hilbert space is given by [47]

$$\begin{aligned} \int d\mu(\tilde{\xi}; k)\, |\tilde{\xi}; k\rangle \langle \tilde{\xi}; k| = \mathbf{1}, \end{aligned} \tag{28}$$

where \(d\mu(\tilde{\xi}; k) = [(2k-1)/\pi]\, d^2\tilde{\xi}/(1-|\tilde{\xi}|^2)^2\). More generally speaking, this resolution is valid for \(k > 1/2\). For the general case where k is an arbitrary value, the exact resolution is unknown. Brif et al. proposed a resolution of the identity in a weak sense in this context, which is applicable to both cases \(k > 1/2\) and \(k < 1/2\) [47]. In what follows, various characteristics of the damped harmonic oscillator with and without deformation in quantum physics, such as quantum correlation, phase coherence, and the squeezing effect, can be explained by means of the SU(1,1) Lie algebra and the coherent states associated with this algebra [48,49].

The expectation values of the SU(1,1) generators in the states \(|\tilde{\xi}; k\rangle\) are [50]

$$\begin{aligned} \langle \tilde{\xi}; k|\, \hat{\mathcal{K}}_0\, |\tilde{\xi}; k\rangle = k\, \frac{1+|\tilde{\xi}|^2}{1-|\tilde{\xi}|^2}, \end{aligned} \tag{29}$$

$$\begin{aligned} \langle \tilde{\xi}; k|\, \hat{\mathcal{K}}_+\, |\tilde{\xi}; k\rangle = \frac{2k\tilde{\xi}^*}{1-|\tilde{\xi}|^2}, \end{aligned} \tag{30}$$

$$\begin{aligned} \langle \tilde{\xi}; k|\, \hat{\mathcal{K}}_-\, |\tilde{\xi}; k\rangle = \frac{2k\tilde{\xi}}{1-|\tilde{\xi}|^2}. \end{aligned} \tag{31}$$

Using the above equations, the expectation values of the Hamiltonian given in Eq. (22) are easily identified as [50,51]

$$\begin{aligned} \mathcal{H}_{q,k} &= \langle \tilde{\xi}; k|\, \hat{H}_q\, |\tilde{\xi}; k\rangle \\ &= \frac{k}{1-|\tilde{\xi}|^2} \left\{ \delta_0(t)\,(1+|\tilde{\xi}|^2) + 2\,[\delta(t)\tilde{\xi}^* + \delta^*(t)\tilde{\xi}] \right\}. \end{aligned}$$
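As a small usage sketch (ours, with made-up inputs), Eqs. (29)-(31) combine into this expectation value in just a few lines; the result is real because \(\delta^*\tilde{\xi}\) is the conjugate of \(\delta\tilde{\xi}^*\):

```python
import numpy as np

def su11_expectations(xi, k):
    """Eqs. (29)-(31) for coherent-state parameter xi (|xi| < 1) and index k."""
    denom = 1.0 - abs(xi) ** 2
    K0 = k * (1.0 + abs(xi) ** 2) / denom
    Kp = 2.0 * k * np.conj(xi) / denom
    Km = 2.0 * k * xi / denom
    return K0, Kp, Km

def H_expectation(xi, k, delta0, delta):
    """<H_q> = delta0 <K0> + delta <K+> + delta* <K->, per Eq. (22)."""
    K0, Kp, Km = su11_expectations(xi, k)
    return (delta0 * K0 + delta * Kp + np.conj(delta) * Km).real

xi = 0.3 * np.exp(0.7j)   # assumed coherent-state parameter
print(H_expectation(xi, k=0.25, delta0=2.0, delta=0.5 + 0.2j))
```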


Two Rochester researchers named AAAS fellows : NewsCenter – University of Rochester

Posted: at 2:25 am

January 26, 2022

Two University of Rochester faculty members have been elected fellows of the American Association for the Advancement of Science (AAAS). Nicholas Bigelow, the Lee A. DuBridge Professor of Physics and a professor of optics, and Michael Scott, the Arthur Gould Yates Professor of Engineering and also a professor in and chair of the computer science department, are among 564 members of the association recognized this year for their scientifically or socially distinguished efforts on behalf of the advancement of science or its applications.

Bigelow has helped advance the understanding of quantum physics and quantum optics through his pioneering research on the interactions between light and matter. His lab uses laser light to cool atoms to nearly absolute zero temperatures to better manipulate and study them.

Bigelow's current projects include creating and manipulating Bose-Einstein condensates, a quantum state of matter made from an atomic gas cooled to temperatures close to absolute zero, and investigating the quantum nature of atom-photon interactions. This research has important applications in areas of quantum mechanics such as quantum computing and sensing. He is also director of the NASA-funded Consortium for Ultracold Atoms in Space and the principal investigator of cold atom experiments running aboard the International Space Station.

Bigelow joined the faculty of the University of Rochester in 1992 and served as chair of the Department of Physics and Astronomy from 2008 to 2014.

He has twice received the University's Society of Physics Students Award for Excellence in Undergraduate Teaching (in 1998 and 2006) and has held various positions in University governance and leadership, including serving as chair of the Board on Academic Honesty for the College from 1998 to 2004, chair of the University of Rochester Presidential Search Committee in 2004, co-chair of the University's Middle States Accreditation Committee, and chair of the Faculty Senate.

Bigelow is a fellow of the American Physical Society and of Optica (formerly OSA, or the Optical Society of America).

Scott's widely cited research focuses primarily on systems software for parallel and distributed computing, including developing new ways to share data among concurrent activities, to automate its movement and placement, and to protect it from accidental loss or corruption.

He is best known as a co-creator of the MCS mutual exclusion lock and as the author of Programming Language Pragmatics, one of the definitive and most widely used textbooks on programming language design and implementation. Several algorithms from Scott's research group have been incorporated into the standard library of the Java programming language.

He is a fellow of the Association for Computing Machinery (ACM) and the Institute of Electrical and Electronics Engineers (IEEE). In 2006, he shared the Edsger W. Dijkstra Prize in Distributed Computing.

Scott, who joined the faculty in 1985, also chaired the Department of Computer Science from 1996 to 1999 and was interim chair for six months in 2007 and again in 2017. He received the University's Robert and Pamela Goergen Award for Distinguished Achievement and Artistry in Undergraduate Teaching in 2001, the William H. Riker Award for Graduate Teaching in 2020, and the Lifetime Achievement Award from the Hajim School of Engineering & Applied Sciences in 2018.

He has played an active role in University governance, including serving as co-chair of the Faculty Advisory Committee for the presidential search in 2018.


Why the Classical Argument Against Free Will Is a Failure – The MIT Press Reader

Posted: at 2:25 am

Despite bold philosophical and scientific claims, there's still no good reason to doubt the existence of free will.

In the last several years, a number of prominent scientists have claimed that we have good scientific reason to believe that there's no such thing as free will, that free will is an illusion. If this were true, it would be less than splendid. And it would be surprising, too, because it really seems like we have free will. It seems that what we do from moment to moment is determined by conscious decisions that we freely make.

We need to look very closely at the arguments that these scientists are putting forward to determine whether they really give us good reason to abandon our belief in free will. But before we do that, it would behoove us to have a look at a much older argument against free will, an argument that's been around for centuries.

The older argument against free will is based on the assumption that determinism is true. Determinism is the view that every physical event is completely caused by prior events together with the laws of nature. Or, to put the point differently, it's the view that every event has a cause that makes it happen in the one and only way that it could have happened.

If determinism is true, then as soon as the Big Bang took place 13 billion years ago, the entire history of the universe was already settled. Every event that's ever occurred was already predetermined before it occurred. And this includes human decisions. If determinism is true, then everything you've ever done, every choice you've ever made, was already predetermined before our solar system even existed. And if this is true, then it has obvious implications for free will.

Suppose that you're in an ice cream parlor, waiting in line, trying to decide whether to order chocolate or vanilla ice cream. And suppose that when you get to the front of the line, you decide to order chocolate. Was this choice a product of your free will? Well, if determinism is true, then your choice was completely caused by prior events. The immediate causes of the decision were neural events that occurred in your brain just prior to your choice. But, of course, if determinism is true, then those neural events that caused your decision had physical causes as well; they were caused by even earlier events, events that occurred just before they did. And so on, stretching back into the past. We can follow this back to when you were a baby, to the very first events of your life. In fact, we can keep going back before that, because if determinism is true, then those first events were also caused by prior events. We can keep going back to events that occurred before you were even conceived, to events involving your mother and father and a bottle of Chianti.

So if determinism is true, then it was already settled before you were born that you were going to order chocolate ice cream when you got to the front of the line. And, of course, the same can be said about all of our decisions, and it seems to follow from this that human beings do not have free will.

Let's call this the classical argument against free will. It proceeds by assuming that determinism is true and arguing from there that we don't have free will.

There's a big problem with the classical argument against free will. It just assumes that determinism is true. The idea behind the argument seems to be that determinism is just a commonsense truism. But it's actually not a commonsense truism. One of the main lessons of 20th-century physics is that we can't know by common sense, or by intuition, that determinism is true. Determinism is a controversial hypothesis about the workings of the physical world. We could only know that it's true by doing some high-level physics. Moreover, and this is another lesson of 20th-century physics, as of right now we don't have any good evidence for determinism. In other words, our best physical theories don't answer the question of whether determinism is true.

During the reign of classical physics (or Newtonian physics), it was widely believed that determinism was true. But in the late 19th and early 20th centuries, physicists started to discover some problems with Newton's theory, and it was eventually replaced with a new theory: quantum mechanics. (Actually, it was replaced by two new theories, namely quantum mechanics and relativity theory. But relativity theory isn't relevant to the topic of free will.) Quantum mechanics has several strange and interesting features, but the one that's relevant to free will is that this new theory contains laws that are probabilistic rather than deterministic. We can understand what this means very easily. Roughly speaking, deterministic laws of nature look like this:

If you have a physical system in state S, and if you perform experiment E on that system, then you will get outcome O.

But quantum physics contains probabilistic laws that look like this:

If you have a physical system in state S, and if you perform experiment E on that system, then there are two different possible outcomes, namely O1 and O2; moreover, there's a 50 percent chance that you'll get outcome O1 and a 50 percent chance that you'll get outcome O2.

It's important to notice what follows from this. Suppose that we take a physical system, put it into state S, and perform experiment E on it. Now suppose that when we perform this experiment, we get outcome O1. Finally, suppose we ask the following question: Why did we get outcome O1 instead of O2? The important point to notice is that quantum mechanics doesn't answer this question. It doesn't give us any explanation at all for why we got outcome O1 instead of O2. In other words, as far as quantum mechanics is concerned, it could be that nothing caused us to get result O1; it could be that this just happened.

Now, Einstein famously thought that this couldn't be the whole story. You've probably heard that he once said that God doesn't play dice with the universe. What he meant when he said this was that the fundamental laws of nature can't be probabilistic. The fundamental laws, Einstein thought, have to tell us what will happen next, not what will probably happen, or what might happen. So Einstein thought that there had to be a hidden layer of reality, below the quantum level, and that if we could find this hidden layer, we could get rid of the probabilistic laws of quantum mechanics and replace them with deterministic laws, laws that tell us what will happen next, not just what will probably happen next. And, of course, if we could do this, if we could find this hidden layer of reality and these deterministic laws of nature, then we would be able to explain why we got outcome O1 instead of O2.

But a lot of other physicists, most notably Werner Heisenberg and Niels Bohr, disagreed with Einstein. They thought that the quantum layer of reality was the bottom layer. And they thought that the fundamental laws of nature, or at any rate some of those laws, were probabilistic laws. But if this is right, then it means that at least some physical events aren't deterministically caused by prior events. It means that some physical events just happen. For instance, if Heisenberg and Bohr are right, then nothing caused us to get outcome O1 instead of O2; there was no reason why this happened; it just did.

The debate between Einstein on the one hand and Heisenberg and Bohr on the other is crucially important to our discussion. Einstein is a determinist. If he's right, then every physical event is predetermined, or in other words, completely caused by prior events. But if Heisenberg and Bohr are right, then determinism is false. On their view, not every event is predetermined by the past and the laws of nature; some things just happen, for no reason at all. In other words, if Heisenberg and Bohr are right, then indeterminism is true.

And here's the really important point for us. The debate between determinists like Einstein and indeterminists like Heisenberg and Bohr has never been settled. We don't have any good evidence for either view. Quantum mechanics is still our best theory of the subatomic world, but we just don't know whether there's another layer of reality beneath the quantum layer. And so we don't know whether all physical events are completely caused by prior events. In other words, we don't know whether determinism or indeterminism is true. Future physicists might be able to settle this question, but as of right now, we don't know the answer.

But now notice that if we don't know whether determinism is true or false, then this completely undermines the classical argument against free will. That argument just assumed that determinism is true. But we now know that there is no good reason to believe this. The question of whether determinism is true is an open question for physicists. So the classical argument against free will is a failure; it doesn't give us any good reason to conclude that we don't have free will.

Despite the failure of the classical argument, the enemies of free will are undeterred. They still think there's a powerful argument to be made against free will. In fact, they think there are two such arguments. Both of these arguments can be thought of as attempts to fix the classical argument, but they do this in completely different ways.

The first new-and-improved argument against free will, which is a scientific argument, starts with the observation that it doesn't matter whether the full-blown hypothesis of determinism is true, because it doesn't matter whether all events are predetermined by prior events. All that matters is whether our decisions are predetermined by prior events. And the central claim of the first new-and-improved argument against free will is that we have good evidence (from studies performed by psychologists and neuroscientists) for thinking that, in fact, our decisions are predetermined by prior events.

The second new-and-improved argument against free will, which is a philosophical argument rather than a scientific one, relies on the claim that it doesn't matter whether determinism is true, because indeterminism is just as incompatible with free will as determinism is. The argument for this is based on the claim that if our decisions aren't determined, then they aren't caused by anything, which means that they occur randomly. And the central claim of the second new-and-improved argument against free will is that if our decisions occur randomly, then they just happen to us, and so they're not the products of our free will.

My own view is that neither of these new-and-improved arguments succeeds in showing that we don't have free will. But it takes a lot of work to undermine these two arguments. In order to undermine the scientific argument, we need to explain why the relevant psychological and neuroscientific studies don't in fact show that we don't have free will. And in order to undermine the philosophical argument, we need to explain how a decision could be the product of someone's free will, how the outcome of the decision could be under the given person's control, even if the decision wasn't caused by anything.

So, yes, this would all take a lot of work. Maybe I should write a book about it.

Mark Balaguer is Professor in the Department of Philosophy at California State University, Los Angeles. He is the author of several books, including Free Will, from which this article is adapted.


Research Assistant, Experimentation, Centre for Quantum Technologies job with NATIONAL UNIVERSITY OF SINGAPORE | 279321 – Times Higher Education (THE)

Posted: at 2:25 am

About the Centre for Quantum Technologies

The Centre for Quantum Technologies (CQT) is a research centre of excellence in Singapore. It brings together physicists, computer scientists and engineers to do basic research on quantum physics and to build devices based on quantum phenomena. Experts in this new discipline of quantum technologies are applying their discoveries in computing, communications, and sensing.

CQT is hosted by the National University of Singapore and also has staff at Nanyang Technological University. With some 180 researchers and students, it offers a friendly and international work environment.

Learn more about CQT at www.quantumlah.org.

Job Description

The candidate will help the research team working on quantum technologies with experiment preparation and support for the current research efforts. This includes data analysis, programming, and CAD design. The candidate will be embedded in the research group and will help with the day-to-day experimental work.

Job Requirements

Additional Information

At NUS, the health and safety of our staff and students is one of our utmost priorities, and COVID-19 vaccination supports our commitment to ensure the safety of our community and to make NUS as safe and welcoming as possible. Many of our roles require a significant amount of physical interaction with students, staff, and members of the public. Even for job roles that can be performed remotely, there will be instances where on-campus presence is required.

With effect from 15 January 2022, based on Singapore's legal requirements, unvaccinated workers will not be able to work on NUS premises. As such, we regret to inform applicants that they need to be fully COVID-19 vaccinated for successful employment with NUS.

See MOM's updated advisory on COVID-19 vaccination at the workplace, subject to changes in accordance with the national COVID-19 measures.


Why is Silicon Valley still waiting for the next big thing? – The Straits Times

Posted: at 2:25 am

NEW YORK (NYTIMES) - In the autumn of 2019, Google told the world it had reached "quantum supremacy". It was a significant scientific milestone that some compared to the first flight at Kitty Hawk.

Harnessing the mysterious powers of quantum mechanics, Google had built a computer that needed only 3 minutes and 20 seconds to perform a calculation that normal computers could not complete in 10,000 years.

But more than two years after Google's announcement, the world is still waiting for a quantum computer that actually does something useful. And it will most likely wait much longer. The world is also waiting for self-driving cars, flying cars, advanced artificial intelligence and brain implants that will let you control your computing devices using nothing but your thoughts.

Silicon Valley's hype machine has long been accused of churning ahead of reality. But in recent years, the tech industry's critics have noticed that its biggest promises - the ideas that really could change the world - seem farther and farther on the horizon. The great wealth generated by the industry in recent years has generally been thanks to ideas, like the iPhone and mobile apps, that arrived years ago.

Have the big thinkers of tech lost their mojo?

The answer, those big thinkers are quick to respond, is absolutely not. But the projects they are tackling are far more difficult than building a new app or disrupting another ageing industry. And if you look around, the tools that have helped you cope with almost two years of a pandemic - the home computers, the videoconferencing services and Wi-Fi, even the technology that aided researchers in the development of vaccines - have shown the industry has not exactly lost a step.

"Imagine the economic impact of the pandemic had there not been the infrastructure - the hardware and the software - that allowed so many white-collar workers to work from home and so many other parts of the economy to be conducted in a digitally mediated way," said Professor Margaret O'Mara from the University of Washington who specialises in the history of Silicon Valley.

As for the next big thing, the big thinkers say, give it time. Take quantum computing. Dr Jake Taylor, who oversaw quantum computing efforts for the White House and is now chief science officer at quantum start-up Riverlane, said building a quantum computer might be the hardest task ever undertaken. This is a machine that defies the physics of everyday life.

A quantum computer relies on the strange ways that some objects behave at the sub-atomic level or when exposed to extreme cold, like metal chilled to nearly 460 degrees below zero. If scientists merely try to read information from these quantum systems, they tend to break.

While building a quantum computer, Dr Taylor said, "you are constantly working against the fundamental tendency of nature".

The most important tech advances of the past few decades - the microchip, the Internet, the mouse-driven computer, the smartphone - were not defying physics. And they were allowed to gestate for years, even decades, inside government agencies and corporate research labs before ultimately reaching mass adoption.

"The age of mobile and cloud computing has created so many new business opportunities," Prof O'Mara said. "But now there are trickier problems."

Still, the loudest voices in Silicon Valley often discuss those trickier problems as if they were just another smartphone app. That can inflate expectations.


Our Universe is normal! Its biggest anomaly, the CMB cold spot, is now explained – Big Think

Posted: at 2:25 am

Ever since the discovery of the Cosmic Microwave Background (CMB) nearly 60 years ago, scientists have been searching for a hint, any hint, of a crack in the façade of the hot Big Bang. At every step along the way, as our instruments became more sensitive and our observational reach extended farther than ever before, the Big Bang's predictions were borne out in spectacular fashion, one after another.

The Universe's expansion, and how that expansion changed over time, was measured and found to be precisely consistent with the expanding Universe predicted by physical cosmology. The spectrum of the CMB was measured, confirming it was the most perfect blackbody ever seen in the Universe. The initial cosmic abundances of the light elements and their isotopes were determined and found to be in direct agreement with the predictions of Big Bang nucleosynthesis. And the formation of large-scale structure and the growth of the cosmic web matched the Big Bang's predictions without exception.

But with the launches of WMAP and Planck, the small-scale imperfections in the CMB were measured, and one anomaly stood out: a cold spot that simply couldn't be explained based on the Universe we knew. That mystery may finally be solved, as the culprit has been identified at long last: the largest supervoid in the nearby Universe. If this research holds up, it teaches us that our Universe is normal after all, and that the CMB cold spot isn't an anomaly at all.

The fact that the CMB is so perfect is, itself, a modern wonder of the Universe. Everywhere we look, in all directions, it's plain to see just how different the Universe is from place to place. Some regions of space are extremely rich in structure, with scores, hundreds, or even thousands of large galaxies all collected into the same gravitationally bound structure. Other locations still have galaxies, but they're relatively sparsely located: in small groupings and collections scattered about through space. Still other places have only isolated galaxies, while in the least dense locations, there are no galaxies to be found at all over volumes that span tens or even hundreds of millions of light-years on a side.

And yet, the theory of the Big Bang comes along with an inextricable prediction: that in the earliest stages of the hot Big Bang, the Universe must have been both isotropic, or the same in all directions, and homogeneous, or the same in all locations, to a tremendous degree of precision. It can only come into existence with tiny, minuscule imperfections, or regions of slightly greater-or-lesser density than average. It's only because of the tremendous amount of cosmic time that passes and the relentlessly attractive nature of the gravitational force that we have a rich, structure-filled Universe today.

The Cosmic Microwave Background was discovered back in the mid-1960s, and the early goals were to:

Over time, we were able to refine our measurements. Initially, the CMB was announced to be at 3.5 K, which then was revised to 3 K, then 2.7 K, and a little later, a third significant figure was added: 2.73 K. In the mid-to-late 1970s, a small, 1-part-in-800 imperfection was discovered: an artifact of our own motion through the Universe.

It wasn't until the 1990s that the first primordial imperfections were found, coming in at about the 1-part-in-30,000 level. At last, we had the observational evidence to not only confirm a Big Bang-consistent origin for the CMB, but to measure what sort of imperfections the Universe itself began with.

You see, the hot Big Bang, although it was the beginning of our observable Universe as we know it, wasn't the very beginning of everything. There's a theory that's been around since the early 1980s, cosmic inflation, that posits a set of properties that the Universe possessed prior to the start of the hot Big Bang. According to inflation:

The only reason the Universe isn't perfectly, absolutely uniform everywhere is that the tiny fluctuations inherent to quantum physics, during this epoch of rapid expansion, get stretched across the Universe, creating the overdense and underdense seeds of structure. From these initial seed fluctuations, the entire large-scale structure of the Universe can arise.

According to the theory of inflation, there should be a very specific set of fluctuations that the Universe starts with at the onset of the hot Big Bang. In particular:

All of these predictions have since been borne out and confirmed by observations, some to within the limits of our measurement precision and others quite spectacularly.

However, it's always worth looking for anomalies, as no matter how thoroughly your predictions agree with reality, you must always push ahead, hoping to uncover something unexpected. After all, it's the only way you can discover something new: by looking as you've never looked before. If you have specific predictions and expectations for what your Universe is going to look like, then anything that defies your expectations is, at the very least, worth a second look.

Perhaps the most unusual remaining feature that we see in the microwave sky, once we subtract out the effect of the Milky Way galaxy, is the fact that there's a cold spot that doesn't align with these theoretical explanations. Once we've quantified the types and scales of temperature fluctuations that ought to exist, we can correlate them together and see how fluctuations on smaller and larger scales should be related.

In one particular region of space, we find that there's a very deep cold spot: about 70 microkelvin below the average temperature, on a relatively large angular scale. Moreover, that cold spot appears to be encircled by a hotter-than-average region, making it even more anomalous. To many, the cold spot in the CMB represented a potential challenge to inflation and the standard cosmological model, as it wouldn't make sense if the Universe was somehow born with this anomalously low-temperature region.
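For scale, a quick arithmetic sketch (using only the numbers quoted in this article plus the standard mean CMB temperature) shows how small these deviations are in fractional terms; what makes the cold spot anomalous is not its raw depth but its coherence over a large angular scale, ringed by a hotter region:

```python
T_cmb = 2.725          # mean CMB temperature in kelvin
dT_cold = 70e-6        # cold spot depth quoted above, in kelvin
typical = 1.0 / 30000  # typical primordial fluctuation level quoted above

print(f"cold spot fractional depth: {dT_cold / T_cmb:.1e}")  # ~2.6e-5
print(f"typical fluctuation level:  {typical:.1e}")          # ~3.3e-5
```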

It's important to recognize where these temperature fluctuations come from in the first place. The Universe, even at the start of the hot Big Bang, really is the exact same temperature everywhere. The thing that's different from location to location is the density of the Universe, and this is the component that has those 1-part-in-30,000 imperfections, as imprinted by inflation. The reason we observe the Universe to possess different temperatures in different regions of space is the phenomenon of gravitational redshift: matter curves space, and where space is more severely curved, light has to lose more energy to climb out of that gravitational potential well. In the astrophysics community, this is known as the Sachs-Wolfe effect, and it's the primary cause of the temperature differences we observe in the CMB.

But there's another, more subtle effect: the integrated Sachs-Wolfe effect. As structure forms in the Universe, as gravitation brings more and more mass together, as clusters grow and voids form, and as the relative ratios of radiation, matter, and dark energy change with respect to one another, the gravitational effects of traveling into a certain region of space don't necessarily equal the gravitational effects of traveling out of that same region of space later on. The Universe evolves, structures form and become more matter-rich in some areas and more matter-poor in others, and any light passing through those regions is affected.

Imagine, if you will, that you have two different regions of space: a large-scale overdensity (like a supercluster) and a large-scale underdensity (like a great cosmic void). Now imagine, just like in our real Universe, that you have some form of dark energy: a component of the Universe that behaves differently from matter and doesn't dilute in density as the Universe expands. Now, let's imagine what happens as a photon, traveling through space, encounters either a big overdensity or a big underdensity.

If something appears anomalously cold in the CMB, it could be because there's something wrong with our model of the Universe; that's of course the more interesting option. But it could also be, quite simply, because there's a large cosmic void in that location, and that void grew shallower as the light traveled through it because of dark energy.

Now, here's where the idea becomes testable: you can't point to a void that's too far away along the line of sight to explain it, because dark energy only becomes important for the Universe's expansion over the past ~6 billion years or so. If such a void exists along this line of sight, it must currently be closer than 7.5 billion light-years.

So, what do we find when we go out and look?

That's where the latest results from the Dark Energy Survey come in. Scientists were able to confirm that, yes, there is a supervoid there, and it may have a much higher-amplitude integrated Sachs-Wolfe effect than a typical underdensity does. While some underdensities were previously found at greater distances, some 6-10 billion light-years away, they were determined to account for no more than ~20% of the effect. However, a 2015 study revealed a nearby supervoid right in that precise direction: 1.9 billion light-years away and about 0.5-1.0 billion light-years across. The most recent study confirms this void and measures its properties, finds that it's the largest supervoid that has existed since the onset of dark energy's dominance, and suggests, but doesn't yet prove, that there is a causal relation between this late-time supervoid and the cold spot in the CMB.

There are many different ways to map out the large-scale structure of the Universe: from galaxy counts to gravitational lensing to the overall impact that the structure has on the background light emitted from various redshifts. In this particular case, it was the construction of a gravitational lensing map that confirmed the presence of this supervoid, which happens to be the emptiest large region of space in our nearby corner of the Universe. We cannot say for certain that this supervoid explains the full extent of the CMB cold spot, but it's looking more and more likely that, once the presence of the supervoid is taken into account, what remains is no more anomalous than any other typical region of the sky.

The way we'll tell for sure, of course, is through better, deeper, higher-resolution imaging of this relatively large region of the sky, which spans somewhere around 40 square degrees. With ESA's Euclid mission poised to launch just next year, in 2023, and with the Vera Rubin Observatory and NASA's Nancy Grace Roman Telescope expected to come online over the next few years, the critical data will soon be in our hands. After nearly two decades of wondering what could have caused the CMB cold spot, we finally have our answer: the largest supervoid in the nearby Universe. All we need is a robust confirmation of what the present data strongly indicates, and this will be yet another cosmic challenge that our standard cosmological model is thoroughly capable of rising to.


Here are the Top 10 science anniversaries of 2022 – Science News Magazine

Posted: at 2:25 am

Even though it's only even odds that 2022 will turn out to be less of a disaster than 2021 (or 2020), at least 2022 is the best recent year for compiling a Top 10 list of science anniversaries.

Curiously, many of those anniversaries are of deaths: the astronomer William Herschel, for instance, who died in 1822; Hermann Rorschach, Alexander Graham Bell, and the mathematician Sophie Bryant (all in 1922); and Louis Leakey (1972).

But there are also some notable firsts (the original slide rule, for instance) and births, including the scientist who illuminated how science could save society from devastating infectious diseases. Honorable mentions go to the birthdays of physicists Rudolf Clausius (200th), Leon Lederman (100th) and C.N. Yang (100th). They just missed edging out the oldest anniversary, a death from an earlier millennium:

Abu'l-Abbas al-Fadl ibn Hatim al-Nayrizi was a Persian mathematician and astronomer, probably born around A.D. 865 in the town of Nayriz (in present-day Iran), which is why he became known as al-Nayrizi. He died in 922 or thereabouts (close enough for Top 10 purposes). He got a job in Baghdad with the caliph al-Mu'tadid, writing treatises on math and weather, among other topics.

Unfortunately, many of al-Nayrizi's writings were long ago lost. But other writers mention his works and report that he was a master of astronomy and geometry. Among his surviving works is a translation of and commentary on Euclid's Elements. Al-Nayrizi also attempted a proof of Euclid's famous postulate about parallel lines never meeting. One of al-Nayrizi's treatises for the caliph discussed how to determine the distance to upright objects. Had golf been invented yet, the caliph would have used such knowledge to calculate the distance to the flagstick without need of a GPS app.

Lewis Fry Richardson, a mathematician who later turned to psychology, worked early in his career at England's National Peat Industries. He was given the task of calculating optimal designs of drainage systems for peat moss subjected to different amounts of rain. He worked out the equations and then realized they could be applied to other problems, such as predicting the weather.

In the years leading up to World War I, he worked on a book, to be titled Weather Prediction by Numerical Process. He showed how values for temperature, humidity, air pressure and other weather data from one day could be processed by his equations to make a forecast for the next day. He took a break to be an ambulance driver during the war and then finished his book, published in 1922.

As Science News-Letter reported that year, one U.S. Weather Bureau scientist believed the book to show that meteorology has become an exact science. Unfortunately, to make the next days forecast from one days data took Richardson six weeks of calculation time. Only decades later did modern electronic computers make the mathematics of weather forecasting practical, and sometimes useful.


William Oughtred, born in England in 1575, became a priest and part-time mathematician and tutor. In 1631 he wrote a book summarizing arithmetic and algebra, which became widely popular, later earning lavish praise from Isaac Newton.

Nine years before his book, Oughtred had designed the first slide rule. In 1614 John Napier had invented logarithms, showing how multiplication could be accomplished by addition. Six years later the astronomer Edmund Gunter had the bright idea of marking numbers on a straightedge proportional to their logarithms. Multiplication could then be performed by using a compass (the caliper kind, not for finding north) to find the answer by measuring the distances between the numbers to be multiplied.

In 1622, Oughtred had the even brighter idea of placing two such rulers next to each other. Sliding one along the other to properly position the numbers of interest allowed him to read the product of a multiplication right off one of the rulers. Oughtred later designed a circular slide rule, but one of his students claimed to have had that idea first, initiating a nasty priority dispute.
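The arithmetic Oughtred's rulers exploit is the logarithm identity log(ab) = log a + log b: adding two lengths proportional to logarithms multiplies the numbers they represent. A minimal sketch of what the sliding scales compute (the function name is ours, purely illustrative):

```python
import math

def slide_rule_product(a, b):
    """Multiply the way Oughtred's rulers do: lay off a length
    proportional to log(a), slide on a length proportional to log(b),
    and read off the number sitting at the combined length."""
    combined_length = math.log10(a) + math.log10(b)  # adding distances
    return 10 ** combined_length                     # reading the scale

print(slide_rule_product(2, 8))  # ~16.0 (a real rule reads two or three digits)
```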

Further advances in slide rule design, incorporating things like cubes and trigonometric functions, made slide rules the premier computing devices of the 19th and 20th centuries, until electronic calculators came along, sadly depriving slide rules of the opportunity to make it to age 400. But some people alive today once used slide rules, and probably still have one in a box somewhere.

Maria Goeppert was born in what is now Poland in 1906. Encouraged by her father, a university professor, to pursue higher education, Maria chose mathematics. But in the mid-1920s her fascination with a newfangled idea called quantum mechanics induced her to shift to physics. After earning her Ph.D., she married a chemist (Joseph Mayer) and moved to the United States. She was allowed to teach classes where her husband was on the faculty (first at Johns Hopkins, later at Columbia and then Chicago) but not offered a job of her own. She was free to pursue research projects, though, often in collaboration with her husband or other scientists, and she produced important work on many topics at the interface of quantum physics and chemistry.

She was a master of the math needed to understand spectroscopy; her studies of the light emitted by the newly discovered transuranic elements in the 1940s showed that they belonged in a chemical family related to the rare-earth elements, an essential clue to the proper positioning of the transuranics in the periodic table. After World War II, she began studying nuclear physics and soon deduced the existence of a shell-like structure for the arrangement of nucleons (protons and neutrons) in the atomic nucleus. Her findings complemented similar work by Hans Jensen, with whom she later collaborated in writing a book on the nuclear shell model. Jensen and Goeppert Mayer shared the 1963 Nobel Prize in physics for that work.

Her shell model research was aided by a suggestion from Enrico Fermi, the physicist famous for his work on the secret Manhattan Project to build the atomic bomb. That was only fair, as when Fermi disappeared from Columbia University in 1941 to work on the bomb, Goeppert Mayer was hurriedly recruited to teach his class. In 1960, Goeppert Mayer finally was awarded a full-time primetime job of her own at the University of California, San Diego, but shortly thereafter she suffered a stroke, limiting her ability to do research in the years before her death in 1972.

Niels Bohr was awarded the Nobel Prize in physics in 1922, the same year as the birth of his son Aage. Aage grew up surrounded by physicists (who came from around the world to study with his father) and so naturally became a physicist himself. During World War II, Aage accompanied his father to the United States to work on the Manhattan Project, afterwards returning to his native Denmark to earn his Ph.D. at the University of Copenhagen. During that time Aage turned his attention to a problem with the atomic nucleus.

His father's theory that a nucleus behaves much like a drop of liquid had been applied successfully in explaining nuclear fission. But more recent work by Maria Goeppert Mayer (remember her?) showed that nuclei had an inner shell-like structure, suggesting ordered arrangements of individual particles, not collective, liquidlike behavior. Aage developed a new theoretical view, showing that his father's view could be reconciled with Goeppert Mayer's shell model. He then worked on experiments that corroborated it and shared the 1975 physics Nobel for the discovery of the connection between collective motion and particle motion in atomic nuclei and the development of the theory of the structure of the atomic nucleus based on this connection.

Born July 22, 1822, to a family of farmers in what is now the Czech Republic, Johann Mendel preferred higher education to farming, enrolling in a philosophy program properly complemented with math and physics. When the time came to return home and take charge of the family farm, he opted instead to enter a monastery (where he adopted the monastic name Gregor). He did not particularly enjoy his priestly duties, though, so he got a job as a teacher, which required him to enter the University of Vienna for advanced science education. There, in addition to more math and physics, he encountered botany. Later he returned to the monastery, where he applied his botanical skills to investigating patterns in the physical features of successive generations of pea plants.

In 1866 he published results implying the existence of differentiating characters (now known as genes) that combined in different ways when transmitted by parents to offspring. Apparently nobody very astute read his paper, not even Charles Darwin, who would have been intrigued by Mendel's mention that his work was relevant to the history of the evolution of organic forms. Only at the dawn of the 20th century was Mendel's work translated into English and then recognized for its importance to heredity, evolution and biology in general.

Of all the robotic spacecraft launched from Earth into space, Pioneer 10 was truly the pioneer. It was the first craft to fly beyond the orbit of Mars and the first to exceed the distance of the solar system's outermost planet, Neptune. Launched March 2, 1972, Pioneer 10's mission was to visit Jupiter to take some cool snapshots of the giant planet and a few of its moons. Pioneer's escape velocity from Earth surpassed 51,000 kilometers per hour (about 32,000 miles per hour), at the time a solar system speed record for any flying machine or bird. After dodging asteroids (most of them anyway) on its journey, Pioneer 10 reached the solar system's largest planet in late 1973, passing within 131,000 kilometers (about 81,000 miles) on December 4.

Pioneer continued transmitting signals back to Earth until 1997, when budget cuts forced NASA to stop listening except for an occasional check-in. The very last signal came on January 23, 2003, from 7.6 billion miles away. By now Pioneer 10 is roughly 12 billion miles away, headed in the direction of the star Aldebaran. It will arrive in a mere 2 million years or so. If any Aldebaranians encountering it can decipher the sketches of a man and woman and the map revealing the point of origin, perhaps they will refuel it and send it back.

In a century of medical miracles, one of the earliest and most dramatic was the discovery of insulin for treating diabetes. Diabetes had been recognized as a serious disease in ancient times. By the 20th century, scientists suspected that the pancreas produced a substance that helped metabolize carbohydrates; a malfunctioning pancreas meant a person could not extract energy from carbohydrates in food, resulting in dangerously high blood sugar levels while depriving the body of needed energy. It was nearly always fatal in children, and adults diagnosed with diabetes could hope for only a few more years of life.

As Science News-Letter reported in 1922, diabetes ranked with cancer in fatality and incurability. But in that year, a young doctor reported success in treating diabetes with a substance secreted by the pancreas. That doctor, Frederick Banting, had tried the idea with dogs the year before and gave the first insulin injection to a human, a 14-year-old boy, in January 1922. Banting originally used insulin purified from animals; in the decades since, researchers have engineered more sophisticated forms for human use. But even with the animal insulin, success was so dramatic that Banting and his lab director John Macleod were awarded the Nobel Prize in physiology or medicine in 1923.

In its first year of providing news of science to the world, the organization then known as Science Service transmitted a weekly package of mimeographed pages (labeled Science News Bulletin) to newspapers and other media around the country. But soon other groups (such as libraries) as well as individuals began to request the package, and so Science Service initiated a new strategy with issue No. 50. On March 13, 1922, Science News-Letter was born, with a new masthead offering subscriptions for $5 per year, postpaid. Its first article: an account of a U.S. Department of Commerce report on the allocation of radio wavelengths. The report assured everybody that widespread use of radio for the broadcasting of public information and other matters of general interest would be forthcoming. In 1966 the magazine dropped Letter and became Science News, providing an excuse for another centennial celebration in 2066.

Born in France in December 1822, Louis Pasteur was not a precocious youth. His interests tended toward art, but later some inspiring lectures shifted his attention to chemistry, and he became one of the greatest chemists of all time. Also one of the greatest biologists. And although he received no medical education, he provided the foundation for modern medicine's ability to fight disease.

Pasteur's understanding of microorganisms led to the recognition of their capacity to damage human health. His tenacity in conducting rigorous experiments and his pugnacious public promotion of his findings established the germ theory of disease and encouraged new methods of hygiene. Time after time he was called on to devise solutions for perplexing problems facing various industries. He saved the silk industry. He showed how to prevent wine from going sour, and how to make milk safe to drink. He devised vaccines for various diseases, including one to cure rabies. No one person in history is more responsible than Pasteur for preserving human health and preventing unnecessary deaths. He is lucky he was born 200 years ago, though. If he were around today, he'd be getting death threats.

Originally posted here:

Here are the Top 10 science anniversaries of 2022 - Science News Magazine


Can Animal Behavior Simply Be Transferred Into the Genome? – Walter Bradley Center for Natural and Artificial Intelligence

Posted: at 2:24 am

Recently, geologist Casey Luskin interviewed Eric Cassell, author of Animal Algorithms: Evolution and the Mysterious Origin of Ingenious Instincts (2021), on one of the central mysteries of biology: How do animals know things that they can't have figured out on their own? Here's the first part, with transcript and notes. Below is the second part, which looks at some of the 'how' questions.

Eric Cassell is an expert in navigation systems, including GPS, with more than four decades of experience in systems engineering related to aircraft, navigation, and safety. He has long had an interest in animal navigation. His model for animal navigation is the natural algorithm: the animal's brain is programmed to enable navigation.

Here's Part II of our three-part series on 'Animal Algorithms Webinar: One of Nature's Biggest Mysteries' (January 20, 2022); a partial transcript and notes follow:

Casey Luskin: We already talked about this a little bit, the idea of path integration, where animals keep track of their compass heading and distance traveled so they can fly directly home but not necessarily along the path that they took. And you say that they can do this without necessarily following landmarks. You talk about honeybees and their ability to navigate using the sun's angle. So they can learn how to navigate using the sun's angle at different times of day to find their way home, regardless of what time it is. Or they can use polarized light by studying different regions of the sky to determine the position of the sun. (21:23)

This requires doing trigonometry, spherical geometry, and other complex math. They [insects] have a brain with a million neurons and I have supposedly a hundred billion neurons in my brain. And I don't think I can do those kinds of calculations in my brain. I find this all incredible.

There are cases that seem to require inherited know-how. How does a sea turtle innately know how to swim to its feeding area hundreds of miles through murky water and return to its exact nesting beach 35 years later? How do chicks of the Pacific golden plover find the Hawaiian Islands, mere specks in the trackless ocean, never having been there before? How do monarch butterflies in Canada get to the same trees in Mexico their great-grandparents wintered on? Some of these natural miracles cannot be dismissed easily with labels like 'map sense' or other terms of art.
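Path integration itself, as described at the top of this exchange, comes down to vector arithmetic: accumulate each outbound leg as a displacement computed from heading and distance, and the direct route home is the reverse of the running total. Here is a minimal sketch of that bookkeeping (the function name and conventions are ours for illustration, not from Cassell's book):

```python
import math

def home_vector(legs):
    """Path integration: sum the displacement of each (heading_deg, distance)
    leg of the outbound trip; the direct route home is the reverse of the sum.
    Headings are compass degrees (0 = north, 90 = east)."""
    x = y = 0.0
    for heading_deg, distance in legs:
        rad = math.radians(heading_deg)
        x += distance * math.sin(rad)  # eastward component
        y += distance * math.cos(rad)  # northward component
    home_heading = math.degrees(math.atan2(-x, -y)) % 360
    home_distance = math.hypot(x, y)
    return home_heading, home_distance

# A foraging loop: 100 m east, then 100 m north.
print(home_vector([(90, 100), (0, 100)]))  # ~(225.0, 141.4): head southwest
```

Whatever neural machinery implements it, the animal has to carry out something equivalent to this trigonometry on the fly.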

Casey Luskin: So the fact that these kinds of features evolved really just makes me wonder, how could they arise by an unguided, stepwise Darwinian process? I'd love to see a stepwise evolutionary explanation for this, if it exists. And I'm wondering, are you aware of attempts to explain behaviors like this through a standard, typical Darwinian model? (21:58)

Eric Cassell: The short answer is no. I have not come across anything in the literature about those kinds of behaviors and how they could have evolved. I think it's such a daunting task to try to explain how something as sophisticated as an algorithm, particularly a mathematical type of algorithm, could have evolved in the first place.

It has to be in the genome somehow. And then that information that's in the genome has to be encoded in a neural network when the brain develops, and then it all has to be run as the animal is performing the behavior. So there's a lot of unanswered questions about how all that takes place. (22:42)

Casey Luskin: Figure 3.3 [in the book] talks about the different components necessary for animal navigation and migration behavior to work. You've got to have navigation sensor physiology and a navigation algorithm. You've got to have destination location information, a migration decision algorithm, and migratory physiology to implement all of this. And if you're missing one of those components, one of those elements, then it doesn't work. Those five separate groups of genes, and as you put it, other genetic information in the genome, all have to be there in order for these navigation and migration algorithms to work. (23:36)

So let's talk about another example you give, the monarch butterfly, which in North America requires three generations for the migration to complete itself. And so that has to be genetically programmed because, obviously, the butterflies that are maybe in the middle of that migration pathway, how could they have learned where they're going? They weren't even alive when the migration started. So how did they know where to go? They've never been to the destination. To me that obviously implies, and I'm sure you argue this in the book very persuasively, that the information had to be pre-loaded into those organisms when they're born; you call it pre-loaded software. So where do they get the pre-loaded software that tells them where to go, and how does this evolve by an unguided Darwinian process? (24:22)

Eric Cassell: Again, that's a really difficult question that nobody has an answer for. There are some theories out there about how, in some cases, animals might have developed a behavior, or basically learned the behavior, and then somehow that behavior gets transferred into the genome. How that happens, that's a good question. It's a theory that I've seen people propose, but I don't understand how it could even work in reality, because you have a behavior that somehow then gets transmitted into the gametes and the genomes. But it's a serious proposal that a number of people believe in. (25:09)

Casey Luskin: It sounds very Lamarckian. So maybe there is some influence of, you know, inheritance of acquired characteristics going on here, but as you said, it's yet to be demonstrated. So these sound very mysterious at the present time. (26:05)

Note: Jean-Baptiste Lamarck (1744–1829) was a French evolutionary thinker who held that characteristics could be acquired during the lifetime of a life form and passed on to offspring. Although at one time widely dismissed, this mechanism of evolution is becoming more widely accepted in the form of epigenetics.

Casey Luskin: Maybe 100 years ago or 2000 years ago, humans navigated much differently than they do today. So how has technology changed the way we navigate? (26:28)

Eric Cassell: Fundamentally, animals are better navigators than humans. We're able to use that information and landmarks, but other than that, humans are very poor natural navigators, whereas all of these animals are actually expert navigators. They're all designed to perform accurate navigation, to suit their own purposes. (27:09)

It's only been within the last couple of hundred years that we've even developed any useful technology for navigation. We're basically just trying to catch up to what animals have been doing for a long time. (27:51)

Casey Luskin: I did not appreciate how important the sun is for human navigation till I moved to the Southern Hemisphere during my PhD. Obviously, if you're living in the Northern Hemisphere, which is where I grew up, the sun is always in the south. But when I moved to South Africa, the sun is always in the north. I lived just north of the university, and there were literally a couple of times when I would get in my car to drive home from school and start driving in the opposite direction, south, because in my mind I was orientating myself with the sun. I knew I was supposed to go north, and for me going north meant you drive away from the sun. I didn't even think about it. I did not appreciate how much, intuitively as a human being, I used the sun to navigate until the sun was in the wrong place and I was going in the wrong direction. (28:28)

You also talk about spider webs in your book, and they're probably one of the most famous examples of an amazing animal behavior. How do spiders produce silk, and what does the theory say about how spiders know instinctively how to produce a web? Are there evolutionary explanations for the origin of spiderwebs? If so, what do you think of them? (29:19)

Eric Cassell: The question about silk: it's a very complex material that involves a lot of proteins, and it's a very complex process to produce the material. Humans have been trying to duplicate [spider] silk artificially for a long time. Basically, we've never been able to do it because it's so complex. We have some materials that sort of approximate the composition of silk, but never really duplicate it. So that's one thing there. (30:00)

And the process that the spiders use to generate it is a complex process, also. There has been a lot of research into web designs and how they possibly could have evolved over time. But there are issues there as well because, for example, there are species of spiders that are completely unrelated, and yet produce the exact same web design. So how do you explain that? (30:42)

The typical Darwinian explanation is that it's convergent evolution, selection pressure, or some other vague term. But really, the origin of the webs, and then how spiders are able to manipulate them, is a complex behavior that's pretty sophisticated. (31:16)

Casey Luskin: I note that you provide a really striking quote in your book from Jerry Fodor and Massimo Piattelli-Palmarini from their book What Darwin Got Wrong (2010). They're talking about animal behavior and they say that, Such complex sequential, rigidly pre-programmed behavior could have gone wrong in many ways, at any one of its steps. And they say spiderwebs, bee foraging, as we saw above and many more, cannot be accounted for by means of optimizing physical, chemical, or geometric factors. (32:26)

They go on to say that, They can hardly be accounted for by gradual adaptation either. It's fair to acknowledge that, although we bet some naturalistic explanation will one day be found, we have no such explanation at present. If we insist that natural selection is the only way to try, we will never have one. (32:59)

These are two authors who describe themselves in their book as outright, card-carrying, signed-up, dyed-in-the-wool, no-holds-barred atheists. And yet they're saying that there is no Darwinian, natural selection-based explanation, and they're really doubtful there ever will be, for the origin of these complex behaviors. You also talk about a textbook that says, We still know little about the rate and type of evolutionary change experienced by behavioral traits. (33:20)

Eric Cassell: In my research in the literature, for the most part, there is only one particular type of behavior [for which] at least some theories have been proposed, and that concerns insect social behavior. The basic theory is that there are insects (ants, et cetera) that exhibit solitary behaviors. In other words, there's a difference between those that are social and those that are solitary. (34:14)

The theory is that when an animal transitions from a solitary lifestyle to a social lifestyle, it's just a matter of adding a few algorithms, if you will, a few steps to integrate that information into a social environment. Well, at first that sounds somewhat plausible, but the evidence really is not there that that's the case, for two reasons. One is that the social behaviors that these animals exhibit far exceed the behaviors of solitary animals. That's one thing. (34:47)

The other is that insect social behavior is one area that has seen quite a bit of research into the genomes. And what's been found is that the genomes of the social insects have undergone significant change when they transition from solitary to social. So there are literally hundreds of thousands of genetic changes that take place in these animals when they're social. How that could have happened in a step-by-step, linear Darwinian fashion is not very plausible. (35:28)

Casey Luskin: So, okay. Well, this will, I think, lead into my final question during the conversational part of the interview. It sounds like a lot of information goes into the origin of these animal behaviors. So how does information [figure into] the origin of these animal behaviors, and what is your view on what this implies for intelligent design? (36:18)

Eric Cassell: These behaviors, for the most part, are controlled by algorithms in one form or another. And to have an algorithm, you have to have the information. Where does the information come from that even defines the algorithm in the first place? So that's the part that's challenging. A lot of the research that's been done by the ID community tends to indicate that you really can't generate information through a random process, which is, you know, mutations and natural selection. (36:44)

It's just incapable of doing that. If you look at the work of design theorist William Dembski and some others regarding these No Free Lunch theorems, that's basically what they say. It's difficult to explain the origin of this kind of information through a purely random process. I think that's one of the biggest hurdles to overcome in trying to explain the origin of these kinds of behaviors. (37:24)

Next: Challenges from the audience, as well as challenges from nature

Here's the earlier portion of the episode, with transcript and notes.

Neuroscience mystery: How do tiny brains enable complex behavior? Eric Cassell notes that insects with brains of only a million neurons exhibit principles found only in the most advanced manmade navigation systems. How? Cassell argues in his recent book that an algorithm model is best suited to understanding the insect mind and that of many animals.

You may also wish to read: A navigator asks animals: How do you find your way? The results are amazing. Many life forms do math they know nothing about. The question Eric Cassell asks is: How, exactly, is so much information packed into a simple brain with so few neurons?

Go here to read the rest:
Can Animal Behavior Simply Be Transferred Into the Genome? - Walter Bradley Center for Natural and Artificial Intelligence


Genomics’ role beyond healthcare and medical research – BioNews

Posted: at 2:24 am

31 January 2022

The UK government has investigated possible future applications of genomics beyond healthcare, and the potential risks involved in its growing use, in an openly published report.

The technology to sequence the human genome has developed rapidly in recent years, from costing £4 billion twenty years ago to only around £800 today. Genome sequencing is already widely used in the UK to screen for genetic diseases. But in their Genomics Beyond Health report, the Government Office for Science highlights how growing access to genomics could extend its use beyond health, from DNA-based predictions of children's behavioural traits and educational achievement to an athlete's inherent capabilities. The report also indicates that while there are many benefits to this information, predictions based on genomics are open to misinterpretation, and they raise ethical questions surrounding discrimination based on DNA.

'Now is the time to consider what might be possible, and what actions government and the public could take to ensure the widespread application of genomics can occur in a way that protects and benefits us all', said Sir Patrick Vallance, UK Government Chief Scientific Advisor.

In their 198-page report, the authors outline how genomics can help determine certain disease risks, identify suspects at crime scenes, and develop crops resistant to pests and harsh climates.

But they point to several ethical and practical issues in the areas where genomics is heading next. Genome-based predictions of how well a child will perform at school could help tailor education to individual needs. But the authors note that other factors, such as parental education, currently predict academic performance much more accurately, yet there are no regulations in the UK to limit genomic testing marketed at parents.

Another possibility is the use of genomics in hiring to select workers with optimal health and the desired personality traits. The authors argue this would be inherently discriminatory and lack scientific grounding by disregarding environmental influence.

The report suggests that as genomic sequencing technologies become increasingly advanced and increase in use, more consideration should be given to policy and regulation. A structured framework governing how genomic information is collected and used could protect by law the privacy, anonymity, and security of the genome sequences of UK citizens.

'The use of genomic data outside the healthcare setting needs careful scrutiny, and safeguards are needed to protect the public from any potential misuse of their data', said Sarah Norcross, director of the Progress Educational Trust. 'This report must be acted on expeditiously, as genomics is such a fast-moving area.'

The report was produced together with thirty experts in science, technology and policy to provide a 'basis for discussion within government departments', helping the government engage with future issues before they arise.

More:
Genomics' role beyond healthcare and medical research - BioNews
