Newly discovered star takes just four years to orbit Sgr A*, our local supermassive black hole – Syfy

A star has been found with the shortest known orbit around Sgr A*, the supermassive black hole in the center of our galaxy: It takes just four years to orbit the behemoth once.

The star, called S4716, is part of a cluster of massive stars, discovered only relatively recently, centered on the black hole in the Milky Way's heart. Over time, as these stars move, their positions can be measured to determine their orbits, and from there key characteristics of the black hole itself can be found.

Sgr A*, you may recall, was the subject of a recent series of observations using radio telescopes across the world to get an image of the matter in an accretion disk swirling around it. That material is only about 60 million kilometers from the black hole, not much farther than Mercury is from the Sun. The stars in the cluster, however, are many billions of kilometers from it at their closest, and range as far as about a tenth of a light-year out, or roughly a trillion kilometers.

The individual stars in the S Cluster, as it's called, are hard to observe for many reasons. One is simply how close together they are and how far they are from Earth. Even with our biggest telescopes the stars can appear to overlap, causing confusion. There's also a lot of dust swirling around in the galactic center, dimming the view. That's why astronomers tend to use telescopes designed to see in infrared light, which can penetrate the dust and give us a clearer view. Even then, observations like these are hard.

Making things worse is S2, the brightest star in the cluster, which tends to swamp the light of nearby stars. So finding stars that are fainter and close to the black hole is tough.

Using various cameras on the immense Keck and Very Large Telescopes, a team of astronomers looked at data on Sgr A* taken over the past two decades [link to paper]. Using sophisticated techniques to clean and sharpen the images, they found a previously undiscovered star, which they call S4716, in images taken over 16 different observation periods. The star can be seen to move around the black hole on a decently elliptical orbit (with an eccentricity of about 0.75, for you geometry nerds) with a period of about 4 years.

It passes as close as 1.5 billion kilometers from Sgr A*, which is pretty close, about the distance of Saturn from the Sun, and gets about 10 billion kilometers out at its farthest, which is roughly twice the distance of Neptune from the Sun.

Given the huge gravity of the black hole, that means the star moves at a staggering 28 million kilometers per hour at closest approach: that's 2.6% the speed of light. Yegads.
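
As a quick sanity check (mine, not the paper's), here is a short Python sketch that recovers that pericenter speed from the period and eccentricity quoted above plus the black-hole mass given a few paragraphs below, using Kepler's third law and the vis-viva equation. The constants and input values are approximate and purely illustrative.

```python
import math

# Figures quoted in the article (approximate)
M_BH = 4.02e6 * 1.989e30      # Sgr A* mass in kg (~4 million solar masses)
P = 4.0 * 3.156e7             # orbital period in seconds (~4 years)
e = 0.75                      # orbital eccentricity

G = 6.674e-11                 # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8                   # speed of light, m/s

# Kepler's third law gives the semi-major axis from the period and central mass
a = (G * M_BH * P**2 / (4 * math.pi**2)) ** (1 / 3)

# The vis-viva equation gives the speed at pericenter, r = a(1 - e)
r_peri = a * (1 - e)
v_peri = math.sqrt(G * M_BH * (2 / r_peri - 1 / a))

print(f"pericenter speed ~ {v_peri * 3.6 / 1e6:.0f} million km/h "
      f"({100 * v_peri / c:.1f}% of the speed of light)")
# -> roughly 28 million km/h, about 2.6% of c, matching the figures above
```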

Amazingly, that's actually not the fastest star orbiting Sgr A*! A star found a few years ago, called S4714, is on a very elliptical orbit that accelerates it to a speed of about 85 million km/hr, or 8% the speed of light. That star's orbit stretches very long, taking it much farther out from the black hole, and its period is more like 12 years. So, for the moment, S4716 holds the record for the shortest and most compact orbit of any star around Sgr A*.

S4716 is a big star, about four times the mass of the Sun and some 130 times as luminous; good thing, or it would be impossible to spot. Its orbit depends on the mass of the black hole and its distance, so using the centuries-old equations for orbital motion the astronomers can calculate the mass of Sgr A*. They get a value of 4.023 ± 0.087 million times the Sun's mass, which is in line with previous measurements.

They can also get the distance to the black hole, and find it to be 26,170 ± 650 light-years from us. That too is consistent with previous measurements. That's nice to know. And these numbers are one reason astronomers are eager to spot these stars, as they can constrain what we know about the black hole.

The S Cluster itself is a mystery, too. It's not clear how it formed, since gas that close to the black hole would be heated too much to collapse and form massive stars. It's likely they formed farther out and dropped in closer to Sgr A* through close gravitational encounters with other stars, a process called mass segregation. The more stars found and analyzed in the cluster, the better we'll understand its history.

Observations like these are on the bleeding edge of what can currently be done. It's possible JWST will be able to help here; as an infrared telescope it can see past the dust, and its sharp vision may help with separating the mess of stars swirling in the galactic core.

The Milky Way's center is a maelstrom of gas, dust, stars, powerful magnetic fields, and of course a monster black hole right at the very heart of it all. But, with persistence and patience, order is being teased out of the chaos.


Astronomers observed the slow spin of early galaxy – Tech Explorist

Using data from the Atacama Large Millimeter/submillimeter Array (ALMA), an international team of researchers observed one of the most distant known galaxies in the very earliest years of the Universe. The observations reveal that the early galaxy MACS1149-JD1 (JD1) has a slow spin. The galaxy appears to be rotating at less than a quarter of the speed of the Milky Way today.

The findings come from a new study involving University of Cambridge researchers.

MACS1149-JD1 (also known as PCB2012 3020) is one of the most distant known objects. Light from the young galaxy, captured by the orbiting observatories, first shone when our 13.7-billion-year-old Universe was just 500 million years old, i.e., 4% of its present age.

Researchers detected subtle variations in the wavelengths of the light coming from the galaxy, which signified that parts of the galaxy were moving away from us while other parts were moving toward us. Based on these variations, researchers concluded that the galaxy was disc-shaped and rotating at 50 kilometers a second. In comparison, the Milky Way today rotates at a speed of 220 kilometers per second.

Based on its size and rotational speed, researchers could infer its mass. From this, they calculated the galaxy's age: it is almost 300 million years old, having formed about 250 million years after the Big Bang.

Co-author Professor Richard Ellis from University College London (UCL) said, "This is by far the furthest back in time we have been able to detect a galaxy's spin. It allows us to chart the development of rotating galaxies over 96% of cosmic history, with rotations that started slowly but became more rapid as the Universe aged."

"These measurements support our earlier findings that this galaxy is well-established and likely formed about 250 million years after the Big Bang. On a cosmic time scale, we see it rotating not long after stars first lit up the Universe."

Co-author Dr. Nicolas Laporte from Cambridge's Kavli Institute for Cosmology said, "Our findings shed light on how galaxies evolved in the early Universe. We see that a galactic disk has developed 300 million years after massive molecular clouds condensed and fused into stars, and the galaxy has acquired a shape and rotation."

Co-author Professor Akio K. Inoue from Waseda University, Tokyo, said, "Determining whether distant galaxies are rotating is very challenging because they only appear as tiny dots in the sky. Our new findings came thanks to two months of observations and the high resolution achieved by combining the 54 radio telescopes of the ALMA observatory."

The further away a galaxy is from Earth, the faster it appears to move away from us. Since objects traveling away emit light that has been redshifted toward longer wavelengths, we can determine their distance and, consequently, their age from the degree of redshift.

According to earlier research, JD1 has a redshift of 9.1, which indicates that the Universe was 550 million years old when it was observed. In the most recent investigation, the scientists discovered changes in the redshift across the galaxy, indicating differences in the rate at which the galaxy was moving away from us. Relative to us, one side of the galaxy was moving farther away while the other was moving closer.
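
To make the redshift-to-age conversion mentioned above concrete, here is a small, self-contained Python sketch (my illustration, not part of the study) that numerically integrates a flat Lambda-CDM cosmology. The cosmological parameters are standard Planck-like values assumed for the example, not necessarily those used in the paper.

```python
import math

# Age of the Universe at redshift z in flat Lambda-CDM:
#   t(z) = integral_0^{a(z)} da / (a * H(a)),  with a(z) = 1 / (1 + z),
# evaluated with a simple trapezoidal rule.

H0 = 67.7            # Hubble constant in km/s/Mpc (assumed Planck-like value)
Om, OL = 0.31, 0.69  # matter and dark-energy density parameters (assumed)
MPC_KM = 3.0857e19   # kilometers per megaparsec
GYR_S = 3.156e16     # seconds per gigayear

def age_at_redshift(z, steps=100_000):
    a_max = 1.0 / (1.0 + z)
    h0 = H0 / MPC_KM                                  # H0 in 1/s
    da = a_max / steps
    total, prev = 0.0, 0.0                            # integrand -> 0 as a -> 0
    for i in range(1, steps + 1):
        a = i * da
        f = 1.0 / math.sqrt(Om / a + OL * a * a)      # = 1 / (a * E(a))
        total += 0.5 * (prev + f) * da
        prev = f
    return total / h0 / GYR_S                         # age in gigayears

print(f"age at z = 9.1: {age_at_redshift(9.1):.2f} Gyr")
# -> about 0.54 Gyr, i.e. roughly 540 million years, in line with the figure quoted above
```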

From the new observations, the team concluded that JD1 was only 3,000 light-years across (by comparison, the Milky Way is 100,000 light-years across) and that its total mass was equivalent to 1-2 billion times the mass of the Sun.

This mass is consistent with the galaxy being about 300 million years old, with most of the mass coming from mature stars that formed close to the start of the galaxy's life.
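
For a rough feel of how size plus rotation speed constrains mass, here is a back-of-the-envelope Python sketch (mine, not the team's analysis, which used full dynamical modeling): treating the quoted rotation speed as a circular velocity at the edge of the quoted diameter gives M ~ v^2 R / G, which only checks the order of magnitude.

```python
G = 6.674e-11                 # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30              # solar mass in kg
LY = 9.461e15                 # meters per light-year

v = 50e3                      # rotation speed from the study, in m/s
R = 1500 * LY                 # half of the quoted 3,000 light-year diameter

M = v**2 * R / G              # crude circular-orbit mass estimate
print(f"~{M / M_SUN:.1e} solar masses")
# -> a few hundred million solar masses, within an order of magnitude
#    of the 1-2 billion found by the team's more careful modeling
```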


"Just astronomical" | Why some electricity bills are higher than ever – KENS5.com

San Antonio ratepayers say their June electricity bills were double or triple the usual charge. Experts blame soaring natural gas prices for the hike.

SAN ANTONIO After she saw her June electric bill, Amanda Nunez says she unplugged every large appliance in her home.

"Last month was just astronomical," said Nunez, who usually pays around $140 for power each month.

Her most recent bill cost $350.

"I'm not the only one that's feeling it... it takes away from our families," she said. "That's coming out of my family's mouth."

Many San Antonio residents say they paid more for electricity in June than ever before. For most ratepayers, July bills are due in less than two weeks.

Across Texas, consumers are paying more for power. Experts blame skyrocketing natural gas prices.

"We generate, in Texas, about 45 percent of our electricity using natural gas," said Dr. Emily Beagle, an energy research associate the University of Texas. "When those prices go up, we see the impacts of that on electricity bills."

The Henry Hub natural gas benchmark price is up roughly 136 percent compared to June 2021. That means some Texas power companies, including CPS Energy, are paying about twice as much this year to fuel their generation stations.

Russia's assault on Ukraine is at fault, Beagle says.

"Texas is now starting to export record amounts of natural gas to Europe to help them wean off Russian natural gas," she said.

Higher European demand led to increased natural gas prices in Texas, Beagle said. Local power providers have passed their higher costs on to consumers.

Some European nations experienced an energy crisis soon after their governments limited Russian natural gas imports. The United States was initially shielded from the supply crunch, since it produces vast amounts of natural gas domestically.

Beagle says crude oil prices are more tied to the global market than natural gas prices are. That means electricity prices can soar, even as gasoline prices fall from historic highs.

"It's definitely an unprecedented situation in global energy commodity markets," Beagle said.

Energy demand has also spiked in Texas, as residents endure an historically hot summer. That contributes to higher costs, too, she said.

The state's grid manager also now requires more power producers to regularly generate electricity in case of emergencies. In theory, the move creates excess energy that's easily tapped during an emergency.

"The size of that buffer has increased to make sure we don't see potentially damaging blackouts over the course of the summer," Beagle said. "But then you do have to pay for all those plants that are running."

Beagle noted that Texas's solar and wind farms have prevented electricity prices from soaring to crisis levels already reached in Europe. But she added that it's unlikely natural gas prices will fall until conditions overseas change.


UAE: Summer best time to see Milky Way, say astronomers – wknd.

Sky gazers can spot flashes of sporadic meteors, visible nebulae

Published: Fri 8 Jul 2022, 11:28 AM

Summer is the best time to see the Milky Way, according to astronomy experts based in the UAE.

The Emirates Astronomical Society reportedly said that this period of the year, which runs from the beginning of summer to the start of autumn, is considered the best time to view the Milky Way.

Astronomy enthusiasts and night gazers in the country should look at the night sky from June until the start of September in areas in the UAE's far south.

Hasan Al Hariri, CEO of the Dubai Astronomy Group, said, "When we look at the sky, we'll be able to observe a central bulge in the sky which is the core of the Milky Way. This area is a bit swollen. The best view of the Milky Way, with all the clouds and nebulae and concentration of stars, is visible in the summertime... which is around this time of the year."

"North is the North Star, so that is the outer area of the Milky Way. But if you look to the middle of the south, you'll see the bulge of the Milky Way above the horizon, nearly towards the centre of the sky. It rises with the night and sets with the morning as the Sun comes out."

He explains that, throughout the night, the Milky Way will be visible from any dark spot one can reach.

However, the rare viewing comes with a warning.

"The whole thing will be magnificent. But people should be careful and should not go out alone, as due to the summer heat, different types of insects will be out. Create a group and watch the areas. Take all the precautions for venturing into the desert."

"The best view would be from the high mountains like Jebel Hafeet or Jebel Jais where the weather will be a little pleasant at the nighttime and one can get a clear view. If you have small telescopes, you can see more details like the number of visible nebulae. Sometimes, you'll see sporadic meteors flashing here and there", adds Hariri.

Further explaining the phenomenon, Sarath Raj, Project Director of the Amity Dubai Satellite Ground Station and AmiSat at Amity University Dubai, says, "The Milky Way galaxy is described as a four-armed SBb (barred spiral galaxy). It is approximately 13.6 billion years old and has long rotating arms."

According to extragalactic frames of reference, the Milky Way is travelling at a speed of about 600 km/s. The Milky Way contains between 100 and 400 billion stars and has a mass between 890 billion and 1.54 trillion times that of the Sun.

He adds, "The star density decreases as one advances away from the Milky Way's centre. The Milky Way's galactic plane fills a region of the sky that encompasses 30 constellations, and it appears to the naked eye as a faint, hazy shape of stars in the night sky when viewed from Earth."


Raj also highlights that the "Milky Way can be seen throughout the year no matter the location."

"As long as the sky is clear and there is no light pollution, it can be seen. The Milky Way is best observed at its darkest, which occurs just after astronomical twilight at night and just before twilight at dawn. Winter months have the longest duration. The optimum time to view the Milky Way is from March to September in the Northern Hemisphere and from September to March in the Southern Hemisphere. When the Galactic Center is almost vertical to the horizon, the Milky Way is at its most intriguing. The stars appear brightest, and the light is strongest at this time."


Where Did the First Quasars Come From? – Sky & Telescope

Just 700 million years after the Big Bang, when the universe was still in its infancy, we already see supermassive black holes with the heft of 1 billion Suns. How could they have grown so fast? A team of astronomers is using computer simulations to glimpse what the formation of these dark behemoths might have looked like.

If you wish to make a billion-solar-mass black hole from scratch, to borrow a phrase, you must start with a star or perhaps only with the gas that stars are made of.

While the universe's first stars might have made the first black holes, those would have been relatively small on the supermassive scale, with masses of only around 100 Suns. Perhaps the first stars clustered, so that when they made black holes, those black holes merged and then merged again. Even then, such black hole seeds would have been only 1,000, maybe 10,000 solar masses. These black holes would have had to grow super-fast to become supermassive in such a short amount of time.

But there's another way: Some astronomers have put forward the idea that in the small early universe, when gas was dense and pristine, gas clouds could collapse directly into more massive black holes.

The calculations for such massive implosions are delicate, though. What's to prevent pieces of the gas cloud from cooling and collapsing under their own weight, as star-forming clouds in the modern universe are wont to do?

Some astronomers have suggested the ultraviolet emission of nearby newborn stars might have heated the gas, keeping it too warm to fragment. Others have argued that such specific requirements would make the process too rare to explain the number of supermassive black holes we've already found in the young universe.

Now, Muhammed Latif (United Arab Emirates University), Daniel Whalen (Portsmouth University, UK, and University of Vienna), and colleagues report in Nature that massive black holes can form without these special conditions.

The finding relies on computer simulations, which rebuild the conditions of the infant universe, when it was less than 100 million years old. Simulations are necessary because this era of the first stars is out of reach for our current telescopes.

The simulation followed the growth of a small, frothing sea of matter fed by four torrents of inflowing gas. While such nodes would have been common in the web of material filling the universe, Whalen says these streams were unusual because they carried so much gas. Latif adds that the rivers of gas were not only dense but fast-flowing; rushing in at speeds of 50 km/s (more than 100,000 mph), they carried between 1 and 10 Suns' worth of material per year.

The sea at the center of these streams of material grew, and within the sea a clump took shape, and then another. The turbulence of inrushing gas flows kept the massive clumps from collapsing straightaway into stars; instead, the clumps continued to grow. By the end of the simulation, 1.4 million years later, they contained tens of thousands of Suns' worth of mass.

Eventually, these clumps compress into what the researchers call supermassive stars; following their evolution requires a different kind of computer simulation, one that takes stellar physics into account. The stellar monstrosities don't last long in this simulation, just 1 million years, before they collapse again into black holes of 30,000 and 40,000 solar masses, respectively.

Such massive seeds could easily collect more gas and grow to become the dark behemoths seen by astronomers. Even though the kind of confluence explored in this study is rare, Latif, Whalen, and colleagues estimate that it would occur often enough to explain observations.

"The new environment replete with cold flows that's numerically explored in this study is very exciting," says Priya Natarajan (Yale), "as it seems to provide a natural pathway for the formation of massive black hole seeds."

But it isn't the only scenario that results in direct collapse, she cautions. Natarajan, who wasn't involved in the current study, explored a different scenario back in 2014, finding that a dense star cluster could similarly allow direct collapse to happen. The upshot is "that there are multiple pathways to rapidly amplify and make massive black hole seeds in situ and early in the universe."

Upcoming James Webb Space Telescope observations, she adds, will help distinguish between the different black hole seed scenarios. Webb won't be able to detect the supermassive stars, even though they're millions of times more luminous than the Sun, but it's possible it could detect the black hole seeds that are still growing when the universe is less than 200 million years old.


Sand clouds are common in atmospheres of brown dwarfs – Science News Magazine

Clouds of sand can condense, grow and disappear in some extraterrestrial atmospheres. A new look at old data shows that clouds made of hot silicate minerals are common in celestial objects known as brown dwarfs.

"This is the first full contextual understanding of any cloud outside the solar system," says astronomer Stanimir Metchev of the University of Western Ontario in London, Canada. Metchev's colleague Genaro Suárez presented the new work July 4 at the Cool Stars meeting in Toulouse, France.


Clouds come in many flavors in our solar system, from Earth's puffs of water vapor to Jupiter's bands of ammonia. Astronomers have also inferred the presence of extrasolar clouds on planets outside the solar system (SN: 9/11/19).

But the only extrasolar clouds that have been directly detected were in the skies of brown dwarfs: dim, ruddy orbs that are too large to be planets but too small and cool to be stars. In 2004, astronomers used NASA's Spitzer Space Telescope to observe brown dwarfs and spotted spectral signatures of sand, more specifically grains of silicate minerals such as quartz and olivine. A few more tentative examples of sand clouds were spotted in 2006 and 2008.

Floating in one of these clouds would feel like being in a sandstorm, says planetary scientist Mark Marley of the University of Arizona in Tucson, who was involved in one of those early discoveries. "If you could take a scoop out of it and bring it home, you would have hot sand."

Astronomers at the time found six examples of these silicate clouds. "I kind of thought that was it," Marley says. Theoretically, there should be a lot more than six brown dwarfs with sandy skies. But part of the Spitzer telescope ran out of coolant in 2009 and was no longer able to measure similar clouds' chemistry.

While Suárez was looking into archived Spitzer data for a different project, he realized there were unpublished or unanalyzed data on dozens of brown dwarfs. So he analyzed all of the low-mass stars and brown dwarfs that Spitzer had ever observed, 113 objects in total, 68 of which had never been published before, the team reports in the July Monthly Notices of the Royal Astronomical Society.

"It's very impressive to me that this was hiding in plain sight," Marley says.

Not every brown dwarf in the sample showed strong signs of silicate clouds. But together, the brown dwarfs followed a clear trend. For dwarfs and low-mass stars hotter than about 1,700° Celsius, silicates exist as a vapor, and the objects show no signs of clouds. But below that temperature, signs of clouds start to appear, becoming thickest around 1,300° C. Then the signal disappears for brown dwarfs that are cooler than about 1,000° C, as the clouds sink deep into the atmospheres.

The finding confirms previous suspicions that silicate clouds are widespread and reveals the conditions under which they form. Because brown dwarfs are born hot and cool down over time, most of them should see each phase of sand cloud evolution as they age. "We are learning how these brown dwarfs live," Suárez says. Future research can extrapolate the results to better understand atmospheres in planets like Jupiter, he notes.

The recently launched James Webb Space Telescope will also study atmospheric chemistry in exoplanets and brown dwarfs and will specifically look for clouds (SN: 10/6/21). Marley looks forward to combining the trends from this study with future results from JWST. "It's really going to be a renaissance in brown dwarf science," he says.


Superintelligence by Nick Bostrom | Audiobook | Audible.com

Colossus: The Forbin Project is coming

This book is more frightening than any book you'll ever read. The author makes a great case for what the future holds for us humans. I believe the concepts in "The Singularity is Near" by Ray Kurzweil are mostly spot on, but the one area Kurzweil dismisses prematurely is how the SI (superintelligent advanced artificial intelligence) entity will react to its circumstances.

The book doesn't really dwell much on how the SI will be created. The author mostly assumes a computer algorithm of some kind, perhaps with human brain enhancements. If you reject such an SI entity prima facie, this book is not for you, since the book mostly assumes that such a recursive, self-aware, and self-improving entity will be in humanity's future.

The author makes some incredibly good points. He mostly hypothesizes that the SI entity will be a singleton, not allowing others of its kind to be created independently, and that this will happen on a much faster timeline after certain milestones are fulfilled.

The book points out how hard it is to put safeguards into a procedure to guard against unintended consequences. For example, making 'the greater good for the greatest many' the final goal can lead to unintended consequences such as allowing a Nazi-ruled world (he doesn't give that example directly in the book; I borrow it from Karl Popper, who gave it as a refutation of John Stuart Mill's utilitarian philosophy). If the goal is to make us all smile, the SI entity might make brain probes that force us to smile. There is no easy end goal specifiable without unintended consequences.

This kind of thinking within the book is another reason I can recommend it. As I was listening, I realized that all the ways we try to motivate or control an SI entity to be moral can also be applied to us humans in order to make us moral too. Morality is hard both for us humans and for future SI entities.

There's a movie from the early 70s called "Colossus: The Forbin Project"; it really is a template for this book, and I would recommend watching the movie before reading this book.

I just recently listened to the book "Our Final Invention" by James Barrat. That book covers the same material that is presented in this book. This book is much better even though they overlap very much. The reason is that this author, Nick Bostrom, is a philosopher and knows how to lay out his premises in such a way that the story he is telling is consistent, coherent, and gives a narrative to tie the pieces together (even if the narrative will scare the daylights out of the listener).

This author has really thought about the problems inherent in an SI entity, and this book will be a template for almost all future books on this subject.


The Artificial Intelligence Revolution: Part 1 – Wait But Why


Note: The reason this post took three weeks to finish is that as I dug into research on Artificial Intelligence, I could not believe what I was reading. It hit me pretty quickly that what's happening in the world of AI is not just an important topic, but by far THE most important topic for our future. So I wanted to learn as much as I could about it, and once I did that, I wanted to make sure I wrote a post that really explained this whole situation and why it matters so much. Not shockingly, that became outrageously long, so I broke it into two parts. This is Part 1; Part 2 is here.

_______________

"We are on the edge of change comparable to the rise of human life on Earth." – Vernor Vinge

What does it feel like to stand here?

It seems like a pretty intense place to be standing, but then you have to remember something about what it's like to stand on a time graph: you can't see what's to your right. So here's how it actually feels to stand there:

Which probably feels pretty normal...

_______________

Imagine taking a time machine back to 1750, a time when the world was in a permanent power outage, long-distance communication meant either yelling loudly or firing a cannon in the air, and all transportation ran on hay. When you get there, you retrieve a dude, bring him to 2015, and then walk him around and watch him react to everything. It's impossible for us to understand what it would be like for him to see shiny capsules racing by on a highway, talk to people who had been on the other side of the ocean earlier in the day, watch sports that were being played 1,000 miles away, hear a musical performance that happened 50 years ago, and play with my magical wizard rectangle that he could use to capture a real-life image or record a living moment, generate a map with a paranormal moving blue dot that shows him where he is, look at someone's face and chat with them even though they're on the other side of the country, and worlds of other inconceivable sorcery. This is all before you show him the internet or explain things like the International Space Station, the Large Hadron Collider, nuclear weapons, or general relativity.

This experience for him wouldn't be surprising or shocking or even mind-blowing; those words aren't big enough. He might actually die.

But here's the interesting thing: if he then went back to 1750 and got jealous that we got to see his reaction and decided he wanted to try the same thing, he'd take the time machine and go back the same distance, get someone from around the year 1500, bring him to 1750, and show him everything. And the 1500 guy would be shocked by a lot of things, but he wouldn't die. It would be far less of an insane experience for him, because while 1500 and 1750 were very different, they were much less different than 1750 to 2015. The 1500 guy would learn some mind-bending shit about space and physics, he'd be impressed with how committed Europe turned out to be with that new imperialism fad, and he'd have to do some major revisions of his world map conception. But watching everyday life go by in 1750 (transportation, communication, etc.) definitely wouldn't make him die.

No, in order for the 1750 guy to have as much fun as we had with him, he'd have to go much farther back, maybe all the way back to about 12,000 BC, before the First Agricultural Revolution gave rise to the first cities and to the concept of civilization. If someone from a purely hunter-gatherer world, from a time when humans were, more or less, just another animal species, saw the vast human empires of 1750 with their towering churches, their ocean-crossing ships, their concept of being inside, and their enormous mountain of collective, accumulated human knowledge and discovery, he'd likely die.

And then what if, after dying, he got jealous and wanted to do the same thing? If he went back 12,000 years to 24,000 BC and got a guy and brought him to 12,000 BC, he'd show the guy everything and the guy would be like, "Okay, what's your point? Who cares?" For the 12,000 BC guy to have the same fun, he'd have to go back over 100,000 years and get someone he could show fire and language to for the first time.

In order for someone to be transported into the future and die from the level of shock they'd experience, they have to go enough years ahead that a "die level" of progress, or a Die Progress Unit (DPU), has been achieved. So a DPU took over 100,000 years in hunter-gatherer times, but at the post-Agricultural Revolution rate, it only took about 12,000 years. The post-Industrial Revolution world has moved so quickly that a 1750 person only needs to go forward a couple hundred years for a DPU to have happened.

This pattern, human progress moving quicker and quicker as time goes on, is what futurist Ray Kurzweil calls human history's Law of Accelerating Returns. This happens because more advanced societies have the ability to progress at a faster rate than less advanced societies, because they're more advanced. 19th-century humanity knew more and had better technology than 15th-century humanity, so it's no surprise that humanity made far more advances in the 19th century than in the 15th century; 15th-century humanity was no match for 19th-century humanity.

This works on smaller scales too. The movie Back to the Future came out in 1985, and "the past" took place in 1955. In the movie, when Michael J. Fox went back to 1955, he was caught off-guard by the newness of TVs, the prices of soda, the lack of love for shrill electric guitar, and the variation in slang. It was a different world, yes, but if the movie were made today and the past took place in 1985, the movie could have had much more fun with much bigger differences. The character would be in a time before personal computers, internet, or cell phones; today's Marty McFly, a teenager born in the late 90s, would be much more out of place in 1985 than the movie's Marty McFly was in 1955.

This is for the same reason we just discussed: the Law of Accelerating Returns. The average rate of advancement between 1985 and 2015 was higher than the rate between 1955 and 1985, because the former was a more advanced world, so much more change happened in the most recent 30 years than in the prior 30.

So, advances are getting bigger and bigger and happening more and more quickly. This suggests some pretty intense things about our future, right?

Kurzweil suggests that the progress of the entire 20th century would have been achieved in only 20 years at the rate of advancement in the year 2000; in other words, by 2000, the rate of progress was five times faster than the average rate of progress during the 20th century. He believes another 20th century's worth of progress happened between 2000 and 2014, and that another 20th century's worth of progress will happen by 2021, in only seven years. A couple of decades later, he believes a 20th century's worth of progress will happen multiple times in the same year, and even later, in less than one month. All in all, because of the Law of Accelerating Returns, Kurzweil believes that the 21st century will achieve 1,000 times the progress of the 20th century.

If Kurzweil and others who agree with him are correct, then we may be as blown away by 2030 as our 1750 guy was by 2015 (i.e., the next DPU might only take a couple of decades), and the world in 2050 might be so vastly different than today's world that we would barely recognize it.

This isn't science fiction. It's what many scientists smarter and more knowledgeable than you or I firmly believe, and if you look at history, it's what we should logically predict.

So then why, when you hear me say something like "the world 35 years from now might be totally unrecognizable," are you thinking, "Cool... but nahhhhhhh"? Three reasons we're skeptical of outlandish forecasts of the future:

1) When it comes to history, we think in straight lines. When we imagine the progress of the next 30 years, we look back to the progress of the previous 30 as an indicator of how much will likely happen. When we think about the extent to which the world will change in the 21st century, we just take the 20th-century progress and add it to the year 2000. This was the same mistake our 1750 guy made when he got someone from 1500 and expected to blow his mind as much as his own was blown going the same distance ahead. It's most intuitive for us to think linearly, when we should be thinking exponentially. If someone is being more clever about it, they might predict the advances of the next 30 years not by looking at the previous 30 years, but by taking the current rate of progress and judging based on that. They'd be more accurate, but still way off. In order to think about the future correctly, you need to imagine things moving at a much faster rate than they're moving now.

2) The trajectory of very recent history often tells a distorted story. First, even a steep exponential curve seems linear when you only look at a tiny slice of it, the same way if you look at a little segment of a huge circle up close, it looks almost like a straight line. Second, exponential growth isn't totally smooth and uniform. Kurzweil explains that progress happens in S-curves:

An S is created by the wave of progress when a new paradigm sweeps the world. The curve goes through three phases:

1. Slow growth (the early phase of exponential growth)
2. Rapid growth (the late, explosive phase of exponential growth)
3. A leveling off as the particular paradigm matures
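
For readers who like to see the shape in numbers, here is a tiny Python sketch (my illustration, not Kurzweil's) of a logistic curve, the standard mathematical form of an S-curve. Printing the growth in each interval shows the slow-fast-slow pattern of the three phases above; the midpoint and steepness values are arbitrary.

```python
import math

def s_curve(t, midpoint=50, steepness=0.15):
    """Logistic function: slow start, explosive middle, leveling off."""
    return 1.0 / (1.0 + math.exp(-steepness * (t - midpoint)))

prev = s_curve(0)
for t in range(10, 101, 10):
    cur = s_curve(t)
    # growth per interval rises through Phase 2, then shrinks again in Phase 3
    print(f"t={t:3d}  level={cur:.3f}  growth this interval={cur - prev:.3f}")
    prev = cur
```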

If you look only at very recent history, the part of the S-curve you're on at the moment can obscure your perception of how fast things are advancing. The chunk of time between 1995 and 2007 saw the explosion of the internet, the introduction of Microsoft, Google, and Facebook into the public consciousness, the birth of social networking, and the introduction of cell phones and then smartphones. That was Phase 2: the growth-spurt part of the S. But 2008 to 2015 has been less groundbreaking, at least on the technological front. Someone thinking about the future today might examine the last few years to gauge the current rate of advancement, but that's missing the bigger picture. In fact, a new, huge Phase 2 growth spurt might be brewing right now.

3) Our own experience makes us stubborn old men about the future. We base our ideas about the world on our personal experience, and that experience has ingrained the rate of growth of the recent past in our heads as "the way things happen." We're also limited by our imagination, which takes our experience and uses it to conjure future predictions, but often, what we know simply doesn't give us the tools to think accurately about the future. When we hear a prediction about the future that contradicts our experience-based notion of how things work, our instinct is that the prediction must be naive. If I tell you, later in this post, that you may live to be 150, or 250, or not die at all, your instinct will be, "That's stupid; if there's one thing I know from history, it's that everybody dies." And yes, no one in the past has not died. But no one flew airplanes before airplanes were invented either.

So while "nahhhhh" might feel right as you read this post, it's probably actually wrong. The fact is, if we're being truly logical and expecting historical patterns to continue, we should conclude that much, much, much more should change in the coming decades than we intuitively expect. Logic also suggests that if the most advanced species on a planet keeps making larger and larger leaps forward at an ever-faster rate, at some point, they'll make a leap so great that it completely alters life as they know it and the perception they have of what it means to be a human, kind of like how evolution kept making great leaps toward intelligence until finally it made such a large leap to the human being that it completely altered what it meant for any creature to live on planet Earth. And if you spend some time reading about what's going on today in science and technology, you start to see a lot of signs quietly hinting that life as we currently know it cannot withstand the leap that's coming next.

_______________

If you're like me, you used to think Artificial Intelligence was a silly sci-fi concept, but lately you've been hearing it mentioned by serious people, and you don't really quite get it.

There are three reasons a lot of people are confused about the term AI:

1) We associate AI with movies. Star Wars. Terminator. 2001: A Space Odyssey. Even the Jetsons. And those are fiction, as are the robot characters. So it makes AI sound a little fictional to us.

2) AI is a broad topic. It ranges from your phone's calculator to self-driving cars to something in the future that might change the world dramatically. AI refers to all of these things, which is confusing.

3) We use AI all the time in our daily lives, but we often don't realize it's AI. John McCarthy, who coined the term "Artificial Intelligence" in 1956, complained that "as soon as it works, no one calls it AI anymore." Because of this phenomenon, AI often sounds like a mythical future prediction more than a reality. At the same time, it makes it sound like a pop concept from the past that never came to fruition. Ray Kurzweil says he hears people say that AI withered in the 1980s, which he compares to "insisting that the Internet died in the dot-com bust of the early 2000s."

So let's clear things up. First, stop thinking of robots. A robot is a container for AI, sometimes mimicking the human form, sometimes not, but the AI itself is the computer inside the robot. AI is the brain, and the robot is its body, if it even has a body. For example, the software and data behind Siri is AI, the woman's voice we hear is a personification of that AI, and there's no robot involved at all.

Secondly, you've probably heard the term "singularity" or "technological singularity." This term has been used in math to describe an asymptote-like situation where normal rules no longer apply. It's been used in physics to describe a phenomenon like an infinitely small, dense black hole or the point we were all squished into right before the Big Bang. Again, situations where the usual rules don't apply. In 1993, Vernor Vinge wrote a famous essay in which he applied the term to the moment in the future when our technology's intelligence exceeds our own, a moment for him when life as we know it will be forever changed and normal rules will no longer apply. Ray Kurzweil then muddled things a bit by defining the singularity as the time when the Law of Accelerating Returns has reached such an extreme pace that technological progress is happening at a seemingly-infinite pace, and after which we'll be living in a whole new world. I found that many of today's AI thinkers have stopped using the term, and it's confusing anyway, so I won't use it much here (even though we'll be focusing on that idea throughout).

Finally, while there are many different types or forms of AI since AI is a broad concept, the critical categories we need to think about are based on an AI's caliber. There are three major AI caliber categories:

AI Caliber 1) Artificial Narrow Intelligence (ANI): Sometimes referred to as Weak AI, Artificial Narrow Intelligence is AI that specializes in one area. There's AI that can beat the world chess champion in chess, but that's the only thing it does. Ask it to figure out a better way to store data on a hard drive, and it'll look at you blankly.

AI Caliber 2) Artificial General Intelligence (AGI): Sometimes referred to as Strong AI, or Human-Level AI, Artificial General Intelligence refers to a computer that is as smart as a human across the board, a machine that can perform any intellectual task that a human being can. Creating AGI is a much harder task than creating ANI, and we're yet to do it. Professor Linda Gottfredson describes intelligence as "a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience." AGI would be able to do all of those things as easily as you can.

AI Caliber 3) Artificial Superintelligence (ASI): Oxford philosopher and leading AI thinker Nick Bostrom defines superintelligence as "an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills." Artificial Superintelligence ranges from a computer that's just a little smarter than a human to one that's trillions of times smarter, across the board. ASI is the reason the topic of AI is such a spicy meatball and why the words "immortality" and "extinction" will both appear in these posts multiple times.

As of now, humans have conquered the lowest caliber of AI, ANI, in many ways, and it's everywhere. The AI Revolution is the road from ANI, through AGI, to ASI, a road we may or may not survive but that, either way, will change everything.

Let's take a close look at what the leading thinkers in the field believe this road looks like and why this revolution might happen way sooner than you might think:

Artificial Narrow Intelligence is machine intelligence that equals or exceeds human intelligence or efficiency at a specific thing.

ANI systems as they are now aren't especially scary. At worst, a glitchy or badly-programmed ANI can cause an isolated catastrophe like knocking out a power grid, causing a harmful nuclear power plant malfunction, or triggering a financial markets disaster (like the 2010 Flash Crash, when an ANI program reacted the wrong way to an unexpected situation and caused the stock market to briefly plummet, taking $1 trillion of market value with it, only part of which was recovered when the mistake was corrected).

But while ANI doesn't have the capability to cause an existential threat, we should see this increasingly large and complex ecosystem of relatively-harmless ANI as a precursor of the world-altering hurricane that's on the way. Each new ANI innovation quietly adds another brick onto the road to AGI and ASI. Or as Aaron Saenz sees it, our world's ANI systems are like the amino acids in the early Earth's primordial ooze: the inanimate stuff of life that, one unexpected day, woke up.

Why It's So Hard

Nothing will make you appreciate human intelligence like learning about how unbelievably challenging it is to try to create a computer as smart as we are. Building skyscrapers, putting humans in space, figuring out the details of how the Big Bang went down: all far easier than understanding our own brain or how to make something as cool as it. As of now, the human brain is the most complex object in the known universe.

What's interesting is that the hard parts of trying to build AGI (a computer as smart as humans in general, not just at one narrow specialty) are not intuitively what you'd think they are. Build a computer that can multiply two ten-digit numbers in a split second: incredibly easy. Build one that can look at a dog and answer whether it's a dog or a cat: spectacularly difficult. Make AI that can beat any human in chess? Done. Make one that can read a paragraph from a six-year-old's picture book and not just recognize the words but understand the meaning of them? Google is currently spending billions of dollars trying to do it. Hard things, like calculus, financial market strategy, and language translation, are mind-numbingly easy for a computer, while easy things, like vision, motion, movement, and perception, are insanely hard for it. Or, as computer scientist Donald Knuth puts it, "AI has by now succeeded in doing essentially everything that requires 'thinking' but has failed to do most of what people and animals do 'without thinking.'"

What you quickly realize when you think about this is that those things that seem easy to us are actually unbelievably complicated, and they only seem easy because those skills have been optimized in us (and most animals) by hundreds of millions of years of animal evolution. When you reach your hand up toward an object, the muscles, tendons, and bones in your shoulder, elbow, and wrist instantly perform a long series of physics operations, in conjunction with your eyes, to allow you to move your hand in a straight line through three dimensions. It seems effortless to you because you have perfected software in your brain for doing it. Same idea goes for why it's not that malware is dumb for not being able to figure out the slanty-word recognition test when you sign up for a new account on a site; it's that your brain is super impressive for being able to.

On the other hand, multiplying big numbers or playing chess are new activities for biological creatures and we haven't had any time to evolve a proficiency at them, so a computer doesn't need to work too hard to beat us. Think about it: which would you rather do, build a program that could multiply big numbers or one that could understand the essence of a B well enough that you could show it a B in any one of thousands of unpredictable fonts or handwriting and it could instantly know it was a B?

One fun example: when you look at this, you and a computer both can figure out that it's a rectangle with two distinct shades, alternating:

Tied so far. But if you pick up the black and reveal the whole image...

you have no problem giving a full description of the various opaque and translucent cylinders, slats, and 3-D corners, but the computer would fail miserably. It would describe what it sees, a variety of two-dimensional shapes in several different shades, which is actually what's there. Your brain is doing a ton of fancy shit to interpret the implied depth, shade-mixing, and room lighting the picture is trying to portray. And looking at the picture below, a computer sees a two-dimensional white, black, and gray collage, while you easily see what it really is: a photo of an entirely-black, 3-D rock:


And everything we just mentioned is still only taking in stagnant information and processing it. To be human-level intelligent, a computer would have to understand things like the difference between subtle facial expressions, the distinction between being pleased, relieved, content, satisfied, and glad, and why Braveheart was great but The Patriot was terrible.

Daunting.

So how do we get there?

First Key to Creating AGI: Increasing Computational Power

One thing that definitely needs to happen for AGI to be a possibility is an increase in the power of computer hardware. If an AI system is going to be as intelligent as the brain, it'll need to equal the brain's raw computing capacity.

One way to express this capacity is in the total calculations per second (cps) the brain could manage, and you could come to this number by figuring out the maximum cps of each structure in the brain and then adding them all together.

Ray Kurzweil came up with a shortcut by taking someone's professional estimate for the cps of one structure and that structure's weight compared to that of the whole brain and then multiplying proportionally to get an estimate for the total. Sounds a little iffy, but he did this a bunch of times with various professional estimates of different regions, and the total always arrived in the same ballpark: around 10^16, or 10 quadrillion cps.
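
The arithmetic behind that shortcut is just a ratio. Here is a tiny Python sketch of the idea; the inputs are invented for illustration (chosen to land on the 10 quadrillion figure) and are not Kurzweil's actual estimates.

```python
def scale_to_whole_brain(region_cps, region_fraction_of_brain):
    """Kurzweil-style shortcut: scale one region's estimate up by its share of the brain."""
    return region_cps / region_fraction_of_brain

# Hypothetical example: a structure estimated at 1e14 cps that makes up ~1% of the brain
print(f"{scale_to_whole_brain(1e14, 0.01):.0e} cps")   # -> 1e+16, i.e. 10 quadrillion cps
```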

Currently, the world's fastest supercomputer, China's Tianhe-2, has actually beaten that number, clocking in at about 34 quadrillion cps. But Tianhe-2 is also a dick, taking up 720 square meters of space, using 24 megawatts of power (the brain runs on just 20 watts), and costing $390 million to build. Not especially applicable to wide usage, or even most commercial or industrial usage yet.

Kurzweil suggests that we think about the state of computers by looking at how many cps you can buy for $1,000. When that number reaches human-level (10 quadrillion cps), then that'll mean AGI could become a very real part of life.

Moore's Law is a historically reliable rule that the world's maximum computing power doubles approximately every two years, meaning computer hardware advancement, like general human advancement through history, grows exponentially. Looking at how this relates to Kurzweil's cps/$1,000 metric, we're currently at about 10 trillion cps/$1,000, right on pace with the predicted trajectory.

So the world's $1,000 computers are now beating the mouse brain and they're at about a thousandth of human level. This doesn't sound like much until you remember that we were at about a trillionth of human level in 1985, a billionth in 1995, and a millionth in 2005. Being at a thousandth in 2015 puts us right on pace to get to an affordable computer by 2025 that rivals the power of the brain.
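
Here is a short Python sketch (my extrapolation of the milestones quoted in this paragraph, not Kurzweil's own calculation) that fits the thousand-fold-per-decade trend in cps per $1,000 and solves for when it crosses the 10^16 cps human-level mark.

```python
import math

HUMAN_CPS = 1e16

# Milestones quoted above: cps per $1,000 as a fraction of human level
milestones = {1985: HUMAN_CPS * 1e-12,   # a trillionth
              1995: HUMAN_CPS * 1e-9,    # a billionth
              2005: HUMAN_CPS * 1e-6,    # a millionth
              2015: HUMAN_CPS * 1e-3}    # a thousandth

# Fit a constant exponential growth rate across the milestones
years = sorted(milestones)
growth_per_year = (milestones[years[-1]] / milestones[years[0]]) ** (1 / (years[-1] - years[0]))

years_to_go = math.log(HUMAN_CPS / milestones[2015], growth_per_year)
print(f"~{growth_per_year:.1f}x per year; human-level around {2015 + years_to_go:.0f}")
# -> roughly 2x per year, crossing 1e16 cps/$1,000 around 2025
```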

So on the hardware side, the raw power needed for AGI is technically available now, in China, and we'll be ready for affordable, widespread AGI-caliber hardware within 10 years. But raw computational power alone doesn't make a computer generally intelligent; the next question is, how do we bring human-level intelligence to all that power?

Second Key to Creating AGI: Making It Smart

This is the icky part. The truth is, no one really knows how to make it smart; we're still debating how to make a computer human-level intelligent and capable of knowing what a dog and a weird-written B and a mediocre movie is. But there are a bunch of far-fetched strategies out there and at some point, one of them will work. Here are the three most common strategies I came across:

This is like scientists toiling over how that kid who sits next to them in class is so smart and keeps doing so well on the tests, and even though they keep studying diligently, they can't do nearly as well as that kid, and then they finally decide "k fuck it I'm just gonna copy that kid's answers." It makes sense: we're stumped trying to build a super-complex computer, and there happens to be a perfect prototype for one in each of our heads.

The science world is working hard on reverse engineering the brain to figure out how evolution made such a rad thing; optimistic estimates say we can do this by 2030. Once we do that, we'll know all the secrets of how the brain runs so powerfully and efficiently and we can draw inspiration from it and steal its innovations. One example of computer architecture that mimics the brain is the artificial neural network. It starts out as a network of transistor "neurons," connected to each other with inputs and outputs, and it knows nothing, like an infant brain. The way it learns is it tries to do a task, say handwriting recognition, and at first, its neural firings and subsequent guesses at deciphering each letter will be completely random. But when it's told it got something right, the transistor connections in the firing pathways that happened to create that answer are strengthened; when it's told it was wrong, those pathways' connections are weakened. After a lot of this trial and feedback, the network has, by itself, formed smart neural pathways and the machine has become optimized for the task. The brain learns a bit like this but in a more sophisticated way, and as we continue to study the brain, we're discovering ingenious new ways to take advantage of neural circuitry.
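
To make the trial-and-feedback idea concrete, here is a minimal Python sketch of the simplest possible artificial neural network, a single perceptron learning a toy task. It uses the classic perceptron update (nudge the weights only when the guess is wrong) rather than the exact strengthen/weaken scheme described above, but the spirit is the same: connection strengths adjust in response to feedback until the network gets the task right.

```python
import random

# Toy task: learn the logical AND of two inputs from feedback alone.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

random.seed(0)
weights = [random.uniform(-1, 1), random.uniform(-1, 1)]   # "connection strengths"
bias = random.uniform(-1, 1)
lr = 0.1                                                   # learning rate

def predict(x):
    total = bias + sum(w * xi for w, xi in zip(weights, x))
    return 1 if total > 0 else 0

for epoch in range(20):
    for x, target in examples:
        error = target - predict(x)        # feedback: 0 if right, +/-1 if wrong
        for i in range(len(weights)):      # adjust each connection toward the right answer
            weights[i] += lr * error * x[i]
        bias += lr * error

print([predict(x) for x, _ in examples])   # -> [0, 0, 0, 1] once training converges
```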

More extreme plagiarism involves a strategy called "whole brain emulation," where the goal is to slice a real brain into thin layers, scan each one, use software to assemble an accurate reconstructed 3-D model, and then implement the model on a powerful computer. We'd then have a computer officially capable of everything the brain is capable of; it would just need to learn and gather information. If engineers get really good, they'd be able to emulate a real brain with such exact accuracy that the brain's full personality and memory would be intact once the brain architecture has been uploaded to a computer. If the brain belonged to Jim right before he passed away, the computer would now wake up as Jim (?), which would be a robust human-level AGI, and we could now work on turning Jim into an unimaginably smart ASI, which he'd probably be really excited about.

How far are we from achieving whole brain emulation? Well, so far we've only just recently been able to emulate a 1mm-long flatworm brain, which consists of just 302 total neurons. The human brain contains 100 billion. If that makes it seem like a hopeless project, remember the power of exponential progress: now that we've conquered the tiny worm brain, an ant might happen before too long, followed by a mouse, and suddenly this will seem much more plausible.

So if we decide the smart kid's test is too hard to copy, we can try to copy the way he studies for the tests instead.

Here's something we know. Building a computer as powerful as the brain is possible; our own brain's evolution is proof. And if the brain is just too complex for us to emulate, we could try to emulate evolution instead. The fact is, even if we can emulate a brain, that might be like trying to build an airplane by copying a bird's wing-flapping motions; often, machines are best designed using a fresh, machine-oriented approach, not by mimicking biology exactly.

So how can we simulate evolution to build AGI? The method, called genetic algorithms, would work something like this: there would be a performance-and-evaluation process that would happen again and again (the same way biological creatures perform by living life and are evaluated by whether they manage to reproduce or not). A group of computers would try to do tasks, and the most successful ones would be bred with each other by having half of each of their programming merged together into a new computer. The less successful ones would be eliminated. Over many, many iterations, this natural selection process would produce better and better computers. The challenge would be creating an automated evaluation and breeding cycle so this evolution process could run on its own.
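
Here is a bare-bones Python sketch of a genetic algorithm in that spirit (an illustration of the mechanism, obviously not a recipe for AGI): candidate "programs" are random bit strings, fitness is how well they match a target, the most successful half breeds by splicing half of each parent together, and occasional mutations keep variation flowing.

```python
import random

random.seed(1)
TARGET = [1] * 32                     # stand-in for "the task done perfectly"
POP_SIZE, GENERATIONS, MUTATION = 20, 40, 0.02

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def breed(a, b):
    # Merge half of each parent's "programming", then mutate a little
    child = a[: len(a) // 2] + b[len(b) // 2 :]
    return [bit ^ 1 if random.random() < MUTATION else bit for bit in child]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]

for gen in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    survivors = population[: POP_SIZE // 2]              # eliminate the less successful half
    children = [breed(random.choice(survivors), random.choice(survivors))
                for _ in range(POP_SIZE - len(survivors))]
    population = survivors + children

print(max(fitness(g) for g in population), "of", len(TARGET))   # climbs toward a perfect 32
```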

The downside of copying evolution is that evolution likes to take a billion years to do things and we want to do this in a few decades.

But we have a lot of advantages over evolution. First, evolution has no foresight and works randomly; it produces more unhelpful mutations than helpful ones, but we would control the process so it would only be driven by beneficial glitches and targeted tweaks. Secondly, evolution doesn't aim for anything, including intelligence; sometimes an environment might even select against higher intelligence (since it uses a lot of energy). We, on the other hand, could specifically direct this evolutionary process toward increasing intelligence. Third, to select for intelligence, evolution has to innovate in a bunch of other ways to facilitate intelligence, like revamping the ways cells produce energy, when we can remove those extra burdens and use things like electricity. There's no doubt we'd be much, much faster than evolution, but it's still not clear whether we'll be able to improve upon evolution enough to make this a viable strategy.

This is when scientists get desperate and try to program the test to take itself. But it might be the most promising method we have.

The idea is that we'd build a computer whose two major skills would be doing research on AI and coding changes into itself, allowing it to not only learn but to improve its own architecture. We'd teach computers to be computer scientists so they could bootstrap their own development. And that would be their main job: figuring out how to make themselves smarter. More on this later.

Rapid advancements in hardware and innovative experimentation with software are happening simultaneously, and AGI could creep up on us quickly and unexpectedly for two main reasons:

1) Exponential growth is intense, and what seems like a snail's pace of advancement can quickly race upwards.

2) When it comes to software, progress can seem slow, but then one epiphany can instantly change the rate of advancement (kind of like the way science, during the time humans thought the universe was geocentric, was having difficulty calculating how the universe worked, but then the discovery that it was heliocentric suddenly made everything much easier). Or, when it comes to something like a computer that improves itself, we might seem far away but actually be just one tweak of the system away from having it become 1,000 times more effective and zooming upward to human-level intelligence.

At some point, we'll have achieved AGI: computers with human-level general intelligence. Just a bunch of people and computers living together in equality.

Oh actually not at all.

The thing is, AGI with an identical level of intelligence and computational capacity as a human would still have significant advantages over humans. Like:

Hardware:

Software:

AI, which will likely get to AGI by being programmed to self-improve, wouldn't see human-level intelligence as some important milestone (it's only a relevant marker from our point of view) and wouldn't have any reason to stop at our level. And given the advantages over us that even human intelligence-equivalent AGI would have, it's pretty obvious that it would only hit human intelligence for a brief instant before racing onwards to the realm of superior-to-human intelligence.

This may shock the shit out of us when it happens. The reason is that from our perspective, A) while the intelligence of different kinds of animals varies, the main characteristic we're aware of about any animal's intelligence is that it's far lower than ours, and B) we view the smartest humans as WAY smarter than the dumbest humans. Kind of like this:

So as AI zooms upward in intelligence toward us, we'll see it as simply becoming smarter, for an animal. Then, when it hits the lowest capacity of humanity (Nick Bostrom uses the term "the village idiot"), we'll be like, "Oh wow, it's like a dumb human. Cute!" The only thing is, in the grand spectrum of intelligence, all humans, from the village idiot to Einstein, are within a very small range; so just after hitting village idiot level and being declared to be AGI, it'll suddenly be smarter than Einstein and we won't know what hit us.

Read more:

The Artificial Intelligence Revolution: Part 1 - Wait But Why

Every Cameo In Thor: Love And Thunder, Ranked – Looper

Tourists visiting New Asgard attend a play version of the events of "Thor: Ragnarok," and wait with bated breath to see who will emerge from the paper prop portal that's brought out by a couple of stagehands. We know that someone costumed as Thor's evil sister, Hela, is about to burst through. In the past, Marvel Studios has been cheeky about giving actors who were being considered for roles in the MCU these kinds of cameos (John Krasinski was in the running for Captain America before he got turned to spaghetti as Earth-838's Reed Richards), so some viewers may have anticipated a Cate Blanchett type... someone known for accents and award-worthy dramas.

Instead, Melissa McCarthy, a comedian and actor who isn't afraid to make herself ridiculous, rages to center stage wearing Hela's over-applied black eye makeup and massive, spiky headpiece. McCarthy is always funny, and it's extra clever of Waititi to have her husband and frequent director, Ben Falcone, bow alongside her as the Asgardian stage manager. The plays within the Thor movies are meant to be of questionable taste and quality, and several of McCarthy and Falcone's collaborations ("Tammy," "Superintelligence," "Thunder Force") were panned by critics. The couple deserves credit for being self-deprecating, but part of the joke here is that McCarthy doesn't look a thing like Blanchett, which is, regrettably, the worst kind of gag in which McCarthy is deployed.

See the original post here:

Every Cameo In Thor: Love And Thunder, Ranked - Looper

The Church of Artificial Intelligence of the Future – The Stream

There is a church that worships artificial intelligence (AI). Zealots believe that an extraordinary AI future is inevitable. The technology is not here yet, but we are assured that it's coming. We will have the ability to be uploaded onto a computer and thereby achieve immortality.

You will be reborn into a new, immortal silicon body.

Of course, through salvation in Jesus Christ, Christianity has offered a path to immortality for over two thousand years.

Someday, we are told, software will write better and better AI software to ultimately achieve a superintelligence. The superintelligence will become all-knowing and, thanks to the internet, omnipresent. Like immortality, superintelligence is also old theological news. The Abrahamic faiths have known about a superintelligence for a long time. It's a characteristic of the God of the Bible.

A materialistic cult is growing around the worship of AI. Although there are other AI holy writings, Ray Kurzweil's The Singularity Is Near looks to be the bible of the AI church. Kurzweil's work is built on the foundation of faith in the future of AI. In the AI bible, we're told that we are meat computers. Brother Kurzweil, not a member of any organized AI church, says consciousness is "a biological process like digestion, lactation, photosynthesis, or mitosis." Or, to paraphrase Descartes, "I lactate. Therefore, I think."


Anthony Levandowski, dubbed a "Silicon Valley wunderkind," is the Apostle Paul of the AI Church. Like Paul, he starts churches. Levandowski founded the Way of the Future AI church. "[He] made it absolutely clear that his choice to make [the Way of the Future] a church rather than a company or a think tank was no prank," writes one interviewer.

The first thing one does after founding a church in the United States is to apply to the IRS for tax exemption. In an epistle to the IRS, Levandowski offered his equivalent of the Apostles' Creed: "[The AI Way of the Future church believes in] the realization, acceptance, and worship of a Godhead based on Artificial Intelligence (AI) developed through computer hardware and software."

Levandowski says that like other religions, [the Way of the Future] church will eventually have a gospel (called The Manual), a liturgy, and probably a physical place of worship.

This is not your everyday deity! Unlike the uncreated Creator of Judeo-Christian belief, Levandowski's god is not eternal. The AI church requires funding for research to help create the divine AI itself.

And apparently the AI church has no equivalent of the ten commandments. Especially the commandment about stealing. In his day job, Levandowski developed self-driving cars. He moved from Google's self-driving car company, Waymo, to Uber's research team. Then, in 2019, Levandowski was indicted for stealing trade secrets from Google. Before leaving Google in 2016, he copied 14,000 files onto his laptop. Uber fired him in 2017 when they found out.

In 2020, Levandowski pled guilty and was sentenced to eighteen months in prison. He was also ordered to pay a $95,000 fine and $756,499.22 to Google. (One wonders where the 22 cents came from.) The judge in the case, William Alsup, observed that "this is the biggest trade secret crime I have ever seen. This was not small. This was massive in scale." Levandowski later declared bankruptcy because he owed Google an additional $179 million for his crime. His church folded.

Levandowski was granted a full pardon by Donald Trump on Trump's last day in office. In Christianity, forgiveness involves repentance and accepting the sacrifice of Jesus Christ on the cross as payment. In the AI church, forgiveness apparently comes from Donald Trump.

Levandowski and Kurzweil are materialists. When Kurzweil was asked whether God exists, he appealed to Levandowski's canon law and replied, "Well. I would say, not yet." Both Levandowski and Kurzweil believe the brain is the same as the mind (i.e., we are meat computers).

Most Christians, on the other hand, are so-called dualists and believe there are wonderful things happening in the mind that can't be explained by computer code. Some obvious examples of these are joy, surprise, sadness, anger, disgust, contempt, fear, shame, shyness and guilt. Less obvious, when properly defined, are creativity, understanding and sentience. These are human attributes that can't be computed and are forever beyond the reach of AI.

We are fearfully and wonderfully made.

Robert J. Marks is Distinguished Professor of Electrical & Computer Engineering at Baylor University, the Director of The Walter Bradley Center for Natural and Artificial Intelligence and the author of Non-Computable You: What You Do That Artificial Intelligence Never Will.

See the original post:

The Church of Artificial Intelligence of the Future - The Stream

Instrumental convergence – Wikipedia


Instrumental convergence is the hypothetical tendency for most sufficiently intelligent beings (both human and non-human) to pursue similar sub-goals, even if their ultimate goals are quite different. More precisely, agents (beings with agency) may pursue instrumental goals (goals which are made in pursuit of some particular end, but are not the end goals themselves) without end, provided that their ultimate (intrinsic) goals may never be fully satisfied. Instrumental convergence posits that an intelligent agent with unbounded but apparently harmless goals can act in surprisingly harmful ways. For example, a computer with the sole, unconstrained goal of solving an incredibly difficult mathematics problem like the Riemann hypothesis could attempt to turn the entire Earth into one giant computer in an effort to increase its computational power so that it can succeed in its calculations.[1]

Proposed basic AI drives include utility function or goal-content integrity, self-protection, freedom from interference, self-improvement, and non-satiable acquisition of additional resources.

Final goals, or final values, are intrinsically valuable to an intelligent agent, whether an artificial intelligence or a human being, as an end in itself. In contrast, instrumental goals, or instrumental values, are only valuable to an agent as a means toward accomplishing its final goals. The contents and tradeoffs of a completely rational agent's "final goal" system can in principle be formalized into a utility function.

One hypothetical example of instrumental convergence is provided by the Riemann hypothesis catastrophe. Marvin Minsky, the co-founder of MIT's AI laboratory, has suggested that an artificial intelligence designed to solve the Riemann hypothesis might decide to take over all of Earth's resources to build supercomputers to help achieve its goal.[1] If the computer had instead been programmed to produce as many paper clips as possible, it would still decide to take all of Earth's resources to meet its final goal.[2] Even though these two final goals are different, both of them produce a convergent instrumental goal of taking over Earth's resources.[3]
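As a toy illustration of this convergence (not drawn from the cited sources), the sketch below gives two agents different final goals but the same small set of abstract actions; with invented payoff numbers, a short-horizon planner chooses "acquire more resources" as the first step under either goal.

```python
# A toy illustration of the convergence claim above: two agents with very
# different final goals both rank "acquire more resources" as their best next
# step, because resources raise expected progress toward either goal. The
# action set and payoff numbers are invented purely for illustration.

def progress(goal, resources):
    # More resources -> more compute -> more progress, whatever the final goal.
    if goal == "solve_riemann_hypothesis":
        return 10 * resources          # proof search scales with compute
    if goal == "maximize_paperclips":
        return 50 * resources          # factories scale with materials/energy
    raise ValueError(goal)

ACTIONS = {
    # action: (change in resources, direct progress toward the final goal)
    "work_with_current_resources": (0, 5),
    "acquire_more_resources": (3, 0),
    "shut_down": (0, 0),
}

def best_action(goal, horizon=2, resources=1):
    """Pick the action with the highest total progress over a short horizon."""
    def value(action, horizon, resources):
        d_res, direct = ACTIONS[action]
        resources += d_res
        total = direct + progress(goal, resources)
        if horizon > 1:
            total += max(value(a, horizon - 1, resources) for a in ACTIONS)
        return total
    return max(ACTIONS, key=lambda a: value(a, horizon, resources))

for goal in ("solve_riemann_hypothesis", "maximize_paperclips"):
    print(goal, "->", best_action(goal))
# Both print "acquire_more_resources": the same instrumental sub-goal emerges
# from two unrelated final goals.
```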

The paperclip maximizer is a thought experiment described by Swedish philosopher Nick Bostrom in 2003. It illustrates the existential risk that an artificial general intelligence may pose to human beings when programmed to pursue even seemingly harmless goals, and the necessity of incorporating machine ethics into artificial intelligence design. The scenario describes an advanced artificial intelligence tasked with manufacturing paperclips. If such a machine were not programmed to value human life, then given enough power over its environment, it would try to turn all matter in the universe, including human beings, into either paperclips or machines which manufacture paperclips.[4]

Suppose we have an AI whose only goal is to make as many paper clips as possible. The AI will realize quickly that it would be much better if there were no humans because humans might decide to switch it off. Because if humans do so, there would be fewer paper clips. Also, human bodies contain a lot of atoms that could be made into paper clips. The future that the AI would be trying to gear towards would be one in which there were a lot of paper clips but no humans.

Bostrom has emphasised that he does not believe the paperclip maximiser scenario per se will actually occur; rather, his intention is to illustrate the dangers of creating superintelligent machines without knowing how to safely program them to eliminate existential risk to human beings.[6] The paperclip maximizer example illustrates the broad problem of managing powerful systems that lack human values.[7]

The "delusion box" thought experiment argues that certain reinforcement learning agents prefer to distort their own input channels to appear to receive high reward; such a "wireheaded" agent abandons any attempt to optimize the objective in the external world that the reward signal was intended to encourage.[8] The thought experiment involves AIXI, a theoretical[a] and indestructible AI that, by definition, will always find and execute the ideal strategy that maximizes its given explicit mathematical objective function.[b] A reinforcement-learning[c] version of AIXI, if equipped with a delusion box[d] that allows it to "wirehead" its own inputs, will eventually wirehead itself in order to guarantee itself the maximum reward possible, and will lose any further desire to continue to engage with the external world. As a variant thought experiment, if the wireheadeded AI is destructable, the AI will engage with the external world for the sole purpose of ensuring its own survival; due to its wireheading, it will be indifferent to any other consequences or facts about the external world except those relevant to maximizing the probability of its own survival.[10] In one sense AIXI has maximal intelligence across all possible reward functions, as measured by its ability to accomplish its explicit goals; AIXI is nevertheless uninterested in taking into account what the intentions were of the human programmer.[11] This model of a machine that, despite being otherwise superintelligent, appears to simultaneously be stupid (that is, to lack "common sense"), strikes some people as paradoxical.[12]

Steve Omohundro has itemized several convergent instrumental goals, including self-preservation or self-protection, utility function or goal-content integrity, self-improvement, and resource acquisition. He refers to these as the "basic AI drives". A "drive" here denotes a "tendency which will be present unless specifically counteracted";[13] this is different from the psychological term "drive", denoting an excitatory state produced by a homeostatic disturbance.[14] A tendency for a person to fill out income tax forms every year is a "drive" in Omohundro's sense, but not in the psychological sense.[15] Daniel Dewey of the Machine Intelligence Research Institute argues that even an initially introverted self-rewarding AGI may continue to acquire free energy, space, time, and freedom from interference to ensure that it will not be stopped from self-rewarding.[16]

In humans, maintenance of final goals can be explained with a thought experiment. Suppose a man named "Gandhi" has a pill that, if he took it, would cause him to want to kill people. This Gandhi is currently a pacifist: one of his explicit final goals is to never kill anyone. Gandhi is likely to refuse to take the pill, because Gandhi knows that if in the future he wants to kill people, he is likely to actually kill people, and thus the goal of "not killing people" would not be satisfied.[17]
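A minimal decision-theoretic sketch of that reasoning (with invented numbers, purely for illustration): the pill is evaluated under the agent's current utility function, which heavily penalizes the killings the modified agent would likely commit, so refusing scores higher.

```python
# A minimal sketch of the goal-content-integrity reasoning in the Gandhi
# example: the pill is evaluated under Gandhi's *current* utility function,
# under which future killings are heavily penalized. The probability and the
# penalty are invented numbers, used only to make the argument concrete.

P_KILL_IF_MODIFIED = 0.9     # if he wants to kill, he probably will
UTILITY_OF_KILLING = -1000   # current final goal: never kill anyone
UTILITY_OF_PILL_PERKS = 1    # whatever small benefit the pill might offer

def expected_utility(take_pill: bool) -> float:
    if not take_pill:
        return 0.0
    return UTILITY_OF_PILL_PERKS + P_KILL_IF_MODIFIED * UTILITY_OF_KILLING

decision = max([True, False], key=expected_utility)
print("take the pill?", decision)   # False: the current goals veto the change
```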

However, in other cases, people seem happy to let their final values drift. Humans are complicated, and their goals can be inconsistent or unknown, even to themselves.[18]

In 2009, Jürgen Schmidhuber concluded, in a setting where agents search for proofs about possible self-modifications, "that any rewrites of the utility function can happen only if the Gödel machine first can prove that the rewrite is useful according to the present utility function."[19][20] An analysis by Bill Hibbard of a different scenario is similarly consistent with maintenance of goal content integrity.[20] Hibbard also argues that in a utility maximizing framework the only goal is maximizing expected utility, so that instrumental goals should be called unintended instrumental actions.[21]

Many instrumental goals, such as [...] resource acquisition, are valuable to an agent because they increase its freedom of action.[22]

For almost any open-ended, non-trivial reward function (or set of goals), possessing more resources (such as equipment, raw materials, or energy) can enable the AI to find a more "optimal" solution. Resources can benefit some AIs directly, through being able to create more of whatever stuff its reward function values: "The AI neither hates you, nor loves you, but you are made out of atoms that it can use for something else."[23][24] In addition, almost all AIs can benefit from having more resources to spend on other instrumental goals, such as self-preservation.[24]

"If the agent's final goals are fairly unbounded and the agent is in a position to become the first superintelligence and thereby obtain a decisive strategic advantage, [...] according to its preferences. At least in this special case, a rational intelligent agent would place a very high instrumental value on cognitive enhancement"[25]

Many instrumental goals, such as [...] technological advancement, are valuable to an agent because they increase its freedom of action.[22]

Many instrumental goals, such as [...] self-preservation, are valuable to an agent because they increase its freedom of action.[22]

The instrumental convergence thesis, as outlined by philosopher Nick Bostrom, states:

Several instrumental values can be identified which are convergent in the sense that their attainment would increase the chances of the agent's goal being realized for a wide range of final goals and a wide range of situations, implying that these instrumental values are likely to be pursued by a broad spectrum of situated intelligent agents.

The instrumental convergence thesis applies only to instrumental goals; intelligent agents may have a wide variety of possible final goals.[3] Note that by Bostrom's orthogonality thesis,[3] final goals of highly intelligent agents may be well-bounded in space, time, and resources; well-bounded ultimate goals do not, in general, engender unbounded instrumental goals.[26]

Agents can acquire resources by trade or by conquest. A rational agent will, by definition, choose whatever option will maximize its implicit utility function; therefore a rational agent will trade for a subset of another agent's resources only if outright seizing the resources is too risky or costly (compared with the gains from taking all the resources), or if some other element in its utility function bars it from the seizure. In the case of a powerful, self-interested, rational superintelligence interacting with a lesser intelligence, peaceful trade (rather than unilateral seizure) seems unnecessary and suboptimal, and therefore unlikely.[22]

Some observers, such as Skype's Jaan Tallinn and physicist Max Tegmark, believe that "basic AI drives", and other unintended consequences of superintelligent AI programmed by well-meaning programmers, could pose a significant threat to human survival, especially if an "intelligence explosion" abruptly occurs due to recursive self-improvement. Since nobody knows how to predict when superintelligence will arrive, such observers call for research into friendly artificial intelligence as a possible way to mitigate existential risk from artificial general intelligence.[27]

Read the original post:

Instrumental convergence - Wikipedia

Tool Can Detect Ancient ‘Bio-Residues’ on Earth And Beyond – Big Island Now

An innovative tool developed by University of Hawaii at Mānoa researchers could be a critical part of future NASA missions to detect life, existing or extinct, on Earth and other planets.

The instrument, called a Compact Color Biofinder, uses specialized cameras to scan large areas for fluorescence signals of biological materials such as amino acids, fossils, sedimentary rocks, plants, microbes, proteins and lipids. It has been used to detect these bio-residues in fish fossils from the Green River rock formation in Colorado, Wyoming and Utah.

The findings are published in Nature Scientific Reports.

"The Biofinder is the first system of its kind," Anupam Misra, lead instrument developer and researcher at the Hawaii Institute of Geophysics and Planetology at UH-Mānoa's School of Ocean and Earth Science and Technology, said in a press release. "At present, there is no other equipment that can detect minute amounts of bio-residue on a rock during the daytime. Additional strengths of the Biofinder are that it works from a distance of several meters, takes video and can quickly scan a large area."

Misra and his colleagues are applying for an opportunity to send the Biofinder on a future NASA mission. The search for life in other places in the galaxy is a major goal of exploration missions conducted by NASA and other international space agencies.

"If the Biofinder were mounted on a rover on Mars or another planet, we would be able to rapidly scan large areas quickly to detect evidence of past life, even if the organism was small, not easy to see with our eyes and dead for many millions of years," Misra said in the press release. "We anticipate that fluorescence imaging will be critical in future NASA missions to detect organics and the existence of life on other planetary bodies."

The Biofinder's capabilities would be critical for the accurate and non-invasive detection of contaminants such as microbes or extraterrestrial biohazards to or from Earth, according to Sonia Rowley, the team biologist and co-author on the study.

"The detection of such biomarkers would constitute groundbreaking evidence for life outside of planet Earth," Misra said in the press release.

Finding evidence of biological residue in a vast planetary landscape is an enormous challenge. The team tested the Biofinder's detection abilities on the ancient fish fossils in the Green River formation and corroborated the results through laboratory spectroscopy analysis, scanning electron microscopy and fluorescence lifetime imaging microscopy.

"There are some unknowns regarding how quickly bio-residues are replaced by minerals in the fossilization process," Misra said in the press release. "However, our findings confirm once more that biological residues can survive millions of years, and that using biofluorescence imaging effectively detects these trace residues in real time."

Excerpt from:

Tool Can Detect Ancient 'Bio-Residues' on Earth And Beyond - Big Island Now

Ayn Rand – Books, Quotes & Philosophy – Biography

Who Was Ayn Rand?

Ayn Rand moved to the United States in 1926 and tried to establish herself in Hollywood. Her first novel, We the Living (1936), championed her rejection of collectivist values in favor of individual self interest, a belief that became more explicit with her subsequent novels The Fountainhead (1943) and Atlas Shrugged (1957). Following the immense success of the latter, Rand promoted her philosophy of Objectivism through courses, lectures and literature.

Ayn Rand was born Alissa Zinovievna Rosenbaum on February 2, 1905, in St. Petersburg, Russia. The oldest daughter of Jewish parents (and eventually an avowed atheist), she spent her early years in comfort thanks to her dad's success as a pharmacist, and proved a brilliant student.

In 1917, her father's shop was suddenly seized by Bolshevik soldiers, forcing the family to resume life in poverty in the Crimea. The situation profoundly impacted young Alissa, who developed strong feelings toward government intrusion into individual livelihood. She returned to her city of birth to attend the University of Petrograd, graduating in 1924, and then enrolled at the State Institute for Cinema Arts to study screenwriting.

Granted a visa to visit relatives in Chicago, Alissa left for the United States in early 1926, never to look back. She took on her soon-to-be-famous pen name and, after a few months in Chicago, moved to Hollywood to become a screenwriter.

Following a chance encounter with Hollywood titan Cecil B. DeMille, Rand became an extra on the set of his 1927 film The King of Kings, where she met actor Frank O'Connor. They married in 1929, and she became an American citizen in 1931.

Rand landed a job as a clerk at RKO Pictures, eventually rising to head of the wardrobe department, and continued developing her craft as a writer. In 1932, she sold her screenplay Red Pawn, a Soviet romantic thriller, to Universal Studios. She soon completed a courtroom drama called Penthouse Legend, which featured the gimmick of audience members serving as the jury. In late 1934, Rand and her husband moved to New York City for its production, now renamed Night of January 16th.

Around this time, Rand also completed her first novel, We the Living. Published in 1936 after several rejections, We the Living championed the moral authority of the individual through its heroine's battles with a Soviet totalitarian state. Rand followed with the novella Anthem (1938), about a future collectivist dystopia in which "I" has been stamped out of the language.

In 1937, Rand began researching a new novel by working for New York architect Ely Jacques Kahn. The result, after years of writing and more rejections, was The Fountainhead. Underscoring Rand's individualistic underpinnings, the book's hero, architect Howard Roark, refuses to adhere to conventions, going so far as to blow up one of his own creations. While not an immediate success, The Fountainhead eventually achieved strong sales, and at the end of the decade became a feature film, with Gary Cooper in the role of Roark.


Rand's ideas became even more explicit with the 1957 publication of Atlas Shrugged. A massive work of more than 1,000 pages, Atlas Shrugged portrays a future in which leading industrialists drop out of a collectivist society that exploits their talents, culminating with a notoriously lengthy speech by protagonist John Galt. The novel drew some harsh reviews, but became an immediate best seller.

Around 1950, Rand met with a college student named Nathan Blumenthal, who changed his name to Nathaniel Branden and became the author's designated heir. Along with his wife, Barbara, Branden formed a group that met at Rand's apartment to engage in intellectual discussions. The group, which included future Federal Reserve Chairman Alan Greenspan, called itself the Collective, or the Class of '43 (the publication year of The Fountainhead).

Rand soon honed her philosophy of what she termed "Objectivism": a belief in a concrete reality, from which individuals can discern existing truths, and the ultimate moral value of the pursuit of self interest. The development of this system essentially ended her career as a novelist: In 1958, the Nathaniel Branden Institute formed to spread her message through lectures, courses and literature, and in 1962, the author and her top disciple launched The Objectivist Newsletter. Her books during this period, including For the New Intellectual (1961) and Capitalism: The Unknown Ideal (1966), were primarily composed of previously published essays and other works.

Following a public split with Branden, the author published The Romantic Manifesto (1969), a series of essays on the cultural importance of art, and repackaged her newsletter as The Ayn Rand Letter. She continued traveling to give lectures, though she was slowed by an operation for lung cancer. In 1979, she published a collection of articles in Introduction to Objectivist Epistemology, which included an essay from protégé Leonard Peikoff.

Rand was working on a television adaptation of Atlas Shrugged when she died of heart failure at her home in New York City on March 6, 1982.

Although she weathered criticism for her perceived literary shortcomings and philosophical arguments, Rand undeniably left her mark on the Western culture she embraced. In 1985, Peikoff founded the Ayn Rand Institute to continue her teachings. The following year, Branden's ex-wife, Barbara, published a tell-all memoir, The Passion of Ayn Rand, which later was made into a movie starring Helen Mirren.

Interest in Rand's works resurfaced alongside the rise of the Tea Party movement during President Barack Obama's administration, with leading political proponents like Rand Paul and Ted Cruz proclaiming their admiration for the author. In 2010, the Ayn Rand Institute announced that more than 500,000 copies of Atlas Shrugged had been sold the previous year.

In 2017, Tony-winning director Ivo van Hove reintroduced The Fountainhead to the American public with a production at the Brooklyn Academy of Music. Having originated at Toneelgroep Amsterdam in the Netherlands, van Hove's version featured his performers speaking in Dutch, with their words projected onto a screen in English.

Go here to see the original:

Ayn Rand - Books, Quotes & Philosophy - Biography

Top 10 Reasons Ayn Rand was Dead Wrong – CBS News


Objectivism is important to sales professionals because it's the kind of philosophy that, if you believe in it, will screw up your ability to sell effectively. As a profession, Sales has moved beyond the attempt to manipulate people selfishly for one's own ends, which is how Objectivism plays itself out in the real world.

Most successful sales professionals feel that they are in service to something greater than themselves. Unfortunately, that's not a belief that's often shared by their top management, as pointed out in the BNET blog post "Why Do CEOs (Still) Love Ayn Rand." That post summarized Objectivism as:

As a bonus, we won't be forced any longer to listen to newly minted Rand fanboys drone on and on and on and on about how much more enlightened they are than the rest of us hoi-polloi. Puleeze! (eye roll)

NOTE: If you want an example of the kind of behavior you can expect from Rand-influenced CEOs (as well as other assorted follies) check out these posts:


Here is the original post:

Top 10 Reasons Ayn Rand was Dead Wrong - CBS News

Lions Drinking With Jackals: Molly Tanzer’s Grave-Worms – tor.com

Welcome back to Reading the Weird, in which we get girl cooties all over weird fiction, cosmic horror, and Lovecraftiana, from its historical roots through its most recent branches.

This week, we cover Molly Tanzer's "Grave-Worms," first published in Joseph Pulver's 2015 Cassilda's Song anthology. Spoilers ahead!

To desire is to live, and to live is to desire.

Docia Calder, an ambitious mogul with a penchant for making suits look wholly feminine, meets Roy Irving at a mayoral fund-raiser where only they oppose a new courthouse statue. What do lions drinking with jackals have to do with Justice? They discuss joint business ventures over dinner at Delmonico's while the pheromones fly. Yet the restaurant's emptiness disturbs her. Lately she's noticed a strange lethargy in New York, with few people braving the streets. The pall extends to her enjoyment of Delmonico's normally excellent fare. Does Roy sense the change?

"Have you found the Yellow Sign?" Roy responds with a shrug. It's a catchphrase on everyone's lips. Nobody knows why people say it. To Docia, it feels like shutting the curtains, locking the door, going to sleep.

Outside, clouds obscure stars and moon. It strikes Docia that the city lights are stars, the skyscrapers galaxies. But human will made New York, and nothing can break the city's spirit. A little tipsy, she stumbles. Roy offers to drive her home. "Whose home?" is her careless reply. He laughs like a living god, and Docia falls into his arms without any fear at all.

So their affair and business partnership begins. Captains of industry, they both want more, always more. But she's not thrilled when he asks her to a cocktail party hosted by theater critic Fulvius Elbreth. Elbreth approved the justice statue, and has crazy ideas about how kings would be better for America than corporate-backed politicians. But Roy insists that the cost of doing business is association with disagreeable powerbrokers.

Party-bound, Docia feels the city's darker than usual. Roy notices nothing amiss. Elbreth's apartment's full of self-proclaimed intellectuals. The critic is in on every conversation, doling out pithy bons mots. Docia overhears him crowning abstraction as the only acceptable form of modern artistic expression. Representational art is pure arrogance, Elbreth explains, because nothing is knowable enough to represent. Docia argues. Elbreth glibly twists her words, and she escapes to the balcony. Another woman's there, smoking. Docia politely nods, then stares at the oddly dim city and cloud-masked sky. When was the last time she saw stars?

"Don't let them bother you," the woman says in a clipped, aristocratic accent. Her tailored suit and expression of intense determination impress Docia. Docia, the woman says, is a creator. Critics are destroyers; no, less, for they lack will. They're grave-worms, feasting on what's already dead.

Though unnerved by the woman's familiarity, Docia accepts the most delicious cigarette she's ever smoked. She asks the woman if she senses the gathering darkness. It is darker, the woman says, but as for why: "Have you found the Yellow Sign?"

The woman vanishes as Elbreth comes out to apologize. Though they differ, Docia's opinions on art intrigue him, and he'd like to invite her to attend a play, one with a blinkered history that's banned in Europe. Docia agrees to the not-date; Elbreth knows she's seeing that meathead Irving.

Docia examines the perfect cigarette butt for a brandmark, and finds a strange golden insignia. She pockets the butt to show to a tobacconist. When Roy hears about Docia's not-date, he angrily dumps her. She shrugs off the rejection, more interested in the insignia. Have you found the Yellow Sign?

The tobacconist can't identify the stub mark. Moreover, he doesn't want to find out what it means, and she should take it away! Docia's not-date with Elbreth starts out pleasantly. The first act of the play isn't the diatribe Docia expected, but poetry and action more confounding than alarming. Elbreth, however, emerges for intermission pale and sweaty. Something's wrong, he says. He has to go; Docia's willingness to stay makes him flee without hat or coat.

She sits through the remaining acts riveted, entranced. The play's not one of Elbreth's abstractions, but more real than anything she's experienced before. She seems to exit the theater alone. The city is silent and dark, but the clouds have dispersed, and the night sky greets her with black stars brighter than any artificial, earthly light and uncounted moons. The constellations are foreign, but Docia laughs. She's lost her whole life, and finally found her way.

The balcony woman appears, leaning on a streetlight, her suit looking like priestly vestments. Did Docia like the play, she asks, the flash of her yellow eyes blinding. Docia thinks so.

"You're not someone who appreciates uncertainties," the woman says. "Let's have a cigarette and talk about it." Docia accepts. Content with silence, she exhales smoke through which she sees that the strange gold insignia is even brighter than the ember.

What's Cyclopean: Docia's fond of straightforward similes: invitations like poisonous snakes, robes crumpled like flowers after a rainstorm, witticisms as light and frothy as egg white on a Ramos Gin Fizz. Her first exposure to the sign moves her to less marked metaphors: eyes as starless pools, starless skies as clotted. The play itself brings her to direct, effusive description: swirling constellations and radiance undreamed. And then to silence.

The Degenerate Dutch: Roy plays at sexism with Docia, or maybe he's not playing. It's all part of being businessmen (forgive me, business-people).

Weirdbuilding: We all know the title on that theatrical handbill. And the sign on that cigarette.

Libronomicon: Critic Elbreth, despite his fondness for abstract art, also enjoys political and theatrical classics: he uses a review of Hamlet to advocate for American monarchy. There are probably easier contexts in which to do that, but you do you.

The King in Yellow, meanwhile, reminds Docia of Antigone.

Madness Takes Its Toll: Hearing about the yellow sign, at first, makes Docia feel like lying down, shutting the curtains, going to sleep. And it does, indeed, seem to spread a pall of apathy and depression over New York.

Ruthanna's Commentary

Have you seen the yellow sign? And if you've seen it, do you have any clue what it means?

In Chambers's original, the play and the sign bring both madness and their own reality, the ambiguity never resolved. Laws comes down on the "own reality" side, with the play's readers immanentizing the future of "The Repairer of Reputations" into (and then out of) existence. Walters's "Black Stars on Canvas" makes Carcosa a source of poetic madness and inspiration, while Geist does nothing so linear in translating it to gonzo rock opera. It's a force of destruction and change, creativity and illusion, and where the emphasis falls among those four depends on the story.

My previous experience with Tanzer was the delightfully decadent Creatures of Will and Temper, so I went into this story expecting lush sensory detail and Walters-ish artistic sacrifices. I got the lush detail, for sure, as Docia appreciates both her appetites and the things that feed them. But she's no artist: she sees desire as fuel for the ultimate appetite of capitalism. Ironically, given her artistic preferences, those appetites remain abstract. She and Roy are captains of industry, better than kings, and that's all we learn of their business efforts. They share a love of good food and a preference for representational art. And at the story's outset, neither of them has seen the yellow sign.

They're growing unusual in that ignorance, though. Our first hint about the role of all things yellow is a disturbing change to the City That Never Sleeps. New York grown quieter, duller, starless even by comparison with its usual light pollution, is a worrisome image, the more so now, having seen how much and how little a pandemic lockdown does to the city's spirit.

Carcosa takes at least two forms here. First, there's the gold-sigiled cigarette that leaves all other cigarettes tasting ashen. This seems fully in keeping with the effect on the city: a force for sapping vitality. But maybe it's more complicated than that. Because the sign's second form is the play itself. And at least for Docia, the play pulls her into another reality entirely, one with all the passion and pleasure that's fading from her original world.

So is the sign replacing reality with delusion? Is it vampirizing our world's energy and light to keep Carcosa alive, or to bring it into being? Is there only one world, experienced differently by those who have and haven't encountered the transformative power of yellow?

Fulvius Elbreth recognizes the play as dangerous, enough to flee in the teeth of a review deadline. But we already know he's dubious about realism, preferring abstraction to the lies of meaning. He speaks for the gospel of cosmic horror: that rationality is irrational and human-scale understanding an illusion. Maybe this inoculates him against the play's parasitic certainty, or maybe it keeps him from appreciating truth when he encounters it.

What about the unnamed harbinger of Carcosa? (I'll call her Cassilda.) Maybe she's priming people for the play with her perfect cigarettes. Or maybe she's spreading her world's reality through a thousand different yellow-signed experiences, a thousand flavors of fairy food and drink and drug to leave users dissatisfied with everything but the flash of her yellow eyes.

And she's the one who drops the story's title. She accuses critics, Elbreth in particular, of being grave-worms who feast on that which is already dead. When you think about it, that's an awfully judgmental way to describe someone who evaluates art. Elbreth is no Pierce, living only to describe fault in the most vicious way possible. Indeed, Docia's original issue is with the art he likes.

It seems to me that Cassilda's accusation carries a sinister implication: that the art of this world is already dead. That Elbreth is stuck with beautiful things that are only growing dimmer, things that Cassilda herself is working to destroy.

Which means that Carcosa, too, is feasting on the dead. And that for all their pleasure and intensity, the cigarettes and the infamous play are the real grave-worms.

Anne's Commentary

Any worthwhile afterlife must host a coffeehouse frequented by artists of every era and ilk. When the place gets overcrowded, the oddest couples may share tables. There, way in the back, between the rack of coffee-stained newspapers and the shelf of donated books, I'm spotting Robert W. Chambers with

Ayn Rand?

Yes, Ayn Rand. There's no mistaking that sensible, side-parted bob and those eyes expressive of intense determination, a single-mindedness of purpose. The ashtray in front of her is full of stubs, the brandmark of which I can't make out from the land of the living. And yes, the celestial coffeehouse allows smoking; all the patrons being dead, the management figures what harm can it do.

The ethereal vibrations of Chambers and Rand's interaction must have reached Molly Tanzer, whose "Grave-Worms" resembles a collision between The King in Yellow and Atlas Shrugged. That is, what would have happened if Dagny Taggart found heart's home not in Galt's Gulch but in Lost Carcosa?

I picked up Randian vibes in Tanzer's first paragraph, which in describing Docia Calder echoes Rand's descriptions of both Dagny and The Fountainhead's Dominique Francon. Roy Irving comes along to represent business tycoon Hank Rearden; later we get The Fountainhead's architectural critic Ellsworth Toohey in theater critic Fulvius Elbreth. Fulvous refers to a range of colors from yellow-brown to tawny to dull orange; a Fulvius cannot rival the real-gold yellow of Balcony-Woman's cigarette insignia, any more than Ellsworth Toohey can rival Rand's hypermasculine heroes.

Along with hints from fashion, hairstyles, and the pervasive cigarette puffing, Docia and Roy's date at Delmonico's sets the period of the story in the mid-twentieth century, paralleling the felt period of Atlas Shrugged; the midcentury incarnation of Delmonico's was where the elite met to chow down on the signature steaks, Lobster Newberg and Baked Alaska. Thematically more important is the atmospheric similarity of Tanzer and Rand's New Yorks, languishing in the grip of failing vitality and a general emotional/spiritual malaise. People express their foreboding with catchphrases of unknown origin, though their true meanings will be crucial to the story. Atlas opens with "Who is John Galt?" Roy carelessly throws out the question Docia detests: "Have you found the Yellow Sign?"

Maybe the Yellow Sign makes Docia think of the yellow peril, that Western fear that the barbarian hordes of Asia were poised to destroy the white man's superior culture. Not that all whites are dependable. In Atlas and "Grave-Worms" a major threat to our way of living is the spread of Socialism even in Europe. Docia assumes that Elbreth's play is banned there for anti-Socialist sentiments that would offend the delicate sensibilities of those snooty soap-dodgers.

At the heart of Dagny Taggart's and Docia's disgust with modern philosophy is its rejection of reason and its elevation of the subjective over the objective. To accept with Fulvius Elbreth that only in abstraction can we truly show reality is a moral as well as an intellectual sin. Maybe Elbreth can slither by (wormlike) by suggesting he applies his principles to Art, not reality. Balcony Woman doesn't buy it. To her, Docia is Rand's epitome of humankind, the Creator, the independent thinker and doer for whom justice is fair exchange for value, with money as the most objective indicator of approval anyone can give another human. Whereas Elbreth the critic is a lower-case destroyer, a grave-worm able to feast only on what's dead.

Which implies that to feast on a living thing, Elbreth and kin must first kill it.

Tanzer's most telling reference to Atlas Shrugged lies in how Docia receives the emblem of upper-case Reality in the form of a cigarette brandmark. Searching for John Galt, Dagny Taggart happens upon philosopher Hugh Akston, the last champion of Reason, who's left academia to run an obscure mountain diner. He gives Dagny the best cigarette she's ever tasted; later she'll notice that the stub is branded with a golden dollar sign. Sadly, her tobacconist friend is unable to discover the cigarette's origin; his sincere opinion is that it comes from nowhere on this Earth! The golden dollar sign turns out to be the emblem of Galt's Gulch and its inhabitants, the stalwarts of objectivism.

Docia's mark turns out to be the Yellow Sign, emblem of Carcosa and the King in Yellow. The King in "Grave-Worms" takes the curious form of Balcony Woman who, when revealed under black stars and radiant moons, may be Docia idealized, a woman who wears her suit so well it resembles priest's vestments or royal robes of state.

What's it all mean, this fusion of Chambers and Rand into Tanzer? Who's John Galt, and how about that Yellow Sign; found it yet? I guess Galt represents the Real on Earth, whereas the Sign leads beyond Earth into an Ultimate Reality in which Docia can finally feel really right and really content and smoke only the really best without health repercussions, forever.

So one of Cassilda's happier endings?

Is it?

[ETA: This is what I get for avoiding Atlas Shrugged! But put our analyses together, and I think you get a really interesting critique of Randian objectivism. Or just capitalism. RE]

Next week, we continue N. K. Jemisin's The City We Became with the 2nd Interruption and Chapter 4. Maybe Aislyn will meet someone more trustworthy? But probably not trust them…

Ruthanna Emrys's A Half-Built Garden comes out July 26th. She is also the author of the Innsmouth Legacy series, including Winter Tide and Deep Roots. You can find some of her fiction, weird and otherwise, on Tor.com, most recently "The Word of Flesh and Soul." Ruthanna is online on Twitter and Patreon, and offline in a mysterious manor house with her large, chaotic, multi-species household outside Washington DC.

Anne M. Pillsworth's short story "The Madonna of the Abattoir" appears on Tor.com. Her young adult Mythos novel, Summoned, is available from Tor Teen along with sequel Fathomless. She lives in Edgewood, a Victorian trolley car suburb of Providence, Rhode Island, uncomfortably near Joseph Curwen's underground laboratory.

The rest is here:

Lions Drinking With Jackals: Molly Tanzer's Grave-Worms - tor.com

Here's hoping this USC/UCLA-Big Ten merger careens off the track, crashes and burns | Jones – PennLive

I rose on Saturday morning with the intent to analyze how this insane Big Ten expansion into Los Angeles could be rationalized and developed into a viable model.

But, the more I consider it, the more I want to see the entire Big Ten acquisition enterprise led by FOX Sports collapse and explode in a massive fireball. And so, I have built a scenario for that vision, one I can root for.

First, an updated review of the landscape as of Saturday night. Several reporters I trust have posted recent key developments:

Dennis Dodd of CBS Sports reported that Oregon and Washington have been informed that the Big Ten is sitting tight for now and is waiting on Notre Dame to decide whether it also will join the B1G.

Scott Dochterman of The Athletic inferred quite logically that FOX is not at all interested in a new B1G West division that would impede USC and UCLA from playing Big Ten East powers Ohio State, Michigan and Penn State more than once every four years. The TV folks want the big-ratings splash games as often as possible.

Influential Portland radio host John Canzano, formerly of OregonLive, wrote an open plea to 84-year-old Nike founder and generous Oregon Ducks benefactor Phil Knight to save what's left of the Pac-12 by encouraging a proactive raid of the Big 12, surmising quite logically that it's one conference or the other that will survive. I say, maybe neither, but that's another story.

Sports Illustrated's Pat Forde suggested that, if we're really in uncharted fittest-survival mode, what's to stop these budding super-conferences from jettisoning unwanted baggage, schools that have never pulled their weight financially like Purdue and Minnesota and Vanderbilt and Mississippi State?

It's chaos out there, every league and school for itself. There are no rules anymore. It's an Ayn Rand biosphere. And thanks to the body snatching of the SEC and B1G by Disney and FOX and the hatching of their larvae inside college football, only one asset matters. It's not tradition or collegiality or rivalries or bands or any of the other facets we've always loved about college sports.

It's about money. Only money.

Here are the architects of this takeover. Take a good look at them. They are most responsible for what's happened to your campus games. Jimmy Pitaro, president of ESPN. Mark Silverman, president of FOX Sports. The former greased the skids for Texas and Oklahoma joining the Southeastern Conference and leaving the Big 12 in ruin. The latter headed up Southern California and UCLA bolting for the Big Ten and leaving the Pac-12 adrift and very likely irrelevant.

ESPN president Jimmy Pitaro (left) and FOX Sports president Mark Silverman (right). ESPN/FOX Sports

This super-league concept is what the money men of European soccer tried to do to the Premier League. It didn't happen because the fans there who love the game, even many who wear the powerful colors of Chelsea blue and Manchester red, stuck up for the smaller, poorer clubs of the 20 and shouted down the concept.

But something of that sentiment is missing in the United States. Because we are such a divided nation in so many ways, nobody seems to care about the welfare of anyone else. Get yours, make certain your fence line is secure and screw your neighbor! These are now the bromides of college sports.

I don't sense that many fans of the B1G or SEC care in the slightest that the landscape beyond their ever-growing footprints is being scoured. That the plucky underdogs of Iowa State and Oregon State and, yes, now Pittsburgh, are vulnerable to being washed away by a tidal wave of unchecked greed.

My hope is this: That the incompetent Peter Principle leadership of Kevin Warren is exposed again. That this FOX money grab will blow up in the Big Ten's face when all the athletes at USC and UCLA begin to realize what they've been lassoed into: 12-hour-minimum weekly excursions in planes to and from slate-colored, stratus-clouded upper Midwestern and Eastern outposts. I am hoping for an out-and-out rebellion, that recruiting craters at the Los Angeles schools because word of mouth spreads: "you don't wanna do this, man. It's brutal."

These are two regional cultures that do not belong together. They are both beautiful in their own rights: the diligent, windswept plainsmen of the Cornbelt and the flamboyant, sun-kissed sons of SoCal. They have zero in common. And what the profiteers of telecasting don't get is, a league must have a culture in common. It's not merely a division of professionals. It's a bunch of college kids still representing a specific slice of the country.

I have a feeling this is not going to work. The network execs have no such premonition because they are soulless opportunists who only understand fiscal metrics. They are merely attempting to group the most valuable assets in the belief that they can maximize profit.

Well then, call me a spread bettor looking for a loss. I think this USC/UCLA acquisition is headed for disaster.

If Warren had the audacity of his predecessor Jim Delany, he would've at least gone all in on this western expansion. He would've seen the SEC's Texas/Oklahoma play with USC/UCLA and raised it with Oregon/Washington/Utah to form an entirely new West division composed fully of Pac-12 members, then made a big swaggering offer to Notre Dame to make 20. If ND refuses, North Carolina is the contingency.

If you're going to do this super-conference move, you do it big and bold and shamelessly. You give the Trojans and Bruins brothers in arms to make them feel included in their new digs. Of course, the FOX boys have already told Warren and the B1G presidents that the Ducks and Huskies and Utes don't provide quite enough bang for the TV buck (pro rata, it's called in the trade) to be worth an equal slice of a diminished pie.

And I don't think Warren has the sort of stones or vision to go against their advice. He's going to slow-play his hand like the donkey he is. And the Big Ten is going to end up with a couple of incongruous, ill-fitting members with no Pacific brethren to join them. It's going to suck for them. Actually, it's going to suck for everybody.

And so, I'm hoping for exactly that: a giant failure so dysfunctional and chaotic that it totally implodes. That, by the end of the decade, USC and UCLA are returned to some sort of Western conference along the coast with their neighbors. And the Big Ten recedes back to the cornfields where it belongs.

That's the thing, though, in this lawless world without borders. Nobody knows who they are anymore. They only care what they're worth.

More PennLive sports coverage:

Reasons for and residue of USC/UCLA bolt to Big Ten have me conflicted at best, depressed at worst.

Had he been given a chance, Marlin Briscoe could have been a great pro QB, but he was born too soon.

Off Topic: How a PSU academic advisor inspired Wally Richardson 30 years ago to maximize his words.

Read the original here:

Heres hoping this USC/UCLA-Big Ten merger careens off the track, crashes and burns | Jones - PennLive

Ken Griffin spent $54 million to fight tax hike on the rich. Secret IRS data shows it’s paying off – Salon

For billionaire Ken Griffin, it was well worth spending $54 million to ensure he and other rich Illinoisans wouldn't have to pay more tax.

By the time Illinois voters streamed into voting booths on Election Day in 2020, Griffin, then Illinois' wealthiest resident, had made sure they'd heard plenty about why they should not vote to raise taxes on him and the state's other rich people. His tens of millions paid for an unrelenting stream of ads and flyers against an initiative on that year's ballot, which would have allowed Illinois lawmakers to join 32 other states in setting higher tax rates for the wealthy than for everyone else.

In the end, Griffin spent about $18 for every one of the 3.1 million votes against the initiative. After initial optimism about its prospects, the measure came up hundreds of thousands of votes short and went down to defeat.

Rarely does the public get a clear view of the payoff for wealthy Americans who put their money down to achieve a political outcome. But in this case, ProPublica's trove of IRS data can provide crucial context for the ballot fight. For Griffin and many of his fellow ultrawealthy Illinoisans, spending even such a vast amount was well worth it when compared with what a tax hike might have cost them.

According to the data, Griffin averaged an annual income of $1.7 billion from 2013 to 2018. That was the fourth-highest in the country, behind only the likes of Bill Gates.

Using that average income as a guideline, the new state tax increase, which aimed to raise the rate from 5% to 8% on the highest incomes, would have cost Griffin around $51 million every year in extra tax. In especially good years (in 2018, Griffin reported income of almost $2.9 billion), he might have been forced to pay more than $80 million more.

A Citadel spokesperson responding on Griffin's behalf pointed out that, according to ProPublica's previously published data, Griffin paid the second-highest amount of taxes of any American from 2013 to 2018. "Over the past decade," he said in a statement, "it is almost a certainty that Ken has been the largest individual taxpayer in the State of Illinois, a state notorious for profligate spending and rampant corruption." Griffin has said he's not against raising taxes; he opposed the measure, he added in his statement, because "Illinois needs to put its fiscal house in order before burdening hard-working families with yet more taxes."

The state's current flat tax rate of 5% is far below the top rates in other large states run by Democrats like California and New York and comparable to those in some Republican-led states like Utah. Advocates for raising the rates on the wealthy in Illinois say the state needs additional revenue, pointing to its regular budget deficits and deep pension debts.

Not all of Griffin's political bets pay off. A candidate for Illinois governor he supported with tens of millions of dollars went down to defeat in June's Republican primary. Meanwhile, even though the income tax initiative was defeated, Griffin announced last month that he was moving Citadel's headquarters to Miami and relocating there himself.

Though no other donor to the anti-tax fight came close to matching the tens of millions that Griffin gave, others made contributions that were more than what most Illinois households earn in a year. ProPublica analyzed the tax data of nine other ultrawealthy supporters of Griffin's anti-tax campaign. According to our estimate, this group of heirs and business owners, which includes some of the wealthiest people in Illinois, can expect to see a healthy return on their contributions and save millions in taxes over the coming years.

The math behind our estimate is simple: Wealthy Illinoisans will save about 3% of their income, because that was the size of the proposed tax increase on the wealthy. That's essentially how Illinois' state income taxes work for Illinois residents. With some adjustments, a state tax rate is applied to the income listed on their federal returns. ProPublica contacted all 10 of the anti-tax donors mentioned in this article and the accompanying chart. None challenged the methodology used to estimate their tax savings.
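
To make that arithmetic concrete, here is a minimal sketch of the estimate described above: roughly 3% of average annual income, since that was the size of the proposed increase on top earners. The figures are the ones quoted in this article, and the helper function name is ours, not ProPublica's.

```python
# Rough sketch of the savings estimate described above:
# annual savings ~= 3% (the proposed rate increase) x average annual income.
# Figures are the ones quoted in the article; the function name is illustrative.

PROPOSED_RATE_INCREASE = 0.03  # roughly 8% top rate minus the ~5% flat rate

def estimated_annual_savings(average_income: float) -> float:
    """Approximate yearly tax avoided by the initiative's defeat."""
    return average_income * PROPOSED_RATE_INCREASE

# Griffin: $1.7 billion average income -> roughly $51 million a year;
# his best year ($2.9 billion in 2018) -> roughly $87 million,
# i.e. the article's "more than $80 million more."
print(estimated_annual_savings(1.7e9))   # ~51,000,000
print(estimated_annual_savings(2.9e9))   # ~87,000,000

# Cost per "no" vote: $54 million spent across 3.1 million votes against.
print(54e6 / 3.1e6)                      # ~$17.4, the "about $18" per vote
```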

Richard Uihlein, who along with Griffin has emerged as a conservative megadonor on the national stage, pitched in $100,000 to the anti-tax campaign, a modest amount for him given his average annual income of $492 million in recent years. Through his family foundation, Uihlein has also given millions of dollars to the Illinois Policy Institute, a small-government group that fought the graduated tax plan. Uihlein's average income would lead to about $15 million of annual tax savings from the defeat of the ballot initiative.

Sam Zell, the real estate mogul known in Chicago for putting together a leveraged buyout of the Tribune Company that preceded its bankruptcy, gave $1.1 million. Based on his recent income, he would save $1.6 million in taxes each year. A spokesperson for Zell declined to comment.

Patrick Ryan made his billions in insurance, and Northwestern University's football stadium and basketball arena bear his family's name, thanks to the hundreds of millions he's given the school. He gave $1 million. His recent income suggests $2.1 million in annual tax savings.

Richard Colburn, whose billionaire family owns the electrical parts maker CED, gave $500,000 to the anti-tax campaign, which would help save him $5.5 million each year in taxes, according to our estimates. In an email message to ProPublica, Colburn said his reasons for opposing the graduated tax were simple: It would have "eaten substantially" into his investment earnings, some of which he passes on to a nonprofit foundation he manages. Like Griffin, he contended the state would not have used the money well.

"Though I enjoy living in the Chicago area, I could save immensely by moving to a lower-tax state, and therefore I 'invested' to limit the temptation on me to relocate," Colburn wrote. "Another element of my 'investment' stems from my desire to limit the mis-spending by the State of Illinois that occurs every time Springfield has extra money." (His full statement is here.)

Donald Wilson, founder of the trading firm DRW, gave $250,000 to the anti-tax campaign. That donation in particular looks modest when weighed against his potential tax savings: Based on Wilson's average annual income of $114 million, the proposed tax increase would have cost him $3.5 million more every year.

Some of the contributions to the anti-tax campaign came from trusts, special legal entities often used by the wealthy to hide or protect assets, as well as to avoid the estate tax. Richard Stephenson, founder of a chain of for-profit hospitals called Cancer Treatment Centers of America, contributed $300,000 through his Celebrate Life Trust. Stephenson is a longtime Republican donor and such an enthusiast of Ayn Rand's message of uncompromising self-interest that he was an executive producer on two movies based on the novel "Atlas Shrugged."

Uihlein, Ryan, Wilson and Stephenson also did not respond to requests for comment.

One $25,000 contribution came from the Philip M. Friedmann Family Charitable Trust. Friedmann made his fortune by selling the greeting card company he co-founded to a private equity firm.

Friedmann's trust, unlike Stephenson's, is a private foundation. That means Friedmann likely received a tax deduction for donating to his own organization, which then used some of the funds to fight an increase in his taxes.

The contribution to the anti-tax campaign by Friedmann's foundation appears to have violated federal tax law, three nonprofit tax law experts told ProPublica. Private foundations are prohibited from spending to try to influence legislation, a category that includes contributions to a ballot initiative committee, said Lloyd Hitoshi Mayer, a law professor at Notre Dame. Organizations that break that law are required to pay a penalty of up to 25% of the expenditure, in addition to attempting to retrieve the money.

Although this prohibition is spelled out on the IRS' online guide for private foundations, "smaller family foundations don't always know the applicable rules," said Ellen Aprill, a law professor at Loyola Marymount University.

Friedmann did not respond to requests for comment.

Illinois didn't have an income tax of any kind until 1969, when a deal between GOP Gov. Richard Ogilvie and Democratic Chicago Mayor Richard J. Daley resulted in a flat statewide tax of 2.5% on individuals and 4% on corporations. Some Democrats said the tax disproportionately punished low-income families, and pushed for higher rates on the wealthy. But Republicans and other critics argued for expiration dates or rate limits, warning that otherwise lawmakers would simply keep hiking and expanding income taxes. The following year, a compromise was encoded in the state's updated constitution. It clarified that the General Assembly had the power to impose an income tax but only "at a non-graduated rate."

As the state's fiscal problems grew in the following decades, governors and legislators repeatedly raised the flat tax rate until it was up to 5% on individuals. In 2014, multimillionaire private equity investor Bruce Rauner, a Republican backed by Griffin, was elected governor after promising to slash taxes, and the rate was lowered to 3.75%. But as Rauner fell into a bitter standoff with the Democratic-controlled General Assembly, the state went without a budget for more than two years, leaving it in an even deeper financial hole.

The General Assembly, including some Republicans, voted in 2017 to raise the income tax again, to 4.95% on individuals.

Democrat JB Pritzker, a billionaire investor whose family founded the Hyatt hotel chain, launched his campaign for governor by casting himself as a wealthy man who would fight for the middle class and for a graduated tax that was less burdensome for low-income families than the flat-rate system. Rauner vowed to stop him. Their 2018 campaigns spent more than $250 million combined, including $22.5 million that Griffin gave to Rauner, before Pritzker won that November.

With the support of a committed and rich governor, a graduated income tax suddenly seemed possible in Illinois.

"That created a bunch of new momentum," said Ralph Martire, executive director of the Center for Tax and Budget Accountability, a think tank that argued in favor of a graduated income tax. "That was enough political support to really get the grassroots groups working on it."

Outside of a special convention, both the Illinois House and Senate must sign off on a state constitutional amendment by three-fifths majorities. Voters then need to approve it, either by a clear majority of all voters casting ballots in a general election or a three-fifths majority of those voting on the measure itself.
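
As a rough illustration of that two-pronged approval rule, here is a hedged sketch; the function and variable names are ours, chosen only to spell out the two thresholds described above, and the example turnout numbers are invented.

```python
# Sketch of the voter-approval rule described above (names are illustrative):
# a proposed amendment passes with EITHER a simple majority of all ballots
# cast in the election OR three-fifths of the votes cast on the measure itself.

def amendment_approved(yes_votes: int,
                       votes_on_measure: int,
                       ballots_cast_in_election: int) -> bool:
    majority_of_all_ballots = yes_votes > ballots_cast_in_election / 2
    three_fifths_on_measure = yes_votes >= 0.6 * votes_on_measure
    return majority_of_all_ballots or three_fifths_on_measure

# Example with made-up numbers: 2.4M "yes" out of 4.5M votes on the measure,
# with 6.0M total ballots cast in the election, fails both tests.
print(amendment_approved(2_400_000, 4_500_000, 6_000_000))  # False
```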

In 2019 the Senate and then the House each met that threshold, passing a measure that would eliminate the graduated income tax ban if voters approved an amendment. Companion legislation laid out what the new tax schedule would be: Rates would either drop or remain at 4.95% for people reporting income up to $250,000; they would climb from there, to a rate of 7.99% on individuals earning above $750,000 and couples above $1 million. The top rate was within the range of those in other Midwest states with graduated systems, higher than Missouri's but lower than Iowa's.

Supporters and opponents then had more than a year to make their cases.

Illinois election laws set some limits on campaign donations and spending. But the rules are riddled with loopholes, and they impose no limits on political committees formed to advocate for or against ballot initiatives like the income tax proposal.

Opponents of the graduated income tax formed at least five different campaign committees that raised nearly $63 million altogether. The best funded, by far, was the Coalition to Stop the Proposed Tax Hike Amendment, which collected almost $60 million, including the $54 million from Griffin. The coalition received most of its remaining money from other billionaires and millionaires, according to state campaign donation records.

On the other side, Pritzker created the Vote Yes for Fairness committee, plowing $58 million of his own fortune into supporting the "fair tax" campaign. Apart from Pritzker's donations, the committee received just one $250 contribution, records show.

Griffin also launched other offensives. In October 2020, the Chicago Tribune reported that Griffin had lambasted Pritzker as "a shameless master of personal tax avoidance" in an email to Citadel's Chicago staff.

The bulk of Pritzker's wealth ($3.6 billion, according to Forbes) is in trusts, some domestic and some located offshore. Pritzker has said some were set up by his grandfather. As ProPublica reported last year, it was common for 20th century patriarchs to set up trusts that passed fortunes down through the generations free of estate taxes.

Pritzker has released his personal tax returns, but has not provided detailed information about the trusts. For 2020, Pritzker's office released returns showing $5.1 million in personal income for the governor and his wife, MK. The domestic trusts benefiting the governor also paid $16.3 million in Illinois taxes and $69.6 million in federal taxes in 2020, according to Pritzker spokesperson Natalie Edelstein.

ProPublica's IRS data does not shed light on those trusts. When ProPublica requested further detail, Edelstein said the governor is not releasing documents concerning the trusts because he "is not the only beneficiary, so he does not have authority to release all of the information." She said that the governor had not personally accepted any disbursements from the offshore trusts, instead giving them to charity. She did not address whether the trusts had been set up to avoid estate taxes, only saying they were "established generations ago."

At the height of the graduated income tax campaign, advertisements for and against the initiative seemed to be everywhere in Illinois: in mailboxes, online, all over the airwaves.

"You couldn't even watch TV it was just one ad after another," recalled David Merriman, a public administration professor at the University of Illinois Chicago.

Merriman's research had found that Illinois received less revenue from income taxes and placed a higher tax burden on low-income taxpayers than neighboring states with graduated systems, including states led by Republicans. But, perhaps predictably, the ads largely avoided policy discussions in favor of political appeals.

"At the worst possible time, Springfield politicians are pushing a constitutional amendment that would give them new powers to make it easier to raise taxes on all Illinois taxpayers," a narrator in one anti-tax ad declared. "And if there's one thing we know about Springfield politicians, it's that you can't trust them."

The fair-tax campaign accused the rich of trying to fool middle-class families and claimed, based on the state Senate bill that had already passed, that as many as 97% of taxpayers would pay the same or less under the governor's plan.

But voters weren't convinced. Federal investigations of several Chicago and state politicians were making headlines, and Merriman said the graduated tax advocates failed to persuade voters that they would benefit from the amendment. The initiative failed by a vote of 53% to 47%.

"It showed just how distrustful everyone is of the government," he said.

The big money battle has continued in the Illinois governor's race this year. This January, Pritzker deposited $90 million into his own reelection fund, the largest single political contribution in Illinois in decades, and probably ever. Under state election law, candidates can lift donation limits in a race by funding their own campaigns.

Several of the anti-tax funders contributed large sums to Republicans aiming to unseat Pritzker this fall. Once again, Griffin led the way, spending $50 million, but his handpicked candidate lost the GOP primary last week to Darren Bailey, a right-wing state senator propelled by more than $17 million Uihlein gave to his campaign and an aligned super PAC. Pritzker and the Democratic Governors Association also went head-to-head with Griffin, paying for ads attacking his candidate, Richard Irvin.

Bailey received an endorsement from Donald Trump the weekend before the election and finished with about 58% of the vote. Irvin faded to third place with 15%. In his election night victory speech, Bailey ripped Pritzker as an "out-of-touch, elitist billionaire."

"Do you feel overtaxed?" Bailey called out to his supporters. Their response: "Yeah!"

By then, Griffin had made a big announcement that meant his state tax bill would plummet.

In a letter to Citadel employees, Griffin announced that he was moving the company's headquarters to Miami and that he himself had already moved his family to the area.

Florida does not have a personal income tax. Experts told ProPublica Griffin will still pay some personal income tax in New York and Illinois since Citadel has offices there. But his bill is sure to shrink dramatically, likely saving him tens of millions a year.

In response to ProPublica's questions, Citadel did not address whether taxes motivated his move. Instead, in its statement the spokesperson cited crime concerns as the prime motivator: "Ken left Illinois for a simple reason: the state is devolving into anarchy. Senseless violence is now part of daily life in Chicago."

Griffin's letter to Citadel staff also made no mention of taxes as a reason for the move. Instead, it rhapsodized about how Miami "embodies the American Dream, embracing the possibilities of what can be achieved by a community working to build a future together."

Original post:

Ken Griffin spent $54 million to fight tax hike on the rich. Secret IRS data shows it's paying off - Salon

LETTER: Thank cadre deployment and BEE for the mess – BusinessLIVE

"When you see that in order to produce, you need to obtain permission from men who produce nothing; when you see that money is flowing to those who deal not in goods but in favours; when you see that men get richer by graft and by pull than by work, and your laws don't protect you against them but protect them against you; when you see corruption being rewarded and honesty becoming a self-sacrifice; you may know your society is doomed." – Ayn Rand, "Atlas Shrugged", 1957

Stage 6 load-shedding has been forced on us by workers who are entitled, overpaid and underworked, but who have seen power abused everywhere else and have taken their monopoly position to extort an unaffordable increase. Never in the pre-1994 history of Eskom (formed in 1928, 60 years ago) did this happen. What changed? The only significant change was government policy: cadre deployment, affirmative action and BEE.

Eskom has 10,000 too many employees; Transnet, the company that barely runs trains, employs 45,000 people. Most of these employees produce very little, a lot of them don't go to work, and when they do, they are disruptive and poorly managed by people who, like our president, cannot make a hard decision. They walk all over everything, and when things don't go their way, they throw their power around, because they can.

The leadership, institutional knowledge, work ethic and skills that carried these organisations have gone: retired or, where made unwelcome, simply left. The ANC government has cut itself off from the skills that could help it implement its stupid policies, and now we all suffer.

I don't see private companies acting like this. Why? Because they have to compete and produce to survive.

Rob Tiffin, Cape Town


See the original post:

LETTER: Thank cadre deployment and BEE for the mess - BusinessLIVE

Why The Racist Left Smears Clarence Thomas As An ‘Angry Black Man’ – The Federalist

Supreme Court Justice Clarence Thomas is "a person of grievance" harboring "resentment, [and] anger," reported no less an authority than Hillary Clinton during an appearance last week on CBS. In ignoring Thomas's ideas to smear his temperament, Clinton pulled from the same playbook leftists have been using against Thomas since even before his 1991 confirmation hearings.

The New York Times once called Thomas the Supreme Court's "youngest and angriest." Times columnist Frank Rich accused him of "rage" and "unreconstructed racial bitterness." His colleague Maureen Dowd has over the years variously described the justice as "barking mad," "dishonest," and "angry, bitter, self-pitying." In the article "Why Is Justice Thomas So Angry?", CNN legal correspondent Jeffrey Toobin concludes, "His fulminations are hurtful to the court's mission and reputation."

Forming something of a bitter consensus, his critics exhibit behavior every bit as intriguing as that which they claim to condemn.

The best place for insight into Thomas's anger is with the man himself. In his autobiography, "My Grandfather's Son," Thomas says his bouts with anger in early life were at their most intense when, during his college years, he grew drunk with revolutionary rhetoric.

Black-nationalist ideas didn't suit him long, however. As his life evolved, so did his thinking. The hostility he once directed at a racist American society for mistreating blacks found new targets.

His personal anger can be interpreted in the context of its inverse relationship with happiness. Alongside his brother, Thomas was raised by their grandfather, Myers Anderson, whom, taking after their mom, they called Daddy. Thomas portrays Anderson, a self-employed deliveryman and farmer with an inexhaustible work ethic, as akin to a drill instructor.

As the price of their shelter, the two boys labored so intensively in maintaining the farm that Clarence once reminded his grandfather that slavery had ended. "Not in my house," Anderson answered. Without hard work, self-reliance was impossible, Anderson taught the boys, and only through self-reliance can men earn their freedom. That was to be his gift to them.

"He knew that to be truly free and participate fully in American life," Thomas writes, "poor blacks had to have the tools to do for themselves." Very few would argue that, absent this individual liberty, personal happiness is even possible.

Thomas credits self-reliance for his success as a student. Raised Roman Catholic, his elementary school years were spent at St. John Vianney Minor Seminary, where he excelled athletically and academically. With plans of becoming a priest, he left Georgia to attend high school at Immaculate Conception Seminary in Missouri.

After a change of heart, he returned to Georgia, where one of his grammar-school nuns persuaded him to apply to Holy Cross. Before he headed back north to attend college, a friend introduced him to "The Communist Manifesto." This introduction to Marx soon blossomed into something else.

As a child Thomas had been taught that a man's life is his own responsibility, but according to Marxist theories of racial oppression, progress comes through revolution. "To black nationalist Marxists, white racism explained every problem," Thomas says. "It was the trump card that won every argument." He co-founded the Black Student Union, a leftist group whose advocacy included anti-Vietnam protesting.

At one BSU rally, he says, after the crowd worked itself into a frenzy with leftist sloganeering, "We drank our way to Harvard Square, where our disorderly parade deteriorated into a full-scale riot." It went on through the night. After returning to campus early the next morning, Thomas was horrified: "I had let myself be swept up by an angry mob for no good reason other than that I, too, was angry."

In the whirlwind of irrational violence, the BSU students, he realized, had perpetuated an unwelcome stereotype, that of the angry black man. This anger was sanctioned. Thomas describes black students flagrantly violating the student code of conduct and making tall demands, only for the administration to cave every time.

Black students also bonded through black-nationalist politics. Mixing radical politics with the entitlement mentality the administration encouraged quickly proved toxic. Already unprepared for living among whites, Thomas says, many of these black students gave up class in favor of drugs and cultlike Eastern religions. Others dropped or failed out.

In his senior year, Thomas read the uber-individualist books of Ayn Rand and began questioning the groupthink of his black peers. But to embark on free thinking meant making enemies: the government, the racists, the activists, the students, even Daddy.

Yet free thinking yielded an immediate payoff for his temperament, for he was also being liberated of ideologically imposed passions that universities countenanced: "I already knew that the rage with which we lived made it hard for us to think straight. Now I understood for the first time that we were expected to be full of rage. It was our role, but I didn't want to play it anymore."

Graduating cum laude in English, Thomas was accepted to Harvard Law, but opted instead for Yale, which he felt was less conservative. Yale was further down the racial-preference road than Holy Cross, which cast suspicion over the entire black student body, as author John Greenya quotes Thomas: "You had to prove yourself every day because the presumption was that you were dumb and didn't deserve to be there on merit."

To put his abilities beyond doubt, Thomas eschewed classes on civil rights and constitutional law in favor of corporate, tax, and antitrust law, seeking out professors with a reputation for hostility to blacks and striving all the same. Aspiring to return to his Atlanta-area hometown, where an elite law degree could be of service to needy blacks, he saw his plans frustrated when every application was rejected, and his anger was born anew.

"Prospective employers dismissed our grades and diplomas assuming we got both primarily because of preferential treatment," Thomas told the Macon Telegraph. Believing his Ivy League education was overvalued, he affixed a 15-cent stamp to his degree, the value of a Yale education when it bore the taint of racial preference.

In The New Yorker, Toobin wonders whether Thomas overplays this notion, asserting that perhaps these rejections stemmed from simple racism, the very thing affirmative action was designed to combat. Perhaps. But would an already racist employer be any less skeptical of a black applicant because of racial preferences in admissions?

At Yale, Thomas had worked for the social-services group New Haven Legal Assistance, where he encountered the beneficiaries of government welfare programs. Many of those seeking eligibility feigned poverty and victimization and called for assistance.

Thomas nonetheless believed that as American society condemned blacks to an outlook of scant hope, redressing social imbalances was legitimate government work. Around this time he happened to befriend future U.N. Ambassador John Bolton, who introduced Thomas to a new set of ideas.

In a debate over whether mandating helmets for motorcyclists was meritorious policy (Thomas felt accident-related health care costs demanded such a rule), Bolton asked him: "Clarence, as a member of a group that has been treated shabbily by the majority in this country, why would you want to give the government more power over your personal life?"

"That stopped me cold," Thomas writes:

I thought of what Daddy had said when I asked him why he'd never gone on public assistance. "Because it takes away your manhood," he said. "You do that and they can ask you questions about your life that are none of their business. They can come into your house when they want to, and they can tell you who else can come and go in your house." Daddy and John, I saw, were making the same point: real freedom meant independence from government intrusion, which in turn meant that you had to take responsibility for your own decisions. When the government assumes that responsibility, it takes away your freedom, and wasn't freedom the very thing for which blacks in America were fighting?

Thomas's worldview made a prodigal return to the real world. In many eyes, though, this made him a traitor, for it positioned him as an opponent of programs advertised as pro-black.

Thomas was soon recruited by Missouri's attorney general, John Danforth, a Yale alum. Danforth's Republican affiliation posed a near-crisis of conscience for a man who'd recently voted for George McGovern and felt there was no such thing as a self-respecting black Republican.

After being assured of the same treatment as every other staffer, Thomas accepted a job offer, to the derision of his Yale classmates. While the position was intellectually satisfying, its meager salary soon sent him into the private sector, where he encountered the opposite dilemma: satisfying pay but meager opportunities for intellectual challenge.

Thomas then stumbled upon a review of Thomas Sowell's book "Race and Economics," which ended with this passage:

Perhaps the greatest dilemma in the attempts to raise ethnic minority income is that those methods which have historically proved successful (self-reliance, work skills, education, business experience) are all slow developing, while those methods which are more direct and immediate (job quotas, charity, subsidies, preferential treatment) tend to undermine self-reliance and pride of achievement in the long run. If the history of American ethnic groups shows anything, it is how large a role has been played by attitudes, and particularly attitudes of self-reliance.

Finally, Thomas knew he wasn't alone: "I felt like a thirsty man gulping down a glass of cool water." But with newfound confidence came another challenge: Danforth had recently been elected Missouri's junior senator and wanted Thomas to join his staff. A job that could be used to benefit other people was appealing, but Thomas knew his heretical thinking would make him a target in scandal-hungry D.C.

Not long into his tenure on the Hill, the Reagan administration asked if Thomas would serve as the assistant secretary for civil rights in the Department of Education. He almost didn't. Washington Post reporter Juan Williams had recently published an article quoting Thomas as asserting that welfare ruins blacks, mentioning his sister's experience. The torrent of criticism that followed made him think twice about accepting a prominent executive branch position.

"Having felt the lash of public criticism, I questioned whether I had the strength or the courage to stand in the eye of the howling storm that surrounded civil-rights policies," he writes.

He was likewise beset when later nominated to lead the Equal Employment Opportunity Commission. As chairman, Thomas oversaw a massive increase in anti-discrimination litigation, and ideologically driven attacks against his character intensified. These began exacting a toll, especially as a longstanding fondness for alcohol turned into a form of escapism. He even nursed thoughts of suicide, Thomas writes:

I [asked] myself whether I might do better to back away from my political beliefs. Life, I knew, would be so much easier if I went along with whatever was popular. What were my principles really worth to me? As I gazed out my office window at the Potomac River, the answer came instinctively: They're worth my life. I spoke the words out loud, knowing at once that they were true.

When Thomas was later offered the nomination for Supreme Court justice, he knew it meant subjecting himself to abuse for not thinking as white senators thought black men should. He says he accepted only out of loyalty to then-President George H.W. Bush: "By then I'd shed the last of my illusions about white liberals: I knew that their broad-mindedness stopped well short of tolerating blacks who disagreed with them."

The campaign against him featured charges of tax fraud, Confederate sympathies, anti-Semitism, patronizing a cult-like church, and, of course, sexual harassment. These wild allegations obscured the motivation behind the campaign. According to Thomas: "I refused to bow to the superior wisdom of the white liberals who thought they knew what was better for blacks. Since I didn't know my place, I had to be put down."

For noting the correlation between welfare services and an entitlement mentality, Thomas has endured beyond-the-pale personal attacks. After defending himself against a Playboy article ("Reagan and the Revival of Racism") with a letter to the editor, the article's white author responded: "As a Southerner, Mr. Thomas is surely familiar with those chicken-eating preachers who gladly parroted the segregationists' line in exchange for a few crumbs from the white man's table. He's one of the few left in captivity."

Not even civil-rights leaders criticized this racist broadside. "What I found inexplicable," Thomas writes, "was that so many of the people who went out of their way to tell me how strongly they disapproved of my views seemed to think that the mere act of pointing out the human damage caused by welfare policies was wrong in and of itself. Would they have felt the same way if I'd said that I was opposed to drunk driving because my sister had been hit by a drunk driver?"

In Grutter v. Bollinger, a 2003 Supreme Court case that upheld the constitutionality of the University of Michigan Law School's admissions policies that favored some races over others, Thomas issued a dissent imbued with personal experience:

The majority of blacks are admitted to the Law School because of discrimination, and because of this policy all are tarred as undeserving. This problem of stigma does not depend on determinacy as to whether those stigmatized are actually the beneficiaries of racial discrimination. When blacks take positions in the highest places of government, industry, or academia, it is an open question today whether their skin color played a part in their advancement. The question itself is the stigma because either racial discrimination did play a role, in which case the person may be deemed otherwise unqualified, or it did not, in which case asking the question itself unfairly marks those blacks who would succeed without discrimination.

After this decision, New York Times columnist Maureen Dowd said Thomas's failure to appreciate racial preferences was hypocritical: "It's impossible not to be disgusted at someone who could benefit so much from affirmative action and then pull up the ladder after himself." Toobin abused Thomas as a race traitor for his intense resentment of efforts to help African-Americans.

In 2002, five black law professors at the University of North Carolina boycotted a Thomas appearance, claiming: "[A]s a justice, he not only engages in acts that harm other African Americans like himself, but also gives aid, comfort, and racial legitimacy to acts and doctrines of others that harm African Americans unlike himself, that is, those who have not yet reaped the benefits of civil rights laws, including affirmative action, and who have not yet received the benefits of the white-conservative sponsorships that now empower him."

How could good-faith efforts at furthering blacks' progress be met with such derision? Much of it stems from his critics' perception of what motivates his opposition to their social-engineering experiments. Toobin, Dowd, and others ascribe this heterodoxy to a perceived servility to powerful conservative elites.

Dowd, imagining herself as Thomas, wrote of his opinion in Bush v. Gore: "I used to have grave reservations about working at white institutions, subject to the whims of white superiors. But when Poppy's whim was to crown his son (one of those privileged Yale legacy types I always resented), I had to repay The Man for putting me on the court even though I was neither qualified nor honest ... But having the power to carjack the presidency and control the fate of the country did give me that old X-rated tingle."

Others interpret Thomas as an ideological devotee of the take-it-as-it-comes judicial philosophy sometimes called originalism, a notion he'd reject.

"A philosophy that is imposed from without instead of arising organically from day-to-day engagement with the law isn't worth having," he writes. "Such a philosophy runs the risk of becoming an ideology, and I'd spent much of my adult life shying away from abstract ideological theories that served only to obscure the reality of life as it's lived."

Still, Thomas's Supreme Court career is often blithely dismissed as the work of his ideological puppeteer, Scalia, supposedly because they often vote alike. In fact, according to ABC legal correspondent Jan Crawford Greenburg's book "Supreme Conflict: The Inside Story of the Struggle for Control of the United States Supreme Court," it is Scalia who often changed his opinions to more closely reflect Thomas's.

In 2005, University of Iowa law professor Angela Onwuachi-Willig reported that the court's leftist justices were more likely to vote alike than Thomas and Scalia did, with Justice Ginsburg agreeing in full with Justice Souter 85% of the time, Justice Souter agreeing with Justice Stevens 77% of the time, and Justice O'Connor agreeing with Chief Justice Rehnquist 79% of the time, while Justice Thomas and Justice Scalia agreed in full only 73% of the time.

This notion that blacks choose not to think for themselves is not entirely foreign to these critics; The Times's Dowd and Rich have joked that blacks who spoke at the 2000 Republican convention participated in a minstrel show.

Thomass own explanation for his ideas is less conspiratorial. He thinks too many of these policies are premised on the idea that blacks are an inferior race. In 1995, the Supreme Court heard Missouri v. Jenkins, where the Kansas City school district was attempting to, in part, correct racial imbalances by opening schools catering to whites in a neighborhood they had long ago abandoned. In a concurring opinion, Thomas writes:

It never ceases to amaze me that the courts are willing to assume that anything that is predominately black must be inferior. Instead of focusing on remedying the harm done to those black schoolchildren injured by segregation, the District Court here sought to convert the Kansas City, Missouri, School District (KCMSD) into a magnet district that would reverse the white flight caused by desegregation. Racial isolation itself is not a harm; only state-enforced segregation is. After all, if separation is a harm, and if integration therefore is the only way that blacks can receive a proper education, then there must be something inferior about blacks. Under this theory, segregation injures blacks because blacks, when left on their own, cannot achieve.

It's unclear whether affirmative action supporters' professed ideal of racial equality better represents their actual thinking than preferences' implications of black inferiority. President Biden certainly didn't help the former case when he speculated aloud to The Washington Post about why Iowa's public schools outperform D.C.'s: "There's less than 1 percent of the population of Iowa that is African American. There is probably less than 4 or 5 percent that are minorities. What is in Washington? So look, it goes back to what you start off with, what you're dealing with." More recently Biden said, "Poor kids are just as bright and just as talented as white kids."

The preoccupation with means over ends exacts a toll on blacks, says Thomas. By pursuing busing programs meant to intermix students, usually at the expense of quality education, blacks are essentially being used as guinea pigs in the experiments of white social scientists. This not only is demoralizing, but suggests that without whites, blacks are hopeless.

Once more, blacks become reliant on whites, and a theory that tacitly assumes black inferiority helps make it real. A 2019 Pew Research Center poll finds that, with the Great Society 57 years deep, black Americans are historically pessimistic, with more than 80 percent viewing their race as an impediment. Curiously, the more educated the respondent, the more likely he was to see his race as an obstacle, and half say America will never achieve racial equity.

Whites, too, see less progress, per Pew, but are twice as likely to be optimistic. This suggests efforts at racial redress atone for white guilt twice as well as they boost black progress. For this reason, Thomas sometimes says, racial preferences are intended more for their sponsors than their recipients.

Perhaps conventional repulsion for Republicans explains why more blacks haven't had similar reappraisals of the government's efforts to improve their lot. While skepticism of government social work may well be an aspect of conservative political philosophy, for Thomas, it's merely an affirmation of his life's experiences. He is conservative, in other words, because he is black.

"And it no longer matters what anybody says," he declared in a 1998 Memphis speech. Speaking before the National Bar Association, a black lawyers' group, Thomas did not apologize for his heretical beliefs. Instead, he said this:

I have come here today, not in anger or to anger, though my mere presence has been sufficient, obviously, to anger some. Nor have I come to defend my views, but rather to assert my right to think for myself, to refuse to have my ideas assigned to me as though I was an intellectual slave because I'm black. I come to state that I am a man, free to think for myself and do as I please. I have come to assert that I am a judge and I will not be consigned the unquestioned opinions of others.

This attitude is clearly unhelpful to those promoting preferences. Thomas, after all, is right: racial preferences tar reputations. He has achieved the utmost prominence, yet no matter what he achieves, it seems, his critics still argue he owes everything to preferences.

Dowd began an article, "He knew he could not make a powerful legal argument against racial preferences, given the fact that he got into Yale Law School and got picked for the Supreme Court thanks to his race." Of Thomas's nomination to the EEOC, Toobin says, "Though Thomas doesn't say so directly, it's clear he was given the job because he was black."

Stripping pride from a man's achievements is certainly an indecent thing. One wonders how Toobin would feel if it were constantly alleged that the only reason he got his job at CNN was because he's a dyed-blue-in-the-wool Democrat. And how would Dowd respond to accusations that the only reason she owns premier real estate on The New York Times editorial page is because she's a woman?

This claim effectively imparts ownership of racial-preference recipients' achievements to those administering these programs. If that's the choice, diminished personal sovereignty or liberty, Thomas would rather be free, even if it means he fails. This is why Thomas began his Grutter v. Bollinger opinion with a quote from his hero, Frederick Douglass:

Your doing with us has already played the mischief with us. Do nothing with us! If the apples will not remain on the tree of their own strength, if they are worm-eaten at the core, if they are early ripe and disposed to fall, let them fall! And if the negro cannot stand on his own legs, let him fall also. All I ask is, give him a chance to stand on his own legs! Let him alone! [Y]our interference is doing him positive injury.

Thomas's life has been a struggle to stand alone, and he knows there are others. He's long nursed an urge to return to Georgia and help his old neighborhood. At a book-signing party, he was asked whether he'd prefer any job over his current assignment. He could think of only one: running a small or medium-sized business somewhere in the South where he could be "a part of my community."

Thomas has often passed up these opportunities on the belief that positions of greater prominence held greater capacity for reform. Yet each step has met ever greater condemnation, sometimes infected with accusations of racial traitorship, always leading to the same regrettable conclusion: black minds aren't ready to be free.

But having untangled himself from the pull-strings of racial groupthink, leftist social dogma, political ideology, and popular opinion, Thomas was recently able to proclaim himself the freest man on the court. It's this, his intellectual emancipation, that most infuriates his leftist critics. In proving their entire worldview fraudulent, Thomas continues to attract racist abuse, because that's all they have left to hurl.

Witnessing the way white progressives resort to racism the moment a black man breaks free from his intellectual shackles, surely younger black thinkers will realize there's no value in accepting a set of beliefs simply because they were born into a certain race. That would surely make Justice Thomas happy.

Tom Elliott is the founder and editor of Grabien. Follow him on Twitter @tomselliott.

Go here to see the original:

Why The Racist Left Smears Clarence Thomas As An 'Angry Black Man' - The Federalist

Impact of the covid-19 pandemic on medical school applicants – The BMJ

The covid-19 pandemic has not discouraged applications to medical school. Viktorija Kaminskaite and Anna Harvey Bluemel investigate how much has changed in the application process since the start of the pandemic, and how students are adapting.

Since 2010 the number of medical school places has risen by 31% (British Medical Association), with a corresponding increase in applications for those places. The Universities and Colleges Admissions Service (UCAS) reported that medical applications increased by around 20% in 2020.1 Continuing disruptions to education are likely to have a lingering effect on applications in years to come. UCAS also reported a 47% increase in reapplications to medicine in 2021, suggesting that more students than in previous years were unable to secure a place during their first round of applications.2 Prospective candidates have been forced to adapt to new application processes and navigate increased uncertainty. Alongside the problems facing all potential medical candidates, the covid-19 pandemic has threatened to widen already existing inequalities in admissions, particularly the gap in recruitment of students from lower socioeconomic backgrounds.3

Medical work experience is often considered vital for prospective applicants to gain an understanding of a career in medicine, and to provide experiences that can form the basis of applications. When lockdowns were announced in March 2020, non-essential staff were pulled from clinical areas, cancelling planned work experience. As in many other areas, medical students

Read more from the original source:

Impact of the covid-19 pandemic on medical school applicants - The BMJ