The Artificial Intelligence Revolution: Part 2 – Wait But Why

Note: This is Part 2 of a two-part series on AI. Part 1 is here.

PDF: We made a fancy PDF of this post for printing and offline viewing. Buy it here. (Or see a preview.)

___________

"We have what may be an extremely difficult problem with an unknown time to solve it, on which quite possibly the entire future of humanity depends." – Nick Bostrom

Welcome to Part 2 of the "Wait, how is this possibly what I'm reading, I don't get why everyone isn't talking about this" series.

Part 1 started innocently enough, as we discussed Artificial Narrow Intelligence, or ANI (AI that specializes in one narrow task like coming up with driving routes or playing chess), and how it's all around us in the world today. We then examined why it was such a huge challenge to get from ANI to Artificial General Intelligence, or AGI (AI that's at least as intellectually capable as a human, across the board), and we discussed why the exponential rate of technological advancement we've seen in the past suggests that AGI might not be as far away as it seems. Part 1 ended with me assaulting you with the fact that once our machines reach human-level intelligence, they might immediately do this:

This left us staring at the screen, confronting the intense concept of potentially-in-our-lifetime Artificial Superintelligence, or ASI (AI that's way smarter than any human, across the board), and trying to figure out which emotion we were supposed to have on as we thought about that.

Before we dive into things, let's remind ourselves what it would mean for a machine to be superintelligent.

A key distinction is the difference between speed superintelligence and quality superintelligence. Often, someone's first thought when they imagine a super-smart computer is one that's as intelligent as a human but can think much, much faster: they might picture a machine that thinks like a human, except a million times quicker, which means it could figure out in five minutes what would take a human a decade.
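As a quick sanity check on that claim (my own back-of-the-envelope arithmetic, not anything from the original post), the "decade in five minutes" figure follows directly from the million-fold speedup:

```python
# Rough check of "a decade in five minutes" for a machine that thinks a
# million times faster than a human (illustrative arithmetic only).
MINUTES_PER_YEAR = 365.25 * 24 * 60        # ~525,960 minutes in a year
human_minutes = 10 * MINUTES_PER_YEAR      # ten years of human thinking time
speedup = 1_000_000                        # "a million times quicker"

machine_minutes = human_minutes / speedup
print(f"{machine_minutes:.1f} minutes")    # ~5.3 minutes
```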

That sounds impressive, and ASI would think much faster than any human could, but the true separator would be its advantage in intelligence quality, which is something completely different. What makes humans so much more intellectually capable than chimps isn't a difference in thinking speed; it's that human brains contain a number of sophisticated cognitive modules that enable things like complex linguistic representations, long-term planning, and abstract reasoning, which chimps' brains do not. Speeding up a chimp's brain by thousands of times wouldn't bring him to our level. Even with a decade's time, he wouldn't be able to figure out how to use a set of custom tools to assemble an intricate model, something a human could knock out in a few hours. There are worlds of human cognitive function a chimp will simply never be capable of, no matter how much time he spends trying.

But it's not just that a chimp can't do what we do; it's that his brain is unable to grasp that those worlds even exist. A chimp can become familiar with what a human is and what a skyscraper is, but he'll never be able to understand that the skyscraper was built by humans. In his world, anything that huge is part of nature, period, and not only is it beyond him to build a skyscraper, it's beyond him to realize that anyone can build a skyscraper. That's the result of a small difference in intelligence quality.

And in the scheme of the intelligence range we're talking about today, or even the much smaller range among biological creatures, the chimp-to-human quality intelligence gap is tiny. In an earlier post, I depicted the range of biological cognitive capacity using a staircase:

To absorb how big a deal a superintelligent machine would be, imagine one on the dark green step two steps above humans on that staircase. This machine would be only slightly superintelligent, but its increased cognitive ability over us would be as vast as the chimp-human gap we just described. And like the chimp's incapacity to ever absorb that skyscrapers can be built, we will never be able to even comprehend the things a machine on the dark green step can do, even if the machine tried to explain it to us, let alone do it ourselves. And that's only two steps above us. A machine on the second-to-highest step on that staircase would be to us as we are to ants: it could try for years to teach us the simplest inkling of what it knows and the endeavor would be hopeless.

But the kind of superintelligence we're talking about today is something far beyond anything on this staircase. In an intelligence explosion, where the smarter a machine gets, the quicker it's able to increase its own intelligence, until it begins to soar upwards, a machine might take years to rise from the chimp step to the one above it, but perhaps only hours to jump up a step once it's on the dark green step two above us, and by the time it's ten steps above us, it might be jumping up in four-step leaps every second that goes by. Which is why we need to realize that it's distinctly possible that very shortly after the big news story about the first machine reaching human-level AGI, we might be facing the reality of coexisting on the Earth with something that's here on the staircase (or maybe a million times higher):

And since we just established that it's a hopeless activity to try to understand the power of a machine only two steps above us, let's very concretely state once and for all that there is no way to know what ASI will do or what the consequences will be for us. Anyone who pretends otherwise doesn't understand what superintelligence means.

Evolution has advanced the biological brain slowly and gradually over hundreds of millions of years, and in that sense, if humans birth an ASI machine, we'll be dramatically stomping on evolution. Or maybe this is part of evolution. Maybe the way evolution works is that intelligence creeps up more and more until it hits the level where it's capable of creating machine superintelligence, and that level is like a tripwire that triggers a worldwide game-changing explosion that determines a new future for all living things:

And for reasons we'll discuss later, a huge part of the scientific community believes that it's not a matter of whether we'll hit that tripwire, but when. Kind of a crazy piece of information.

So where does that leave us?

Well, no one in the world, especially not I, can tell you what will happen when we hit the tripwire. But Oxford philosopher and leading AI thinker Nick Bostrom believes we can boil down all potential outcomes into two broad categories.

First, looking at history, we can see that life works like this: species pop up, exist for a while, and after some time, inevitably, they fall off the existence balance beam and land on extinction.

"All species eventually go extinct" has been almost as reliable a rule through history as "All humans eventually die" has been. So far, 99.9% of species have fallen off the balance beam, and it seems pretty clear that if a species keeps wobbling along down the beam, it's only a matter of time before some other species, some gust of nature's wind, or a sudden beam-shaking asteroid knocks it off. Bostrom calls extinction an attractor state, a place species are all teetering on falling into and from which no species ever returns.

And while most scientists I've come across acknowledge that ASI would have the ability to send humans to extinction, many also believe that, used beneficially, ASI's abilities could be used to bring individual humans, and the species as a whole, to a second attractor state: species immortality. Bostrom believes species immortality is just as much of an attractor state as species extinction, i.e. if we manage to get there, we'll be impervious to extinction forever; we'll have conquered mortality and conquered chance. So even though all species so far have fallen off the balance beam and landed on extinction, Bostrom believes there are two sides to the beam, and it's just that nothing on Earth has been intelligent enough yet to figure out how to fall off on the other side.

If Bostrom and others are right, and from everything I've read, it seems like they really might be, we have two pretty shocking facts to absorb:

1) The advent of ASI will, for the first time, open up the possibility for a species to land on the immortality side of the balance beam.

2) The advent of ASI will make such an unimaginably dramatic impact that it's likely to knock the human race off the beam, in one direction or the other.

It may very well be that when evolution hits the tripwire, it permanently ends humans' relationship with the beam and creates a new world, with or without humans.

Kind of seems like the only question any human should currently be asking is: When are we going to hit the tripwire and which side of the beam will we land on when that happens?

No one in the world knows the answer to either part of that question, but a lot of the very smartest people have put decades of thought into it. We'll spend the rest of this post exploring what they've come up with.

___________

Lets start with the first part of the question: When are we going to hit the tripwire?

i.e. How long until the first machine reaches superintelligence?

Not shockingly, opinions vary wildly and this is a heated debate among scientists and thinkers. Many, like professor Vernor Vinge, scientist Ben Goertzel, Sun Microsystems co-founder Bill Joy, or, most famously, inventor and futurist Ray Kurzweil, agree with machine learning expert Jeremy Howard when he puts up this graph during a TED Talk:

Those people subscribe to the belief that this is happening soon, that exponential growth is at work and machine learning, though only slowly creeping up on us now, will blow right past us within the next few decades.

Others, like Microsoft co-founder Paul Allen, research psychologist Gary Marcus, NYU computer scientist Ernest Davis, and tech entrepreneur Mitch Kapor, believe that thinkers like Kurzweil are vastly underestimating the magnitude of the challenge, and that we're not actually that close to the tripwire.

The Kurzweil camp would counter that the only underestimating that's happening is the underappreciation of exponential growth, and they'd compare the doubters to those who looked at the slow-growing seedling of the internet in 1985 and argued that there was no way it would amount to anything impactful in the near future.

The doubters might argue back that the progress needed to make advancements in intelligence also grows exponentially harder with each subsequent step, which will cancel out the typical exponential nature of technological progress. And so on.

A third camp, which includes Nick Bostrom, believes neither group has any ground to feel certain about the timeline and acknowledges both A) that this could absolutely happen in the near future and B) that theres no guarantee about that; it could also take a much longer time.

Still others, like philosopher Hubert Dreyfus, believe all three of these groups are naive for believing that there even is a tripwire, arguing that it's more likely that ASI won't actually ever be achieved.

So what do you get when you put all of these opinions together?

In 2013, Vincent C. Müller and Nick Bostrom conducted a survey that asked hundreds of AI experts at a series of conferences the following question: "For the purposes of this question, assume that human scientific activity continues without major negative disruption. By what year would you see a (10% / 50% / 90%) probability for such HLMI (high-level machine intelligence) to exist?" It asked them to name an optimistic year (one in which they believe there's a 10% chance we'll have AGI), a realistic guess (a year they believe there's a 50% chance of AGI, i.e. after that year they think it's more likely than not that we'll have AGI), and a safe guess (the earliest year by which they can say with 90% certainty we'll have AGI). Gathered together as one data set, here were the results:

Median optimistic year (10% likelihood): 2022
Median realistic year (50% likelihood): 2040
Median pessimistic year (90% likelihood): 2075

So the median participant thinks it's more likely than not that we'll have AGI 25 years from now. The 90% median answer of 2075 means that if you're a teenager right now, the median respondent, along with over half of the group of AI experts, is almost certain AGI will happen within your lifetime.

A separate study, conducted recently by author James Barrat at Ben Goertzel's annual AGI Conference, did away with percentages and simply asked when participants thought AGI would be achieved: by 2030, by 2050, by 2100, after 2100, or never. The results:

By 2030: 42% of respondents
By 2050: 25%
By 2100: 20%
After 2100: 10%
Never: 2%

Pretty similar to Müller and Bostrom's outcomes. In Barrat's survey, over two thirds of participants believe AGI will be here by 2050, and a little less than half predict AGI within the next 15 years. Also striking is that only 2% of those surveyed don't think AGI is part of our future.
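Here is the cumulative arithmetic behind those two claims, using the rounded percentages listed above (a sketch of my own, not part of Barrat's published analysis):

```python
# Barrat's AGI survey buckets (rounded percentages from the list above).
by_2030, by_2050, by_2100, after_2100, never = 42, 25, 20, 10, 2

print("AGI by 2030:", by_2030, "%")            # 42% -> "a little less than half"
print("AGI by 2050:", by_2030 + by_2050, "%")  # 67% -> "over two thirds"
print("AGI ever:", 100 - never, "%")           # 98% -> only 2% say never
```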

But AGI isn't the tripwire, ASI is. So when do the experts think we'll reach ASI?

Müller and Bostrom also asked the experts how likely they think it is that we'll reach ASI A) within two years of reaching AGI (i.e. an almost-immediate intelligence explosion), and B) within 30 years. The results:

The median answer put a rapid (2-year) AGI-to-ASI transition at only a 10% likelihood, but a longer transition of 30 years or less at a 75% likelihood.

We don't know from this data the length of transition that the median participant would have put at a 50% likelihood, but for ballpark purposes, based on the two answers above, let's estimate that they'd have said 20 years. So the median opinion, the one right in the center of the world of AI experts, believes the most realistic guess for when we'll hit the ASI tripwire is [the 2040 prediction for AGI + our estimated prediction of a 20-year transition from AGI to ASI] = 2060.

Of course, all of the above statistics are speculative, and they're only representative of the center opinion of the AI expert community, but they tell us that a large portion of the people who know the most about this topic would agree that 2060 is a very reasonable estimate for the arrival of potentially world-altering ASI. Only 45 years from now.
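For what it's worth, the ballpark combination in the last two paragraphs is just this (the 20-year transition figure is the article's rough interpolation, not survey data, and the post dates to 2015):

```python
# Combine the survey's median AGI year with the estimated AGI-to-ASI transition.
median_agi_year = 2040    # Müller and Bostrom survey, 50% likelihood
agi_to_asi_years = 20     # rough interpolation between the 2-year and 30-year answers
year_written = 2015       # when this post was published

asi_estimate = median_agi_year + agi_to_asi_years
print(asi_estimate)                  # 2060
print(asi_estimate - year_written)   # 45 years from the time of writing
```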

Okay now how about the second part of the question above: When we hit the tripwire, which side of the beam will we fall to?

Superintelligence will yield tremendous power; the critical question for us is:

Who or what will be in control of that power, and what will their motivation be?

The answer to this will determine whether ASI is an unbelievably great development, an unfathomably terrible development, or something in between.

Of course, the expert community is again all over the board and in a heated debate about the answer to this question. Müller and Bostrom's survey asked participants to assign a probability to the possible impacts AGI would have on humanity and found that the mean response was that there was a 52% chance that the outcome will be either good or extremely good and a 31% chance the outcome will be either bad or extremely bad. For a relatively neutral outcome, the mean probability was only 17%. In other words, the people who know the most about this are pretty sure this will be a huge deal. It's also worth noting that those numbers refer to the advent of AGI; if the question were about ASI, I imagine that the neutral percentage would be even lower.

Before we dive much further into this good vs. bad outcome part of the question, let's combine both the "when will it happen?" and the "will it be good or bad?" parts of this question into a chart that encompasses the views of most of the relevant experts:

We'll talk more about the Main Camp in a minute, but first, what's your deal? Actually, I know what your deal is, because it was my deal too before I started researching this topic. Some reasons most people aren't really thinking about this topic:

One of the goals of these two posts is to get you out of the I Like to Think About Other Things Camp and into one of the expert camps, even if you're just standing on the intersection of the two dotted lines in the square above, totally uncertain.

During my research, I came across dozens of varying opinions on this topic, but I quickly noticed that most people's opinions fell somewhere in what I labeled the Main Camp, and in particular, over three quarters of the experts fell into two Subcamps inside the Main Camp:

We're gonna take a thorough dive into both of these camps. Let's start with the fun one…

As I learned about the world of AI, I found a surprisingly large number of people standing here:

The people on Confident Corner are buzzing with excitement. They have their sights set on the fun side of the balance beam and they're convinced that's where all of us are headed. For them, the future is everything they ever could have hoped for, just in time.

The thing that separates these people from the other thinkers we'll discuss later isn't their lust for the happy side of the beam; it's their confidence that that's the side we're going to land on.

Where this confidence comes from is up for debate. Critics believe it comes from an excitement so blinding that they simply ignore or deny potential negative outcomes. But the believers say it's naive to conjure up doomsday scenarios when, on balance, technology has helped us a lot more than it has hurt us, and will likely continue to do so.

We'll cover both sides, and you can form your own opinion about this as you read, but for this section, put your skepticism away and let's take a good hard look at what's over there on the fun side of the balance beam, and try to absorb the fact that the things you're reading might really happen. If you had shown a hunter-gatherer our world of indoor comfort, technology, and endless abundance, it would have seemed like fictional magic to him. We have to be humble enough to acknowledge that it's possible that an equally inconceivable transformation could be in our future.

Nick Bostrom describes three ways a superintelligent AI system could function: as an oracle, which answers nearly any question posed to it; as a genie, which executes any high-level command it's given and then waits for the next one; and as a sovereign, which is assigned a broad, open-ended pursuit and is allowed to operate in the world freely, making its own decisions about how best to proceed.

These questions and tasks, which seem complicated to us, would sound to a superintelligent system like someone asking you to improve upon the "my pencil fell off the table" situation, which you'd do by picking it up and putting it back on the table.

Eliezer Yudkowsky, a resident of Anxious Avenue in our chart above, said it well:

There are no hard problems, only problems that are hard to a certain level of intelligence. Move the smallest bit upwards [in level of intelligence], and some problems will suddenly move from "impossible" to "obvious." Move a substantial degree upwards, and all of them will become obvious.

There are a lot of eager scientists, inventors, and entrepreneurs in Confident Corner, but for a tour of the brightest side of the AI horizon, there's only one person we want as our tour guide.

Ray Kurzweil is polarizing. In my reading, I heard everything from godlike worship of him and his ideas to eye-rolling contempt for them. Others were somewhere in the middle: author Douglas Hofstadter, in discussing the ideas in Kurzweil's books, eloquently put forth that "it is as if you took a lot of very good food and some dog excrement and blended it all up so that you can't possibly figure out what's good or bad."

Whether you like his ideas or not, everyone agrees that Kurzweil is impressive. He began inventing things as a teenager, and in the following decades he came up with several breakthrough inventions, including the first flatbed scanner, the first scanner that converted text to speech (allowing the blind to read standard texts), the well-known Kurzweil music synthesizer (the first true electric piano), and the first commercially marketed large-vocabulary speech recognition. He's the author of five national bestselling books. He's well-known for his bold predictions and has a pretty good record of having them come true, including his prediction in the late '80s, a time when the internet was an obscure thing, that by the early 2000s it would become a global phenomenon. Kurzweil has been called a "restless genius" by The Wall Street Journal, "the ultimate thinking machine" by Forbes, "Edison's rightful heir" by Inc. Magazine, and "the best person I know at predicting the future of artificial intelligence" by Bill Gates. In 2012, Google co-founder Larry Page approached Kurzweil and asked him to be Google's Director of Engineering. In 2011, he co-founded Singularity University, which is hosted by NASA and sponsored partially by Google. Not bad for one life.

This biography is important. When Kurzweil articulates his vision of the future, he sounds fully like a crackpot, and the crazy thing is that he's not; he's an extremely smart, knowledgeable, relevant man in the world. You may think he's wrong about the future, but he's not a fool. Knowing he's such a legit dude makes me happy, because as I've learned about his predictions for the future, I badly want him to be right. And you do too. As you hear Kurzweil's predictions, many shared by other Confident Corner thinkers like Peter Diamandis and Ben Goertzel, it's not hard to see why he has such a large, passionate following, known as the singularitarians. Here's what he thinks is going to happen:

Timeline

Kurzweil believes computers will reach AGI by 2029 and that by 2045 we'll have not only ASI but a full-blown new world, a time he calls the singularity. His AI-related timeline used to be seen as outrageously overzealous, and it still is by many, but in the last 15 years the rapid advances of ANI systems have brought the larger world of AI experts much closer to Kurzweil's timeline. His predictions are still a bit more ambitious than those of the median respondent in Müller and Bostrom's survey (AGI by 2040, ASI by 2060), but not by that much.

In Kurzweil's depiction, the 2045 singularity is brought about by three simultaneous revolutions: in biotechnology, nanotechnology, and, most powerfully, AI.

Before we move on: nanotechnology comes up in almost everything you read about the future of AI, so come into this blue box for a minute so we can discuss it…

Nanotechnology Blue Box

Nanotechnology is our word for technology that deals with the manipulation of matter that's between 1 and 100 nanometers in size. A nanometer is a billionth of a meter, or a millionth of a millimeter, and this 1–100 nm range encompasses viruses (100 nm across), DNA (10 nm wide), and things as small as large molecules like hemoglobin (5 nm) and medium molecules like glucose (1 nm). If/when we conquer nanotechnology, the next step will be the ability to manipulate individual atoms, which are only one order of magnitude smaller (~0.1 nm).

To understand the challenge of humans trying to manipulate matter in that range, let's take the same thing on a larger scale. The International Space Station is 268 mi (431 km) above the Earth. If humans were giants so large their heads reached up to the ISS, they'd be about 250,000 times bigger than they are now. If you make the 1 nm to 100 nm nanotech range 250,000 times bigger, you get 0.25 mm to 2.5 cm. So nanotechnology is the equivalent of a human giant as tall as the ISS figuring out how to carefully build intricate objects using materials between the size of a grain of sand and an eyeball. To reach the next level, manipulating individual atoms, the giant would have to carefully position objects that are 1/40th of a millimeter, so small that normal-size humans would need a microscope to see them.
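If you want to check the giant analogy yourself, the numbers work out like this (a sketch assuming an average human height of about 1.7 m; everything else comes from the paragraph above):

```python
# Scale analogy from the blue box: a giant whose head reaches the ISS,
# and the 1-100 nm nanotech range blown up by the same factor.
iss_altitude_m = 431_000      # ~431 km (268 mi)
human_height_m = 1.7          # assumed average human height

scale = iss_altitude_m / human_height_m               # ~250,000x
nm = 1e-9                                             # one nanometer in meters
print(f"scale factor: {scale:,.0f}x")
print(f"1 nm   -> {1 * nm * scale * 1e3:.2f} mm")     # ~0.25 mm (a grain of sand)
print(f"100 nm -> {100 * nm * scale * 1e2:.1f} cm")   # ~2.5 cm (an eyeball)
print(f"0.1 nm -> {0.1 * nm * scale * 1e3:.3f} mm")   # ~0.025 mm, i.e. 1/40th of a millimeter
```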

Nanotech was first discussed by Richard Feynman in a 1959 talk, when he explained: "The principles of physics, as far as I can see, do not speak against the possibility of maneuvering things atom by atom. It would be, in principle, possible for a physicist to synthesize any chemical substance that the chemist writes down. How? Put the atoms down where the chemist says, and so you make the substance." It's as simple as that. If you can figure out how to move individual molecules or atoms around, you can make literally anything.

Nanotech became a serious field for the first time in 1986, when engineer Eric Drexler provided its foundations in his seminal book Engines of Creation, but Drexler suggests that those looking to learn about the most modern ideas in nanotechnology would be best off reading his 2013 book, Radical Abundance.

Gray Goo Bluer Box

We're now in a diversion in a diversion. This is very fun.

Anyway, I brought you here because there's this really unfunny part of nanotechnology lore I need to tell you about. In older versions of nanotech theory, a proposed method of nanoassembly involved the creation of trillions of tiny nanobots that would work in conjunction to build something. One way to create trillions of nanobots would be to make one that could self-replicate and then let the reproduction process turn that one into two, those two into four, four into eight, and in about a day, there'd be a few trillion of them ready to go. That's the power of exponential growth. Clever, right?

It's clever until it causes the grand and complete Earthwide apocalypse by accident. The issue is that the same power of exponential growth that makes it super convenient to quickly create a trillion nanobots makes self-replication a terrifying prospect. Because what if the system glitches, and instead of stopping replication once the total hits a few trillion as expected, they just keep replicating? The nanobots would be designed to consume any carbon-based material in order to feed the replication process, and unpleasantly, all life is carbon-based. The Earth's biomass contains about 10^45 carbon atoms. A nanobot would consist of about 10^6 carbon atoms, so 10^39 nanobots would consume all life on Earth, which would happen in 130 replications (2^130 is about 10^39), as oceans of nanobots (that's the gray goo) rolled around the planet. Scientists think a nanobot could replicate in about 100 seconds, meaning this simple mistake would inconveniently end all life on Earth in 3.5 hours.
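The doubling math in that paragraph roughly checks out; here it is spelled out (illustrative only, using the figures quoted above):

```python
import math

# Gray goo arithmetic: how many doublings to consume the biosphere, and how long.
carbon_atoms_in_biomass = 1e45     # figure quoted above
carbon_atoms_per_nanobot = 1e6     # figure quoted above
seconds_per_replication = 100      # assumed replication time

nanobots_needed = carbon_atoms_in_biomass / carbon_atoms_per_nanobot   # 1e39
doublings = math.ceil(math.log2(nanobots_needed))                      # 130, since 2**130 ≈ 1.4e39
hours = doublings * seconds_per_replication / 3600

print(f"doublings needed: {doublings}")                 # 130
print(f"time to consume all life: {hours:.1f} hours")   # ~3.6 hours
```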

Read the original here:

The Artificial Intelligence Revolution: Part 2 - Wait But Why

The Marketing Store London – TMS

ABOUT US

The Marketing Store grows brands. Their reach, their influence, their sales. We help brands like McDonald's, Carlsberg, adidas and more thrive in our new world of mass connectivity and influence.

We believe every transaction has the potential to be a personalised and influential interaction that in turn can inspire many more.

How? Our ambition is to unlock the true potential in everything we do, from single channel briefs to large-scale global brand strategies.

Our heritage, our understanding of all engagement is what drives us. Creativity, consumer intelligence and cutting-edge technology offer a set of bespoke participation tools that make meaningful, contagious and measurable connections with the right audiences.

At The Marketing Store we Move Millions.

Link:

The Marketing Store London - TMS

Molecular Cloning: Basics and Applications | Protocol

Molecular cloning is a set of techniques used to insert recombinant DNA from a prokaryotic or eukaryotic source into a replicating vehicle such as plasmids or viral vectors. Cloning refers to making numerous copies of a DNA fragment of interest, such as a gene. In this video you will learn about the different steps of molecular cloning, how to set up the procedure, and different applications of this technique.

At least two important DNA molecules are required before cloning begins. First, and most importantly, you need the DNA fragment you are going to clone, otherwise known as the insert. It can come from a prokaryote, eukaryote, an extinct organism, or it can be created artificially in the laboratory. By using molecular cloning we can learn more about the function of a particular gene.

Second, you need a vector. A vector is plasmid DNA used as a tool in molecular biology to make more copies of, or produce a protein from, a certain gene. Plasmids are an example of a vector; they are circular, extrachromosomal DNA that is replicated by bacteria.

A plasmid typically has a multiple cloning site, or MCS; this area contains recognition sites for different restriction endonucleases, also known as restriction enzymes. Different inserts can be incorporated into the plasmid by a technique called ligation. The plasmid vector also contains an origin of replication, which allows it to be replicated in bacteria. In addition, the plasmid has an antibiotic resistance gene: if bacteria incorporate the plasmid, they will survive in media that contains the antibiotic. This allows for the selection of bacteria that have been successfully transformed.

The insert and vector are introduced into a host cell organism; the most common host used in molecular cloning is E. coli. E. coli grows rapidly, is widely available, and has numerous different cloning vectors commercially produced. Eukaryotes, like yeast, can also be used as host organisms for vectors.

The first step of the general molecular cloning procedure is to obtain the desired insert, which can be derived from DNA or mRNA from any cell type. The optimal vector and its host organism are then chosen based on the type of insert and what will ultimately be done with it. A polymerase chain reaction (PCR) based method is often used to replicate the insert.

Then, by using a series of enzymatic reactions, the insert and vector are joined together and introduced into the host organism for mass replication. Replicated vectors are purified from bacteria and, following restriction digestion, analyzed on a gel. Gel-purified fragments are later sent for sequencing to verify that the insert is the desired DNA fragment.

Let's have a more detailed look at how molecular cloning is conducted. Before beginning, you will want to plan out your cloning strategy, prior to making any cloning attempt at the bench. For example, any given plasmid vector will provide you with a finite number of restriction sites to incorporate the insert via the multiple cloning site. You'll need to choose restriction sites that are not found in your insert so that you do not cleave it. You might be left with a situation where you are forced to join a blunt-end fragment with one that has an overhang. If so, then using the Klenow fragment to set up a blunt-end ligation might be your only option to get the insert into your desired vector. Understanding the various molecular cloning tools at your disposal, as well as coming up with a careful strategy before you begin cloning, can be an immense time saver.
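As a concrete illustration of that planning step (a hypothetical sketch: the insert sequence and the short enzyme list are made up for the example, not taken from the video), you could screen candidate enzymes' recognition sites against your insert before choosing where to cut:

```python
# Hypothetical example: keep only restriction enzymes whose recognition
# sites do NOT occur inside the insert, so the insert itself isn't cleaved.
RECOGNITION_SITES = {
    "EcoRI":   "GAATTC",
    "BamHI":   "GGATCC",
    "HindIII": "AAGCTT",
    "XhoI":    "CTCGAG",
}

insert = "ATGGAATTCGCCACCCTGAAGGATCTGACCTAA"   # made-up insert sequence

cuts_insert = [name for name, site in RECOGNITION_SITES.items() if site in insert]
safe_to_use = [name for name, site in RECOGNITION_SITES.items() if site not in insert]

print("avoid (site found in insert):", cuts_insert)   # ['EcoRI']
print("safe choices for the MCS:", safe_to_use)       # ['BamHI', 'HindIII', 'XhoI']
```

In a real workflow you would also check that the chosen sites are present in the vector's multiple cloning site and compatible with your overall ligation strategy.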

The source of DNA for molecular cloning can be isolated from almost any type of cell or tissue sample through simple extraction techniques. Once isolated, PCR can be used to amplify the insert.

Once the insert is amplified, both it and the vector are digested by restriction enzymes, also known as restriction endonucleases.

Once digested, the insert and vector can be run on a gel and purified by a process called gel purification. With respect to the vector, this step will help to purify linearized plasmid from uncut plasmid, which tends to appear as a high molecular weight smear on a gel.

After gel purifying the digests, the insert is ligated or joined to the plasmid, via an enzyme called DNA ligase.

Generally speaking, it is always a good idea to set up ligations so that the ratio of insert to vector is 3 to 1, which ensures that only a small amount of vector will self-ligate. Once the ligation has been set up on ice, it is incubated anywhere from 14 to 25 °C, for 1 hour to overnight.
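As a worked example of that 3:1 ratio (an illustrative sketch with made-up fragment sizes and masses, not numbers from the video), the usual way to translate a molar ratio into a mass of insert is:

```python
# Convert a desired insert:vector molar ratio into nanograms of insert,
# using the standard relation: insert_ng = vector_ng * (insert_kb / vector_kb) * ratio
vector_ng = 50.0     # mass of linearized vector in the ligation (assumed)
vector_kb = 3.0      # vector length in kilobases (assumed)
insert_kb = 1.0      # insert length in kilobases (assumed)
ratio = 3            # 3:1 insert:vector molar ratio, as recommended above

insert_ng = vector_ng * (insert_kb / vector_kb) * ratio
print(f"add ~{insert_ng:.0f} ng of insert")   # ~50 ng for these example sizes
```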

Next, transformation is performed to introduce the plasmid vector into the host that will replicate it.

Following transformation, bacteria are plated on agar plates with antibiotic and incubated overnight at 37 °C. Because the plasmid contains an antibiotic resistance gene, successful transformation will produce bacterial colonies when grown on agar plates in the presence of antibiotics. Individual colonies can then be picked from the transformed plate, placed into liquid growth media in numbered tubes, and put into a shaking incubator for expansion. A small volume of liquid culture is added to a numbered agar plate, while the rest of the culture moves on to plasmid purification. The numbering scheme that denotes the identity of the bacterial colonies from which the plasmids will eventually be purified is maintained throughout the plasmid purification process.

A sample of purified plasmid is then cut with restriction enzymes. The digest is then loaded and run on a gel in order to check for the presence of the insert, which will verify that the bacterial colony was transformed with a plasmid containing an insert and not a self-ligated plasmid. Bacteria verified to have been transformed with an insert-containing plasmid are expanded for further plasmid purification. Sequencing is performed as a final verification step to confirm that your gene of interest has been cloned.

Molecular cloning can be used for a near limitless number of applications. For instance, when an mRNA template is reverse transcribed to form cDNA, or complementary DNA, by an enzyme called reverse transcriptase, and PCR is then used to amplify the cDNA, molecular cloning can be used to create a cDNA library: a library of all of the genes expressed by a given cell type.

Molecular cloning can also be employed to take a series of genes, or a gene cluster, from one bacterial strain and reorganize them into plasmids that are transformed into another strain, so that an entire biosynthetic pathway can be recreated to produce a complex molecule.

Through molecular cloning, a mutant library can be generated by expressing a target plasmid in a special bacterial strain that uses an error-prone polymerase when cultured at certain temperatures. The mutations can be characterized by sequencing. Bacteria transformed with mutant genes can then be tested with different drugs or chemicals to see which bacterial colonies have evolved drug resistance.

Thanks to molecular cloning, reporter genes can be incorporated into DNA plasmids. A common reporter gene is green fluorescent protein, or GFP, which emits green fluorescence when exposed to UV light. A reporter gene can also be inserted into an alphavirus to show infection in mosquitoes and transmissibility in cells.

You've just watched JoVE's video on molecular cloning. You should now understand how molecular cloning works and how the technique can be used in molecular biology. As always, thanks for watching!

Alphavirus Transducing System: Tools for Visualizing Infection in Mosquito Vectors

Isolation of Ribosome Bound Nascent Polypeptides in vitro to Identify Translational Pause Sites Along mRNA

Optimized Analysis of DNA Methylation and Gene Expression from Small, Anatomically-defined Areas of the Brain

Single Oocyte Bisulfite Mutagenesis

Large Insert Environmental Genomic Library Production

DNA Gel Electrophoresis

Bacterial Transformation: The Heat Shock Method

DNA Ligation Reactions

Restriction Enzyme Digests

Molecular cloning is a set of methods which are used to insert recombinant DNA into a vector, a carrier of DNA molecules that will replicate recombinant DNA fragments in host organisms. The DNA fragment, which may be a gene, can be isolated from a prokaryotic or eukaryotic specimen. Following isolation of the fragment of interest, or insert, both the vector and insert must be cut with restriction enzymes and purified. The purified pieces are joined together through a technique called ligation. The enzyme that catalyzes the ligation reaction is known as ligase.

This video explains the major methods that are combined, in tandem, to comprise the overall molecular cloning procedure. Critical aspects of molecular cloning are discussed, such as the need for a molecular cloning strategy and how to keep track of transformed bacterial colonies. Verification steps, such as checking purified plasmid for the presence of the insert with restriction digests and sequencing, are also mentioned.

JoVE Science Education Database. Basic Methods in Cellular and Molecular Biology. Molecular Cloning. JoVE, Cambridge, MA, doi: 10.3791/5074 (2017).

JoVE Immunology and Infection

Aaron Phillips1, Eric Mossel1, Irma Sanchez-Vargas1, Brian Foy1, Ken Olson1

1Microbiology, Immunology, and Pathology, Colorado State University

Reporter constructs can be incorporated into DNA plasmids using molecular cloning. A common reporter gene is green fluorescent protein (GFP), which emits a green fluorescence when exposed to UV light. A reporter gene was inserted into an alphavirus to show viral infection in mosquitoes and viral transmissibility in cells.

JoVE Biology

Sujata S. Jha1, Anton A. Komar1

1Center for Gene Regulation in Health and Disease, Department of Biological, Geological and Environmental Sciences, Cleveland State University

Here, molecular cloning is used to identify translation pause sites in the mRNA of a gene of interest. The DNA template is transcribed and translated in vitro, followed by the isolation and characterization of nascent polypeptides, that is, newly forming amino acid chains.

JoVE Neuroscience

Marc Bettscheider1, Arleta Kuczynska1, Osborne Almeida1, Dietmar Spengler1

1Max Planck Institute of Psychiatry

This video article shows a step-by-step protocol for examining the epigenetic modifications of genomic DNA isolated from the brains of differentially-aged mice through molecular cloning. Molecular cloning techniques are used to analyze DNA methylation of samples from the brain.

JoVE Biology

Michelle M. Denomme1,2,3, Liyue Zhang3, Mellissa R.W. Mann1,2,3

1Department of Obstretrics & Gynaecology, Schulich School of Medicine and Dentistry, University of Western Ontario, 2Department of Biochemistry, Schulich School of Medicine and Dentistry, University of Western Ontario, 3Children's Health Research Institute

The goal of this experiment is to measure DNA methylation in a single oocyte, a female germ cell, with the use of molecular cloning. Nested PCR is used to amplify the regions of DNA followed by molecular cloning to show methylation at CpG dinucleotides, sites where cytosine is next to guanine.

JoVE Biology

Marcus Taupp1, Sangwon Lee1, Alyse Hawley1, Jinshu Yang1, Steven J. Hallam1

1Department of Microbiology and Immunology, University of British Columbia - UBC

Here, researchers collected native biomass samples to isolate pieces of genomic DNA and used molecular cloning to ligate DNA fragments of appropriate size into fosmid vectors. Fosmids are cloning vectors based on the bacterial F (fertility) plasmid and can hold relatively large inserts. DNA from the transformed bacteria is packaged into virus particles to create a phage genomic DNA library.

JoVE (Journal of Visualized Experiments) is the world's first PubMed-indexed scientific video journal. Its mission is to advance scientific research and education by increasing productivity, reproducibility, and efficiency of knowledge transfer for scientists, educators, and students worldwide through visual learning solutions.

See the original post here:

Molecular Cloning: Basics and Applications | Protocol

Nihilism Nihilism

Why Nihilism, A Practical Definition

As research probes further into the complexities of the human mind, it becomes clear that the mind is far from being a composite thing which is an actor upon its world through thoughts; rather, thoughts compose the mind, in the form of connections and associations wired into the tissue of the brain, creating circuitry for future associations of like stimulus. The schematic of this intellectual machine builds separate routing for situations it is likely to encounter, based on grouped similarities in events or objects. In this view of our computing resources, it is foolish to allow pre-processing to intervene, as it creates vast amounts of wiring which serve extremely similar purposes, thus restricting the range of passive association (broad-mindedness) or active association (creativity) possible within the switching mechanism of the brain as a whole. As here we are devout materialists, the brain and mind are seen as equatable terms.

The positive effects of nihilism on the mind of a human being are many. Like the quieting of distraction and distortion within the mind brought about by meditative focus, nihilism pushes aside preconception and brings the mind to focus within the time of the present. Influences which could radically skew our perceptions (emotions, nervousness, paranoia, or upset, to name a few) fade into the background, and the mind becomes more open to the task at hand without becoming spread across contemplations of potential actions occurring at different levels of scale regarding the current task. Many human errors originate in perceiving an event to be either more important than it is, or to be symbolically indicative of relevance on a greater scale than the localized context which it affects, usually because of a conditioned preference for the scale of eventiture existing before the symbolic event.

Nihilism as a philosophical doctrine must not be confused with a political doctrine such as anarchism; political doctrines (as religions are) remain fundamentally teleological in their natures and thus deal with conclusions derived from evidence, where nihilism as a deontological process functions at the level of the start of perception, causing less of a focus on abstracting a token ruleset defining the implications of events than a rigorous concentration on the significance of the events as they are immediately affecting the situation surrounding them. For example, a nihilistic fighter does not bother to assess whether his opponent is a better fighter than the perceiving agency, but fights to his best ability (something evolution would reward, as the best fighter does not win every fight, only most of them). As a result of this conditioning, nihilism separates the incidence of events/perceptions from causal understanding by removing expectations of causal origins and implications to ongoing eventiture.

Understanding nihilism requires one to drop the pretense of nihilistic philosophy being an endpoint, and to accept it as a doorway. Nihilism self-reduces; the instant one proclaims "There is no value!" a value has been created. Nihilism strips away conditioning at the unconscious and anticipatory levels of structure in the mind, allowing for a greater range of possibility and quicker action. Further, it creates a powerful tool to use against depression or anxiety, neurosis and social stigma. Since it is a concept necessarily in flux, as it provides a starting point for analysis in any situation but no preconditioned conclusions, it is post-deconstructive in that it both removes the unnecessary and creates new space for intellectual development at the same time.

Text quoted from S.R. Prozak's "Nihilism" at the American Nihilist Underground Society.

Visit link:

Nihilism Nihilism

Atheism | Topics | Christianity Today

And then I shared it with the man the government sent to kill me.

Virginia Prodan / September 23, 2016

What life was like for unbelievers long before Christopher Hitchens and company arrived on the scene.

Timothy Larsen / August 22, 2016

I had no untapped, unanswered yearnings. All was well in the state of Denmark. And then it wasn't.

Nicole Cliffe / May 20, 2016

How I learned to see my unbelieving husband through God's eyes.

Stina Kielsmeier-Cook, guest writer / May 19, 2016

What we really need, says Kevin Seamus Hasson, is a different understanding of the God our nation is under.

Interview by Matt Reynolds / March 18, 2016

(UPDATED) However, survey also finds Trump is one of few candidates who doesn't have to be religious to be deemed great.

Sarah Eekhoff Zylstra / January 27, 2016

The bombing of Hiroshima and Nagasaki destroyed Joy Davidman's worldview, too.

Abigail Santamaria / August 18, 2015

And everything else. How I learned he's an all-or-nothing Lord.

Craig Keener / May 20, 2015

Nancy Pearcey equips believers with tools to expose error and promote truth.

Richard Weikart / April 10, 2015

Planting in Highly-Church Areas; Atheists Believe in Heaven; Alex and Brett Harris

Ed Stetzer / November 24, 2014

End of Mideast Christianity?; Atheism in China; Exercising Power and Wise Boundaries

Ed Stetzer / November 19, 2014

Inside my own revolution.

Guillaume Bignon / November 17, 2014

But question remains: Will IRS agree with DOJ that atheists count as 'ministers of the gospel'?

Sarah Eekhoff Zylstra / November 13, 2014

Sarah Bowler on what she's learned about God from unbelievers.

Ed Stetzer / October 16, 2014

Bart Campolo's departure from Christianity: some reflections about faith and (our) families.

Ed Stetzer / September 30, 2014

The temptation of utilitarianism.

Amy Julia Becker / September 12, 2014

Humanists say LifeWay Research was biased, but both polls are helpful

Ed Stetzer / September 5, 2014

IRS and Atheists; Getting Fired from Your First Pastorate; Transformational Churches

Ed Stetzer / August 12, 2014

New survey finds even liberals largely favor Christians over other types of marriage partners.

Kate Tracy / June 17, 2014

How the realm of make-believe can bring us toward God.

Rachel Marie Stone / June 10, 2014

What a Kentucky court ruling implies for a high-profile Wisconsin challenge to the clergy housing allowance.

Sarah Eekhoff Zylstra / May 21, 2014

For the UK writer, Christianity must first make sense in the realm of lived experience.

Interview by John Wilson / April 3, 2014

NYT's 3 Worst Corrections on Christian Holidays; 3 Questions for Managing Your Boss; Who are the "Nones"?

Ed Stetzer / April 2, 2014

Are Evangelicals Bad for Marriage?; Is Atheism Irrational?; Great Teammates

Ed Stetzer / February 17, 2014

We're confused by one California pastor's year without God.

Laura Turner / January 9, 2014

Pentecostals and Charismatics; Life Apart from God; Ted Turner and Heaven

Ed Stetzer / November 21, 2013

City Density; Bias Toward Action; Atheist Megachurches

Ed Stetzer / November 18, 2013

Despite outpouring of support, a few fellow students remain critical of atheist senior at Northwest Christian.

Timothy C. Morgan / November 12, 2013

(UPDATED) Legal challenge to pastor tax break takes 'fascinating turn.'

Jeremy Weber / August 19, 2013

A new study highlights important differences between nonbelievers. But they have many things in common, too.

George Yancey / August 12, 2013

Willow Creek; Young Atheists; Kingdom of God; Beth Moore on The Exchange

Ed Stetzer / June 10, 2013

I tried to face down an overwhelming body of evidence, as well as the living God.

Jordan Monge / April 4, 2013

As a leftist lesbian professor, I despised Christians. Then I somehow became one.

Rosaria Champagne Butterfield / February 7, 2013

Susan Jacoby's biography of Robert Ingersoll mistakes a likeable fellow with a second-rate mind for a "freethinking" hall-of-famer.

Timothy Larsen / January 29, 2013

Children are statistically significant factor in church attendance by atheist scientists.

Melissa Steffan / January 2, 2013

U.K. group would offer alternative 'Scout Promise' that removes reference to God.

Melissa Steffan / December 18, 2012

In my questions for God, I'm like my kids. Sometimes sincere in my doubts. Sometimes whiny, repetitive, insistent. Often not even asking God directly but allowing my doubts to protect me from talking to God, or listening to God, at all.

December 10, 2012

Leah Libresco announced her conversion Monday after lengthy exploration of morality on her blog.

Jeremy Weber / June 19, 2012

See the original post:

Atheism | Topics | Christianity Today

NASA – Astronomy Picture of the Day – apod.nasa.gov

Discover the cosmos! Each day a different image or photograph of our fascinating universe is featured, along with a brief explanation written by a professional astronomer.

2017 January 25

Explanation: Cassini is being prepared to dive into Saturn. The robotic spacecraft that has been orbiting and exploring Saturn for over a decade will end its mission in September with a spectacular atmospheric plunge. Pictured here is a diagram of Cassini's remaining orbits, each taking about one week. Cassini is scheduled to complete a few months of orbits that will take it just outside Saturn's outermost ring F. Then, in April, Titan will give Cassini a gravitational pull into Proximal orbits, the last of which, on September 15, will impact Saturn and cause the spacecraft to implode and melt. Cassini's Grand Finale orbits are designed to record data and first-ever views from inside the rings -- between the rings and planet -- as well as some small moons interspersed in the rings. Cassini's demise is designed to protect any life that may occur around Saturn or its moons from contamination by Cassini itself.

Continue reading here:

NASA - Astronomy Picture of the Day - apod.nasa.gov

Russian Futurism – Wikipedia

Russian Futurism was a movement of Russian poets and artists who adopted the principles of Filippo Marinetti's "Futurist Manifesto".

Russian Futurism may be said to have been born in December 1912, when the Moscow-based literary group Hylaea (Russian: Гилея, Gileya) (initiated in 1910 by David Burlyuk and his brothers at their estate near Kherson, and quickly joined by Vasily Kamensky and Velimir Khlebnikov, with Aleksey Kruchenykh and Vladimir Mayakovsky joining in 1911)[1] issued a manifesto entitled A Slap in the Face of Public Taste (Russian: Пощёчина общественному вкусу).[2] Other members included artists Mikhail Larionov, Natalia Goncharova, Kazimir Malevich, and Olga Rozanova.[3] Although Hylaea is generally considered to be the most influential group of Russian Futurism, other groups were formed in St. Petersburg (Igor Severyanin's Ego-Futurists), Moscow (Tsentrifuga, with Boris Pasternak among its members), Kiev, Kharkov, and Odessa.

Like their Italian counterparts, the Russian Futurists were fascinated with the dynamism, speed, and restlessness of modern machines and urban life. They purposely sought to arouse controversy and to gain publicity by repudiating the static art of the past. The likes of Pushkin and Dostoevsky, according to them, should be "heaved overboard from the steamship of modernity". They acknowledged no authorities whatsoever; even Filippo Tommaso Marinetti, when he arrived in Russia on a proselytizing visit in 1914, was obstructed by most Russian Futurists, who did not profess to owe him anything.

In contrast to Marinetti's circle, Russian Futurism was primarily a literary rather than a plastic philosophy. Although many poets (Mayakovsky, Burlyuk) dabbled with painting, their interests were primarily literary. However, such well-established artists as Mikhail Larionov, Natalia Goncharova, and Kazimir Malevich found inspiration in the refreshing imagery of Futurist poems and experimented with versification themselves. The poets and painters collaborated on such innovative productions as the Futurist opera Victory Over the Sun, with music by Mikhail Matyushin, texts by Kruchenykh and sets contributed by Malevich.

Members of Hylaea elaborated the doctrine of Cubo-Futurism and assumed the name of budetlyane (from the Russian word budet 'will be'). They found significance in the shape of letters, in the arrangement of text around the page, in the details of typography. They considered that there is no substantial difference between words and material things, hence the poet should arrange words in his poems like the artist arranges colors and lines on his canvas. Grammar, syntax, and logic were often discarded; many neologisms and profane words were introduced; onomatopoeia was declared a universal texture of verse. Khlebnikov, in particular, developed "an incoherent and anarchic blend of words stripped of their meaning and used for their sound alone",[4] known as zaum.

With all this emphasis on formal experimentation, some Futurists were not indifferent to politics. In particular, Mayakovsky's poems, with their lyrical sensibility, appealed to a broad range of readers. He vehemently opposed the meaningless slaughter of World War I and hailed the Russian Revolution as the end of that traditional mode of life which he and other Futurists ridiculed so zealously.

War correspondent Arthur Ransome and five other foreigners were taken to see two of the Bolshevik propaganda trains in 1919 by their organiser, Burov. He first showed them the "Lenin", which had been painted a year and a half ago when, as fading hoardings in the streets of Moscow still testify, revolutionary art was dominated by the Futurist movement. Every carriage is decorated with most striking but not very comprehensible pictures in the brightest colours, and the proletariat was called upon to enjoy what the pre-revolutionary artistic public had for the most part failed to understand. Its pictures are art for art's sake, and can not have done more than astonish, and perhaps terrify, the peasants and the workmen of the country towns who had the luck to see them. The "Red Cossack" is quite different. As Burov put it with deep satisfaction, "At first we were in the artists' hands, and now the artists are in our hands." (The other three trains were the "Sverdlov", the "October Revolution", and the "Red East".) Initially the Department of Proletarian Culture had delivered Burov bound hand and foot to a number of Futurists, but now the artists had been brought under proper control.[5]

After the Bolsheviks gained power, Mayakovsky's group, patronized by Anatoly Lunacharsky, the Bolshevik Commissar for Education, aspired to dominate Soviet culture. Their influence was paramount during the first years after the revolution, until their program, or rather lack thereof, was subjected to scathing criticism by the authorities. By the time OBERIU attempted to revive some of the Futurist tenets during the late 1920s, the Futurist movement in Russia had already ended. The most militant Futurist poets either died (Khlebnikov, Mayakovsky) or preferred to adjust their very individual style to more conventional requirements and trends (Aseyev, Pasternak).

See the article here:

Russian Futurism - Wikipedia

New Jersey's Holistic Doctors – Natural Jersey

Advanced Health & Wellness
Dr. Geoffrey Channon Reed, Chiropractic Physician, Clinton, NJ, 908-735-8988
Services include chiropractic, rehabilitation & massage

A Life in Balance Nutrition
Katie Vnenchak, Holistic Nutritionist and Meditation Coach, 1 Stangl Rd., Flemington, NJ 08822, 732-864-6063, alifeinbalancept.com
Weight loss, kids' nutrition, meditation, nutritional therapy

Bellewood Wellness Center
Rt. 614, Pattenburg, NJ
Services include massage, yoga, reiki, acupuncture & more

Creative Alternatives of NJ, LLC
Karolyn Saracino, BA, CMT, Califon, NJ
Craniosacral therapy, feng shui & integrative bodywork

Divine Health, LLC
1390 Rt. 22 West #204, Lebanon, NJ, 908-236-8042
Whole food nutrition, health & wellness & nutrition response testing

Dr. Fuhrman's Medical Associates
4 Walter E. Foran Blvd., Flemington, NJ
Joel Fuhrman, M.D.; Jay Benson, D.O.; Kathleen Mullin, M.D.; Jyoti Matthews, M.D.; Michael Klaper, M.D.
Continuing and comprehensive health care for adults and children. Dr. Fuhrman specializes in preventing and reversing disease through a nutrient-rich diet. He has also created The Nutritional Education Institute to provide education and training to those interested in pursuing nutritional science as a therapeutic intervention for disease reversal and prevention.

Eat Holistic, LLC
Kirstin Nussgruber, C.N.C., EMB
Holistic cancer-fighting nutritional consulting, with special attention given to cancer patients, cancer survivors and cancer prevention education
eatholistic@gmail.com, 908-512-2220

Family Chiropractic Center
Dr. John Dowling, D.C., Flemington, NJ, 908-788-5050
Gentle low-force chiropractic


Why Donald Trump’s Recent NATO Comments Caused Such an Uproar …

Donald Trump shocked foreign-policy professionals and observers when he remarked to The New York Times that if he were president, the United States might not come to the defense of an attacked NATO ally that hadn't fulfilled its obligation to make payments. The remark broke with decades of bipartisan commitment to the alliance and, as Jeffrey Goldberg wrote in The Atlantic, aligned well with the interests of Russia, whose ambitions NATO was founded largely to contain. One Republican in Congress openly wondered whether his party's nominee could be seemingly so pro-Russia "because of connections and contracts and things from the past or whatever."

It's not unlike Trump to make shocking statements. But these stoked particular alarm, not least among America's allies, about the candidate's suitability for the United States presidency. So what's the big deal? What does NATO actually do?


The North Atlantic Treaty Organization was formed, three years, two months, and 10 days after Donald J. Trump was born, to keep peace in post-World War II Europe. But Lord Hastings Ismay, the alliance's first secretary general and a friend of Winston Churchill, is said to have remarked that the alliance really had three purposes: to keep the Russians out, the Americans in, and the Germans down.

The treaty had evolved out of an initiative of the so-called Benelux countries (the vertical stripe of Europe comprising Belgium, the Netherlands, and Luxembourg), which were worried above all about keeping Germany down after World War II. In signing on, the 12 original members who joined in 1949 agreed to uphold peace and international law among themselves. And importantly, they agreed to Article 5, which can obligate member states to come to one another's defense should one of them be attacked in continental Europe or North America (or in territories north of the Tropic of Cancer). An additional 16 countries have joined since the alliance's founding.

During the Cold War, though, keeping Russia out became priority one. It stayed a priority, to one degree or another, even after the breakup of the Soviet Union. In 2014, with Russia's invasion of Ukraine raising concerns that a NATO state could be next, the alliance made its most formal statement about the minimum defense spending obligations each member owed. Each country, the alliance stated, should try to meet the goal of spending 2 percent of its GDP on defense within a decade. It was those obligations Trump was referring to, but unlike the Article 5 collective-defense requirement, the spending target is not legally binding.

Trump's comments throw the "keeping America in" function of NATO into question for the first time. I asked Michael Mandelbaum of Johns Hopkins University's School of Advanced International Studies, who is an expert on NATO and American foreign policy, what it would mean if Trump put his ideas about the alliance into practice, and about what role the alliance has played historically. Mandelbaum is the author of Mission Failure: America and the World in the Post-Cold War Era. In addition to detailing how NATO has helped constrain European nations from fighting among themselves, Mandelbaum followed up after our conversation to note one more benefit of the alliance: NATO has been an effective measure against nuclear proliferation. Security guarantees may have helped prevent countries like Germany and Japan from seeking their own nuclear weapons (a legacy Trump has also questioned). Our conversation has been edited and condensed for clarity.

Nicholas Clairmont: If a NATO country were invaded [and invoked] Article 5, and the other member states didn't come to its defense, what would happen?

Michael Mandelbaum: Well, they would be violating their treaty obligations. And so you would have to assume that the North Atlantic Treaty and NATO as a military organization would become null and void.

Clairmont: One of the positive effects of NATO that is sometimes touted is that NATO countries generally don't go to war with one another. Is that valid?

Mandelbaum: That has generally been true. You might make an exception for the Turkish invasion and occupation of the northern part of Cyprus.

NATO turned out to be part of the solution to the problem that had bedeviled and in some ways devastated Europe for 75 years, between the beginning of the Franco-Prussian War and the end of World War II. And that is the German problem, which was how to fit Germany into Europe in a way that was acceptable both to Europe and to Germany. Dividing Germany, and enveloping its two parts in military alliances led by a stronger power, turned out to be a stable solution. So, it did serve that purpose. And it certainly helped to deter the Soviet Union. There's a lot of debate about whether Stalin or Khrushchev was ever really serious about invading. But it's an unanswerable question even with the Russian documents, and we don't have all of them. And it's particularly unanswerable, if I can use that ungrammatical construction, because we don't know what Soviet attitudes would have been if there had been no NATO.

Clairmont: What do you think about Trump's comments about NATO in general? Do you think making them was a good idea?

Mandelbaum: Well, they were certainly irresponsible. Although you have to qualify that, because to call them irresponsible might imply that Trump really had an understanding of what he was doing. And I don't get the impression that he does.

I think his two defining features are his temperament, and his ignorance.

Clairmont: His claim is: It's bad for the U.S. to go on sustaining NATO, because we pay a great deal more for our defense, by percent, than do a lot of other NATO members. And that's the only reason the alliance is sustainable, and that we need to make a credible threat that America is willing to walk away and stop basically footing the bill for NATO, in order to get everyone else to pay up. One of the things I'm exploring is that he has not understood how much value NATO provides to the United States.

Mandelbaum: He looks at everything as a real estate deal: that we're not getting enough.

I would make two points. One is that, although the burden of the common defense is a bit lopsided, with the United States paying more than what American administrations have considered our fair share, it's not as lopsided as Donald Trump seems to think. America's allies really do make contributions. Especially in Asia. And, it also must be borne in mind that the United States has a global military. So, a lot of the American defense budget, and the budget that can be assigned to NATO or to Japan, is naval and air force. Which, presumably, the United States would want to have anyway. Maybe not to the same extent, but the Navy is a senior service. We've had one since the early 19th century. We're not going to give it up. So that's the first point.

The second point is: I do think that one consequence of what Trump has been saying, and what Obama said in the Jeffrey Goldberg interview [for The Atlantic cover story The Obama Doctrine], is that whoever is elected, there will be pressure to get the Europeans to pay more. If Mrs. Clinton is elected, she will feel that pressure, because it's been placed on the national agenda as an issue.

Clairmont: Do you see a connection at all between Trump's equivocation about honoring NATO Article 5, and Obama's distinction between core and non-core interests, and [his discussion of] free riders, in The Obama Doctrine?

Mandelbaum: Well, they're connected by inference. But if you have signed a treaty to protect a country such as Estonia, Latvia, or Lithuania, that would seem to make it a core interest.

Clairmont: Russia has made military incursions in Chechnya, Georgia, Moldova, and Ukraine, all non-NATO countries. And one gets the sense that [Russian President Vladimir] Putin has designs on Estonia [as well as the other Baltic states, Latvia and Lithuania], which are NATO countries. But he hasn't done anything in those countries. Is this because NATO, so far, works?

Mandelbaum: I think the fact that Ukraine and Georgia were not in NATO certainly made them attractive targets. And now the Baltic states are in question. They're not defensible, at least not with the force the United States and NATO have there. So they are in some sense the equivalent of the Cold War status of West Berlin. But Putin has lots of ways to harass the Baltics: cyberattacks, stirring up ethnic Russians. So, he could make a lot of trouble for Estonia, Latvia, and Lithuania without having Russian troops cross the border between them and Russia.

When NATO expansion was proposed, it was presented by the Clinton administration as a way to unite Europe. And those of us who were opposed 20 years ago said: To the contrary, this is going to create a line of division in Europe. And so it did. It would have been a line of division even if only Russia had been excluded. But for various reasons Georgia and Ukraine were also excluded, and now they are in no-man's land.

Clairmont: Walter Russell Mead, the foreign-policy writer and my former boss, sometimes says that if you put up signs over one half of a lake that say no fishing, people are going to make an assumption about the other half of that lake.

Mandelbaum: There is something to that.

I think that although NATO expansion was a terrible mistake, and a very costly one, in that Russia might well have a different foreign policy than it does if not for NATO expansion and all that followed, precisely because of what Russia has become, there is a need for NATO. Europe is important to the United States. But it's true that the Europeans pay less than what every American president since Eisenhower regarded as their fair share. President Obama called the Europeans free riders, and to some degree indeed they are. They have been for over 60 years, dating back to 1952 and the Lisbon Agreement [on NATO Force Levels]. The idea was that NATO should have many more ground troops than it had, and they would come from the Europeans. But the Europeans never stumped up.

Clairmont: Can you tell me more about the Lisbon Agreement? The discussion of force levels did not begin until after the treaty was inked in '49?

Mandelbaum: No, it was a few years afterwards. And there was another, later point at which the Kennedy administration, because of changes in the nuclear balance, adopted a policy of flexible response, which meant that there needed to be more NATO ground troops. And the Europeans agreed in principle, but never supplied them. I wrote about this in the first book that I ever published, called The Nuclear Question.

Clairmont: So, is the requirement to spend 2 percent as binding as the Article 5 collective self-defense requirement? Is it legally required as a term of membership?

Mandelbaum: No, it is not in the treaty.

Clairmont: Do you have any closing points?

Mandelbaum: The Europeans have not quite been free riders, but they have pulled less than their weight. And the case that we are paying an inordinate amount for collective defense is sort of true in the Pacific with Japan. Although, the United States does get economic benefits. That is, the Japanese pay a lot of the cost of the bases, and if we wanted to base American troops in the United States rather than overseas, it would be expensive. So NATO is not exactly a paying proposition, and it's not intended to be a paying proposition.

But simply abandoning NATO would be costly, just in economic terms. And it would be very costly in geopolitical terms.

Clairmont: Is NATO worthwhile? Is the world a better, more peaceful place for America's being in NATO and being willing to honor Article 5?

Mandelbaum: Yes, it is.

Christopher I. Haugh contributed reporting.


EverGreenCoin – Environmental Green Causes, nurtured by …

What is EverGreenCoin?

EverGreenCoin is much more than a new currency, a new 'cryptocurrency' as it's called. Cryptocurrency is a sort of digital money that can be used as a store of value or in exchange for goods and services. The EverGreenCoin currency itself is only the mechanism leveraged to nourish our more important focus, taking responsible care of our environment and the world we live in.

EverGreenCoin is a descendant of Bitcoin, and EverGreenCoin inherited some great traits from its ancestors. Traits like being able to transfer value anywhere in the world with near-zero fees, regardless of borders. Zero risk of personal information loss or theft, because personal information is never required. Zero manipulation by governments and banks, because EverGreenCoin is not printed, or 'mined' as the case may be, out of thin air. Rather, the supply is finite and predetermined, the rates never change, and only the free market dictates its price. But we, the environmentally awake, will determine its true value.

EverGreenCoin has taken its ancestral traits and built upon them, in ways friendlier to both our planet and the people storing, spending, and receiving value with EverGreenCoin. In large part, this comes from Proof of Stake mining. Proof of Stake replaces the Proof of Work methodology for confirming transactions and securing the record of transactions that have happened in the past. This record is called a blockchain. For maintaining the blockchain through mining, you are rewarded, and this is true for both Proof of Work and Proof of Stake.

The difference is that with Proof of Stake you are not wasting electricity or gambling on what your reward amount might be. With EverGreenCoin your reward is always 7% annually, and the energy consumed is no greater than running a word processor on your computer; staking can be done in the background during the times you already have your computer on.
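To make the 7% figure concrete, here is a minimal sketch of how a fixed-rate, pro-rata staking reward can be computed. The function name, the simple day-count formula, and the example numbers are assumptions for illustration only, not EverGreenCoin's actual reward code.

```python
# Minimal sketch of a fixed-percentage staking reward, pro-rated by time.
# Assumptions for illustration only; not EverGreenCoin's actual implementation.

ANNUAL_RATE = 0.07  # the 7% annual reward described above


def staking_reward(balance: float, days_staked: float, annual_rate: float = ANNUAL_RATE) -> float:
    """Reward earned by `balance` coins held at stake for `days_staked` days."""
    return balance * annual_rate * (days_staked / 365.0)


if __name__ == "__main__":
    # Hypothetical example: 1,000 coins staked for 30 days.
    print(round(staking_reward(1000, 30), 4))  # ~5.7534 coins
```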

EverGreenCoin is also much faster than Bitcoin. Transactions on the EverGreenCoin network are fully confirmed, which means fully received and spendable, faster than a Bitcoin transaction would get its first confirmation. What Bitcoin could transfer in an hour, EverGreenCoin could do 10 times over. Actually 100 times, because of EverGreenCoin's larger blocks as well.
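The 10x and 100x figures follow from two independent factors, block interval and block size, which multiply together. The sketch below shows that arithmetic; the specific block times and sizes are placeholder assumptions for illustration, not the chains' real parameters.

```python
# Minimal sketch: throughput scales with (blocks per hour) x (block size),
# so a 10x shorter block interval combined with a 10x larger block gives roughly 100x.
# The numbers below are placeholder assumptions for illustration only.

def relative_throughput(block_time_min_a: float, block_size_a: float,
                        block_time_min_b: float, block_size_b: float) -> float:
    """How many times more data per hour chain A can commit than chain B."""
    blocks_per_hour_a = 60.0 / block_time_min_a
    blocks_per_hour_b = 60.0 / block_time_min_b
    return (blocks_per_hour_a * block_size_a) / (blocks_per_hour_b * block_size_b)


if __name__ == "__main__":
    # Hypothetical: 1-minute blocks that are 10x larger, vs. 10-minute blocks of size 1.
    print(relative_throughput(1.0, 10.0, 10.0, 1.0))  # 100.0
```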

Neither the traditional banking system nor Bitcoin can give you what EverGreenCoin gives you. In addition, you are helping yourself and all living things by increasing the asset potential for EverGreenCoin's environmental aspirations.

It is free to make an EverGreenCoin account. You do not need to surrender any personal information. You do not need a credit check. There are no age or border restrictions. You do not need to make an account on this website, but it is encouraged as it will allow you to communicate with like-minded people. Click here for help deciding which solution is best for your needs.


Germ warfare – mutant bugs could wipe out human life

Written by Patrick Dixon


Video made in 2010 - below is an archive article which contains important and relevant information as of 2011.

Biological warfare: Threat from mutant viruses, superbugs, and other organisms

The thought of catching a cold and then getting cancer is horrifying. Such a scenario came a big step closer in 1995, when a British scientist in Birmingham tried to make new mutant superbugs out of human cancer genes and viruses closely related to strains that cause the common cold.

Although the research was designed to help find a cancer cure, the possibility of accidental escape was alarming. Even more worrying was the thought that a hundred similar or more dangerous experiments might be going on that we had yet to find out about.

Licences are granted every week for work that many might find distasteful, unethical, or dangerous - humanising pigs or fish with extra genes, or releasing microbes into the environment. This is work few want to talk about for fear of public reaction.

The British government admitted in mid-1998 that more than a million people were sprayed from the air in secret germ warfare tests during the 1970s. The strain used was a "harmless" E. coli bacterium together with Bacillus globigii. 150 miles of coastline, and land up to 30 miles inland, were exposed.

Any human, animal, insect or plant gene can be added to any microbe.

Superbugs are the most powerful gene inventions of all. Each new strain has the potential of a biochemical factory - able to make complex substances like human insulin in a test-tube. Other strains have power to destroy. Researchers need dangerous viruses to develop vaccines and find cures, but there are risks.

The fears over safety are, however, justified: the same university lost control of the smallpox virus in 1978. A woman died, and a catastrophe was only prevented because hospital staff had been immunised against smallpox as children. Smallpox vaccination stopped some time ago, so a similar escape in ten years' time could cause a huge epidemic.

Escapes of viruses have happened before. In 1973 smallpox virus was released by a laboratory in London, and two people died. In 1985 workers at the same laboratory narrowly missed death when smallpox ampoules were found lying in a biscuit tin in a fridge, dated 1952 but still deadly. Accidents happen.

No vaccine exists against many new mutant microbes developed with potential for use as weapons. The Porton Down biological warfare laboratory in the UK is worried, and has made intensive efforts to prepare for germ warfare defence (see the letter from the Director of Porton Down, a Parliamentary written answer).

There were fears of biological weapons in the first Gulf War, with repeated claims by servicemen of possible exposure. We know that germ warfare agents can have long-term effects on people and the environment: for example, during the Second World War anthrax spores were tested on Gruinard Island in Scotland, which became uninhabitable for fifty years.

Most mutant viruses are not infectious, are harmless, and perish fast after release, as we have seen with experiments using soil bugs in agriculture. But limited field trials have found that released microbes can survive in fields and lakes.

Gene changes in one country have potential to affect a whole continent, and ultimately the planet as a whole.

Medical disaster is one thing, perhaps a highly infectious version of HIV, or a new cancer epidemic. Environmental contamination is another. Microbes can travel fast in dust, in water, on car wheels, on clothing, on animals.

Already, MPs in 1993 called for a Gene Charter covering ethical and safety issues. Each new headline on gene research shows how current legislation is running years behind the technology.

However, there is little point in controls if scientists can get on a plane and continue risky experiments elsewhere. Nothing less than international agreement will do. In most countries of the world much more hazardous experiments are permitted than the ones banned in Britain this week.

A world summit on biotechnology is urgently needed.


Chasing the Scream | The First and Last Days of the War on …

Johann Hari's book is the perfect antidote to the war on drugs, one of the most under-discussed moral injustices of our time. It combines rigorous research and deeply human story-telling. It will prompt an urgently needed debate.

A terrific book.

An absolutely stunning book. It will blow people away.

Superb journalism and thrilling story-telling.

Wonderful. I couldn't put it down.

An astounding book.

This book is, forgive the obvious phrase, screamingly addictive. The story it tells, jaw-droppingly horrific, hilarious and incredible, is one everyone should know: that it is all true boggles the mind, fascinates and infuriates in equal measure. Johann Hari, in brilliant prose, exposes one of the greatest and most harmful scandals of the past hundred years.

This book is as intoxicatingly thrilling as crack, without destroying your teeth. It will change the drug debate forever.

Incredibly insightful and provocative.

Check out Johann Hari's extraordinary new book Chasing the Scream, one of the best books I've ever read about the world of drugs.

Johann Hari has written a drug policy reform book like no other. Many have studied, or conducted, the science surrounding the manifold ills of drug prohibition. But Hari puts it all into riveting story form, and humanizes it. Part Gonzo journalism, part Louis CK standup, part Mark Twain storytelling, Chasing the Scream: The First and Last Days of the War on Drugs is beautifully wrought: lively, humorous, and poignant. And it's a compelling case for why the drug war must end, yesterday.

In this energetic and thought-provoking book, Hari harnesses the power of the personal narrative to reveal the true causes and consequences of the War on Drugs.

Breathtaking. A powerful contribution to an urgent debate.

A testament to Hari's skill as a writer.

Gripping

A riveting book

Superb

This book is an entertainment, a great character study and page-turning storytelling all rolled into one very sophisticated and compelling cry for social justice.

Amazing and bracing and smart. It's really revolutionary.

Scary and terrific

Incredibly powerful

It's incredibly entertaining. It's enormously emotionally affecting. It really is an extraordinary book.


Euthanasia | Students for Life

Students for Life of America is not merely working to stop abortion in this country; we defend all innocent life from unnatural, systematic termination. Euthanasia is an increasingly urgent problem in the United States, now more than ever, for a number of reasons:

Euthanasia is a term coming from the Greek for "good death," which can mean anything from the acceptable (comforting the dying) to the deceptive and immoral, so-called involuntary euthanasia for, as the Euthanasia Society of America (later renamed; see http://www.worldrtd.net/) put it, "idiots, imbeciles, and congenital monstrosities."

Generally, we may define it as the intentional ending of a person's life, through direct action (called active euthanasia) or by omission (called passive euthanasia), usually motivated by mercy for those in great pain or suffering from a terminal illness.[3]

Both euthanasia and abortion are based on a view of man that lacks dignity. Pro-lifers view all life as precious, whether it is that of the elderly, the mentally ill, or even the preborn. Pro-lifers recognize that life is an inalienable right before and regardless of state recognition. Those who call themselves pro-choice view life as not inherently valuable, but as a value given to a human being from a human source such as the government ("quality of life," etc.).

Voluntary Euthanasia vs. Involuntary Euthanasia

Voluntary euthanasia is also known as assisted suicide. In such cases, the individual no longer wants to live and enlists the help of a medical professional in either killing them or allowing them to die (ceasing treatment, etc.). The most famous example of assisted suicide is Dr. Jack Kevorkian's killing of 130 people, 5 of whom[4] had no disease detected in autopsy. Kevorkian served eight years in prison for second-degree murder.

As involuntary mercy-killing is so obviously repugnant to most people, the most controversial form of euthanasia is non-voluntary, that is, when the individual is not able to give or deny consent. The most famous example is the case of Terri Schiavo.

Terri Schiavo

In 1990, at the age of 26, Terri Schindler Schiavo suffered a mysterious cardio-respiratory arrest. To this day, doctors have still not discovered a cause for this respiratory arrest. She was diagnosed with hypoxic encephalopathy, a neurological injury caused by lack of oxygen to the brain, and was placed on a ventilator. Terri was soon able to breathe on her own and maintain vital function. She remained in a severely compromised neurological state (a persistent vegetative state[5]) and was provided a PEG tube to ensure the safe delivery of nourishment and hydration. Terri was kept alive by assisted feeding[6] of food and water, the same things that keep us all alive.

In March of 2005, Terri's family fought a court order her husband had sought to remove her feeding and hydration tube. For 13 days her family, along with the pro-life community, battled the courts in vain. On March 31, 2005, Terri Schindler Schiavo died of marked dehydration following more than 13 days without nutrition or hydration, under the order of Circuit Court Judge George W. Greer of the Pinellas-Pasco Sixth Judicial Circuit Court. Terri was 41.

1973- Prior to 1973, euthanasia was illegal in the Netherlands. However, when a doctor convicted of killing her terminally ill mother was sentenced to a week in prison, a precedent was set, and the courts gradually chipped away at the law, allowing for more exceptions to the rule; these exceptions included that the euthanizing must be voluntary and the patient must be terminally ill.

1975- The Karen Ann Quinlan case eased the distinction between the right to choose one's own death and the right to choose another's death.

1984- Guidelines for euthanasia were established in the Netherlands, including discussing the situation with the patient, family, and another doctor.

1985- A court in the Netherlands decided that patients no longer had to be terminally ill to request an assisted suicide.

1985- In the case of Claire Conroy, the debate moved from removing medical treatment, such as a respirator, to defining food and water as optional treatment instead of basic care.

1986- In California, Elizabeth Bouvia, an intelligent, alert woman completely dependent on others since birth because of cerebral palsy, asked for and was granted by the court the right to have the hospital assist her in starving to death comfortably. However, after winning in the courts, Ms. Bouvia changed her mind and decided she wanted to continue living.

1987- The New Jersey Supreme Court, in the case of Nancy Ellen Jobes, set aside the standard of clear and convincing evidence of the patient's wishes and substituted a standard of best judgment from the family. This case changed the focus from the benefit of care for Ms. Jobes to the perceived benefit of life itself as determined by others.

1988- Rhode Island was the first state to hear a food and water case in federal court. Marcia Gray's case was based on the right-to-privacy arguments first articulated in the 1973 Roe v. Wade abortion case. This established a federal precedent for ordering health care providers to actively assist in carrying out a third party's desire to cause the death of a patient.

1989- The Missouri Supreme Court refused to allow the withdrawal of food and liquids from a severely impaired woman who was not dying.

1993- Criminal charges were brought against Dr. Jack Kevorkian, who helped an Alzheimer's victim commit suicide with a machine he invented. The charges were dropped because Michigan law did not specify that facilitating a suicide is criminal.

1995- By this time in the Netherlands, it was not uncommon for doctors to kill patients without their consent (active involuntary euthanasia), including babies born with birth defects. Also in 1995, the Northern Territory in Australia approved a euthanasia bill; though it became law in 1996, it was overturned the following year.

1995- The Michigan Supreme Court ruled that the wife of a severely brain-damaged man could not remove his feeding tube. The U.S. Supreme Court later rejected the wife's appeal.

1998- The state of Oregon legalized assisted suicide.

2001- A new (and current) law was introduced in the Netherlands with new guidelines: the patient must be informed, must consent, must consult with his or her doctor and conclude that there is no other reasonable solution, and must consult with an outside physician; the suffering must be intense with no hope of lessening; and the doctor must exercise due medical care and attention in terminating the patient's life or assisting in his or her suicide (Q3A). Minors aged 12-15 may request to be euthanized, but there must be parental consent; minors 16 and older do not need parental consent (Q16A).

The Netherlands Ministry of Foreign Affairs cited loss of dignity as a reason for allowing euthanasia (Q1B). Euthanasia is still a criminal offence as of 2008, but if doctors report it and satisfy due care criteria, then they can be exempted from criminal liability and it will not be reported to the Public Prosecution Service (Q2A). The Ministry explains that the aim of exempting doctors from prosecution is to ensure that they no longer feel like criminals and can act openly and honestly in relation to requests for euthanasia, provided that their decision-making and medical procedures satisfy the statutory due care criteria (Q2B).

In response to objections that doctors ought to save and not end life, the Netherlands Ministry of Foreign Affairs states: "A doctor's main duty is indeed to preserve life. Euthanasia is not part of the medical duty of care. However, doctors are obliged to do everything they can to enable their patients to die with dignity. They may not administer pointless medical treatments. When all treatment options have been exhausted, the doctor is responsible for relieving suffering." (Q14A) (From the Dutch Ministry of Foreign Affairs website: http://www.minbuza.nl/binaries/en-pdf/faq-euth-2008-en-geupdate-020408-eng.pdf)

2002- Belgium legalized euthanasia under many of the same guidelines as the Netherlands.

2005- On March 31, 2005, Terri Schindler Schiavo, aged 41, died of marked dehydration following more than 13 days without nutrition or hydration, under the order of Circuit Court Judge George W. Greer of the Pinellas-Pasco Sixth Judicial Circuit Court.

2006- In Gonzales v. Oregon, the United States Supreme Court upheld, in a vote of 6-3, an Oregon law (the Death with Dignity Act) allowing patients to commit suicide with the assistance of their doctor. The court held that the federal government could not override the state law.

2008- An Italian court ruled that life support could be removed from Eluana Englaro, a young woman in Milan who had been in a coma[7] for sixteen years.

What you can do:

Education: It is important that people understand their state laws as they relate to the withdrawal of ordinary provisions. Many laws have changed or been amended in recent years, and your current advance directive (or lack of one) might be dangerous under the new laws. Everyone is strongly advised to read current state laws carefully and to secure legal advice when considering them.

Advocacy: People are encouraged to take proactive measures to ensure that their desires for ordinary care are observed. A health care surrogate, or a Protective Medical Decisions Directive along with a Will to Live Directive, may be an excellent alternative to the traditional living will.

Community Involvement: Through the internet, public awareness efforts, and advocacy for the disabled and elderly, community involvement has a direct and positive impact. Becoming a volunteer is a good way to start.

Further reading: Catholic Education Resource Center (Euthanasia Facts); Georgia Right to Life (Court Decisions); LifeIssues.net Euthanasia Library; Medical Articles on Euthanasia; PregnantPause on Euthanasia

http://www.terrisfight.org

[1] Life expectancy in the United States is currently 78.24 years, according to the CIA: https://www.cia.gov/library/publications/the-world-factbook/rankorder/2102rank.html

According to the BBC, average lifespan around the world is around double what it was 200 years ago. http://news.bbc.co.uk/2/hi/health/1977733.stm

[2] The percentage of the population over 60 in the United States is projected to rise to 26% by 2040, from 16.3% today. Source: Brookings Institution, Center for Strategic and International Studies, Congressional Budget Office, as cited by http://www.washingtonpost.com/wp-srv/business/daily/graphics/ss_020205.html

[3] Traditionally defined as an illness or condition that will cause a person's death within a relatively short time. Some state courts are expanding the term to include a condition in which death will occur if treatment, including nutrition and hydration, is removed.

[4] http://articles.cnn.com/2010-06-14/health/kevorkian.gupta_1_kevorkian-dr-jack-euthanasia-assisted-suicide/3?_s=PM:HEALTH

[5] A condition in which the upper portions of the brain are damaged through disease or injury, but the brain stem is normal. Basic body functions such as breathing and digestion occur, and the individual has sleep-wake cycles. But these patients are not attentive, do not speak, and have no voluntary muscle movement.

[6] Nutrition that is provided with the help of another. This may be spoon-feeding, feeding through a gastrostomy tube, or feeding through a tube into a vein.

[7] A state of unconsciousness from which the patient cannot be aroused, even by powerful stimulation. This state rarely lasts for more than two to four weeks, by which time the patient dies, enters a vegetative state, or regains some form of consciousness.


Ayn Rand and the Invincible Cult of Selfishness on the …

You can find iterations of this worldview and this moral judgment everywhere on the right. Consider a few samples of the rhetoric. In an op-ed piece last spring, Arthur Brooks, the president of the American Enterprise Institute, called for conservatives to wage a "culture war" over capitalism. "Social Democrats are working to create a society where the majority are net recipients of the 'sharing economy,'" he wrote. "Advocates of free enterprise ... have to declare that it is a moral issue to confiscate more income from the minority simply because the government can." Brooks identified the constituency for his beliefs as "the people who were doing the important things right--and who are now watching elected politicians reward those who did the important things wrong." Senator Jim DeMint echoed this analysis when he lamented that "there are two Americas but not the kind John Edwards was talking about. It's not so much the haves and the have-nots. It's those who are paying for government and those who are getting government."

Pat Toomey, the former president of the Club for Growth and a Republican candidate for the Senate in Pennsylvania, has recently expressed an allegorical version of this idea, in the form of an altered version of the tale of the Little Red Hen. In Toomey's rendering, the hen tries to persuade the other animals to help her plant some wheat seeds, and then reap the wheat, and then bake it into bread. The animals refuse each time. But when the bread is done, they demand a share. The government seizes the bread from the hen and distributes it to the "not productive" fellow animals. After that, the hen stops baking bread.

This view of society and social justice appeared also in the bitter commentary on the economic crisis offered up by various Wall Street types, and recorded by Gabriel Sherman in New York magazine last April. One hedge-fund analyst thundered that "the government wants me to be a slave!" Another fantasized, "JP Morgan and all these guys should go on strike--see what happens to the country without Wall Street." And the most attention-getting manifestation of this line of thought certainly belonged to the CNBC reporter Rick Santelli, whose rant against government intervention transformed him into a cult hero. In a burst of angry verbiage, Santelli exclaimed: "Why don't you put up a website to have people vote on the Internet as a referendum to see if we really want to subsidize the losers' mortgages, or would we like to at least buy cars and buy houses in foreclosure and give them to people that might have a chance to actually prosper down the road and reward people that could carry the water instead of drink the water!"

Most recently the worldview that I am describing has colored much of the conservative outrage at the prospect of health care reform, which some have called a "redistribution of health" from those wise enough to have secured health insurance to those who have not. "President Obama says he will cover thirty to forty to fifty million people who are not covered now--without it costing any money," fumed Rudolph Giuliani. "They will have to cut other services, cut programs. They will have to be making decisions about people who are elderly." At a health care town hall in Kokomo, Indiana, one protester framed the case against health care reform positively, as an open defense of the virtues of selfishness. "I'm responsible for myself and I'm not responsible for other people," he explained in his turn at the microphone, to applause. "I should get the fruits of my labor and I shouldn't have to divvy it up with other people." (The speaker turned out to be unemployed, but still determined to keep for himself the fruits of his currently non-existent labors.)

In these disparate comments we can see the outlines of a coherent view of society. It expresses its opposition to redistribution not in practical terms--that taking from the rich harms the economy--but in moral absolutes, that taking from the rich is wrong. It likewise glorifies selfishness as a virtue. It denies any basis, other than raw force, for using government to reduce economic inequality. It holds people completely responsible for their own success or failure, and thus concludes that when government helps the disadvantaged, it consequently punishes virtue and rewards sloth. And it indulges the hopeful prospect that the rich will revolt against their ill treatment by going on strike, simultaneously punishing the inferiors who have exploited them while teaching them the folly of their ways.

There is another way to describe this conservative idea. It is the ideology of Ayn Rand. Some, though not all, of the conservatives protesting against redistribution and conferring the highest moral prestige upon material success explicitly identify themselves as acolytes of Rand. (As Santelli later explained, "I know this may not sound very humanitarian, but at the end of the day I'm an Ayn Rand-er.") Rand is everywhere in this right-wing mood. Her novels are enjoying a huge boost in sales. Popular conservative talk show hosts such as Rush Limbaugh and Glenn Beck have touted her vision as a prophetic analysis of the present crisis. "Many of us who know Rand's work," wrote Stephen Moore in the Wall Street Journal last January, "have noticed that with each passing week, and with each successive bailout plan and economic-stimulus scheme out of Washington, our current politicians are committing the very acts of economic lunacy that Atlas Shrugged parodied in 1957."

Christopher Hayes of The Nation recently recalled one of his first days in high school, when he met a tall, geeky kid named Phil Kerpen, who asked him, "Have you ever read Ayn Rand?" Kerpen is now the director of policy for the conservative lobby Americans for Prosperity and an occasional right-wing talking head on cable television. He represents a now-familiar type. The young, especially young men, thrill to Rand's black-and-white ethics and her veneration of the alienated outsider, shunned by a world that does not understand his gifts. (It is one of the ironies, and the attractions, of Rand's capitalists that they are depicted as heroes of alienation.) Her novels tend to strike their readers with the power of revelation, and they are read less like fiction and more like self-help literature, like spiritual guidance. Again and again, readers would write Rand to tell her that their encounter with her work felt like having their eyes open for the first time in their lives. "For over half a century," writes Jennifer Burns in her new biography of this strange and rather sinister figure, "Rand has been the ultimate gateway drug to life on the right."

The likes of Gale Norton, George Gilder, Charles Murray, and many others have cited Rand as an influence. Rand acolytes such as Alan Greenspan and Martin Anderson have held important positions in Republican politics. "What she did--through long discussions and lots of arguments into the night--was to make me think why capitalism is not only efficient and practical, but also moral," attested Greenspan. In 1987, The New York Times called Rand the "novelist laureate" of the Reagan administration. Reagan's nominee for commerce secretary, C. William Verity Jr., kept a passage from Atlas Shrugged on his desk, including the line "How well you do your work ... [is] the only measure of human value."

Today numerous CEOs swear by Rand. One of them is John Allison, the outspoken head of BB&T, who has made large grants to several universities contingent upon their making Atlas Shrugged mandatory reading for their students. In 1991, the Library of Congress and the Book of the Month Club polled readers on what book had influenced them the most. Atlas Shrugged finished second, behind only the Bible. There is now talk of filming the book again, possibly as a miniseries, possibly with Charlize Theron. Rand's books still sell more than half a million copies a year. Her ideas have swirled below the surface of conservative thought for half a century, but now the particulars of our moment--the economic predicament, the Democratic control of government--have drawn them suddenly to the foreground.

II.

Rand's early life mirrored the experience of her most devoted readers. A bright but socially awkward woman, she harbored the suspicion early on that her intellectual gifts caused classmates to shun her. She was born Alissa Rosenbaum in 1905 in St. Petersburg. Her Russian-Jewish family faced severe state discrimination, first for being Jewish under the czars, and then for being wealthy merchants under the Bolsheviks, who stole her family's home and business for the alleged benefit of the people.

Anne C. Heller, in her skillful life of Rand, traces the roots of Rand's philosophy to an even earlier age. (Heller paints a more detailed and engaging portrait of Rand's interior life, while Burns more thoroughly analyzes her ideas.) Around the age of five, Alissa Rosenbaum's mother instructed her to put away some of her toys for a year. She offered up her favorite possessions, thinking of the joy that she would feel when she got them back after a long wait. When the year had passed, she asked her mother for the toys, only to be told she had given them away to an orphanage. Heller remarks that "this may have been Rand's first encounter with injustice masquerading as what she would later acidly call altruism." (The anti-government activist Grover Norquist has told a similar story from childhood, in which his father would steal bites of his ice cream cone, labelling each bite "sales tax" or "income tax." The psychological link between a certain form of childhood deprivation and extreme libertarianism awaits serious study.)

Rosenbaum dreamed of fame as a novelist and a scriptwriter, and fled to the United States in 1926, at the age of twenty-one. There she adopted her new name, for reasons that remain unclear. Rand found relatives to support her temporarily in Chicago, before making her way to Hollywood. Her timing was perfect: the industry was booming, and she happened to have a chance encounter with the director Cecil B. DeMille--who, amazingly, gave a script-reading job to the young immigrant who had not yet quite mastered the English language. Rand used her perch as a launching pad for a career as a writer for the stage and the screen.

Rand's political philosophy remained amorphous in her early years. Aside from a revulsion at communism, her primary influence was Nietzsche, whose exaltation of the superior individual spoke to her personally. She wrote of one of the protagonists of her stories that "he does not understand, because he has no organ for understanding, the necessity, meaning, or importance of other people"; and she meant this as praise. Her political worldview began to crystallize during the New Deal, which she immediately interpreted as a straight imitation of Bolshevism. Rand threw herself into advocacy for Wendell Willkie, the Republican presidential nominee in 1940, and after Willkie's defeat she bitterly predicted "a Totalitarian America, a world of slavery, of starvation, of concentration camps and of firing squads." Her campaign work brought her into closer contact with conservative intellectuals and pro-business organizations, and helped to refine her generalized anti-communist and crudely Nietzschean worldview into a moral defense of the individual will and unrestrained capitalism.

Rand expressed her philosophy primarily through two massive novels: The Fountainhead, which appeared in 1943, and Atlas Shrugged, which appeared in 1957. Both tomes, each a runaway best-seller, portrayed the struggle of a brilliant and ferociously individualistic man punished for his virtues by the weak-minded masses. It was Atlas Shrugged that Rand deemed the apogee of her life's work and the definitive statement of her philosophy. She believed that the principle of trade governed all human relationships--that in a free market one earned money only by creating value for others. Hence, one's value to society could be measured by his income. History largely consisted of "looters and moochers" stealing from society's productive elements.

In essence, Rand advocated an inverted Marxism. In the Marxist analysis, workers produce all value, and capitalists merely leech off their labor. Rand posited the opposite. In Atlas Shrugged, her hero, John Galt, leads a capitalist strike, in which the brilliant business leaders who drive all progress decide that they will no longer tolerate the parasitic workers exploiting their talent, and so they withdraw from society to create their own capitalistic paradise free of the ungrateful, incompetent masses. Galt articulates Rand's philosophy:

The bifurcated class analysis did not end the similarities between Rand's worldview and Marxism. Rand's Russian youth imprinted upon her a belief in the polemical influence of fiction. She once wrote to a friend that "it's time we realize--as the Reds do--that spreading our ideas in the form of fiction is a great weapon, because it arouses the public to an emotional, as well as intellectual response to our cause." She worked both to propagate her own views and to eliminate opposing views. In 1947 she testified before the House Un-American Activities Committee, arguing that the film Song of Russia, a paean to the Soviet Union made in 1944, represented communist propaganda rather than propaganda for World War II, which is what it really supported. (Rand, like most rightists of her day, opposed American entry into the war.)

In 1950, Rand wrote the influential Screen Guide for Americans, the Motion Picture Alliance's industry guidebook for avoiding subtle communist influence in its films. The directives, which neatly summarize Rand's worldview, included such categories as "Don't Smear The Free Enterprise System," "Don't Smear Industrialists" ("it is they who created the opportunities for achieving the unprecedented material wealth of the industrial age"), "Don't Smear Wealth," and "Don't Deify The Common Man" ("if anyone is classified as common--he can be called common only in regard to his personal qualities. It then means that he has no outstanding abilities, no outstanding virtues, no outstanding intelligence. Is that an object of glorification?"). Like her old idol Nietzsche, she denounced a transvaluation of values according to which the strong had been made weak and the weak were praised as the strong.

Rand's hotly pro-capitalist novels oddly mirrored the Socialist Realist style, with two-dimensional characters serving as ideological props. Burns notes some of the horrifying implications of Atlas Shrugged. "In one scene," she reports, "[Rand] describes in careful detail the characteristics of passengers doomed to perish in a violent railroad crash, making it clear their deaths are warranted by their ideological errors." The subculture that formed around her--a cult of the personality if ever there was one--likewise came to resemble a Soviet state in miniature. Beginning with the publication of The Fountainhead, Rand began to attract worshipful followers. She cultivated these (mostly) young people interested in her work, and as her fame grew she spent less time engaged in any way with the outside world, and increasingly surrounded herself with her acolytes, who communicated in concepts and terms that the outside world could not comprehend.

Rand called her doctrine "Objectivism," and it eventually expanded well beyond politics and economics to psychology, culture, science (she considered the entire field of physics "corrupt"), and sundry other fields. Objectivism was premised on the absolute centrality of logic to all human endeavors. Emotion and taste had no place. When Rand condemned a piece of literature, art, or music (she favored Romantic Russian melodies from her youth and detested Bach, Mozart, Beethoven, and Brahms), her followers adopted the judgment. Since Rand disliked facial hair, her admirers went clean-shaven. When she bought a new dining room table, several of them rushed to find the same model for themselves.

Rand's most important acolyte was Nathan Blumenthal, who first met her as a student infatuated with The Fountainhead. Blumenthal was born in Canada in 1930. In 1949 he wrote to Rand, and began to visit her extensively, and fell under her spell. He eventually changed his name to Nathaniel Branden, signifying in the ancient manner of all converts that he had repudiated his old self and was reborn in the image of Rand, from whom he adapted his new surname. She designated Branden as her intellectual heir.

She allowed him to run the Nathaniel Branden Institute, a small society dedicated to promoting Objectivism through lectures, therapy sessions, and social activities. The courses, he later wrote, began with the premises that "Ayn Rand is the greatest human being who has ever lived" and "Atlas Shrugged is the greatest human achievement in the history of the world." Rand also presided over a more select circle of followers in meetings every Saturday night, invitations to which were highly coveted among the Objectivist faithful. These meetings themselves were frequently ruthless cult-like exercises, with Rand singling out members one at a time for various personality failings, subjecting them to therapy by herself or Branden, or expelling them from the charmed circle altogether.

So strong was the organization's hold on its members that even those completely excommunicated often maintained their faith. In 1967, for example, the journalist Edith Efron was, in Heller's account, "tried in absentia and purged, for gossiping, or lying, or refusing to lie, or flirting; surviving witnesses couldn't agree on exactly what she did." Upon her expulsion, Efron wrote to Rand that "I fully and profoundly agree with the moral judgment you have made of me, and with the action you have taken to end social relations." One of the Institute's therapists counseled Efron's eighteen-year-old son, also an Objectivist, to cut all ties with his mother, and made him feel unwelcome in the group when he refused to do so. (Efron's brother, another Objectivist, did temporarily disown her.)

Sex and romance loomed unusually large in Rand's worldview. Objectivism taught that intellectual parity is the sole legitimate basis for romantic or sexual attraction. Coincidentally enough, this doctrine cleared the way for Rand--a woman possessed of looks that could be charitably described as unusual, along with abysmal personal hygiene and grooming habits--to seduce young men in her orbit. Rand not only persuaded Branden, who was twenty-five years her junior, to undertake a long-term sexual relationship with her, she also persuaded both her husband and Branden's wife to consent to this arrangement. (They had no rational basis on which to object, she argued.) But she prudently instructed them to keep the affair secret from the other members of the Objectivist inner circle.

At some point, inevitably, the arrangement began to go very badly. Branden's wife began to break down--Rand diagnosed her with "emotionalism," never imagining that her sexual adventures might have contributed to the young woman's distraught state. Branden himself found the affair ever more burdensome and grew emotionally and sexually withdrawn from Rand. At one point Branden suggested to Rand that a second affair with another woman closer to his age might revive his lust. Alas, Rand--whose intellectual adjudications once again eerily tracked her self-interest--determined that doing so would "destroy his mind." He would have to remain with her. Eventually Branden confessed to Rand that he could no longer muster any sexual attraction for her, and later that he actually had undertaken an affair with another woman despite Rand's denying him permission. After raging at Branden, Rand excommunicated him fully. The two agreed not to divulge their affair. Branden told his followers only that he had "betrayed the principles of Objectivism" in an "unforgiveable" manner and renounced his role within the organization.

Rand's inner circle turned quickly and viciously on their former superior. Alan Greenspan, a cherished Rand confidant, signed a letter eschewing any future contact with Branden or his wife. Objectivist students were forced to sign loyalty oaths, which included the promise never to contact Branden, or to buy his forthcoming book or any future books that he might write. Rand's loyalists expelled those who refused these orders, and also expelled anyone who complained about the tactics used against dissidents. Some of the expelled students, desperate to retain their lifeline to their guru, used pseudonyms to re-enroll in the courses or re-subscribe to her newsletter. But many just drifted away, and over time the Rand cult dwindled to a hardened few.

III.

Ultimately the Objectivist movement failed for the same reason that communism failed: it tried to make its people live by the dictates of a totalizing ideology that failed to honor the realities of human existence. Rand's movement devolved into a corrupt and cruel parody of itself. She herself never won sustained personal influence within mainstream conservatism or the Republican Party. Her ideological purity and her unstable personality prevented her from forming lasting coalitions with anybody who disagreed with any element of her catechism.

Moreover, her fierce attacks on religion--she derided Christianity, again in a Nietzschean manner, as a religion celebrating victimhood--made her politically radioactive on the right. The Goldwater campaign in 1964 echoed distinctly Randian themes--"profits," the candidate proclaimed, "are the surest sign of responsible behavior"--but he ignored Rand's overtures to serve as his intellectual guru. He was troubled by her atheism. In an essay in National Review ten years after the publication of Atlas Shrugged, M. Stanton Evans summarized the conservative view on Rand. She "has an excellent grasp of the way capitalism is supposed to work, the efficiencies of free enterprise, the central role of private property and the profit motive, the social and political costs of welfare schemes which seek to compel a false benevolence," he wrote, but unfortunately she rejects "the Christian culture which has given birth to all our freedoms."

The idiosyncrasies of Objectivism never extended beyond the Rand cult, though it was a large cult with influential members--and yet her central contribution to right-wing thought has retained enormous influence. That contribution was to express the opposition to economic redistribution in moral terms, as a moral depravity. A long and deep strand of classical liberal thought, stretching back to Locke, placed the individual in sole possession of his own economic destiny. The political scientist C.B. MacPherson called this idea "possessive individualism," or "making the individual the sole proprietor of his own person and capacities, owing nothing to society for them." The theory of possessive individualism came under attack in the Marxist tradition, but until the era of the New Deal it was generally accepted as a more or less accurate depiction of the actual social and economic order. But beginning in the mid-1930s, and continuing into the postwar years, American society saw widespread transfers of wealth from the rich to the poor and the middle class. In this context, the theory of possessive individualism could easily evolve into a complaint against the exploitation of the rich. Rand pioneered this leap of logic--the ideological pity of the rich for the oppression that they suffer as a class.

There was more to Rand's appeal. In the wake of a depression that undermined the prestige of business, and then a postwar economy that was characterized by the impersonal corporation, her revival of the capitalist as a romantic hero, even a superhuman figure, naturally flattered the business elite. Here was a woman saying what so many of them understood instinctively. "For twenty-five years," gushed a steel executive to Rand, "I have been yelling my head off about the little-realized fact that eggheads, socialists, communists, professors, and so-called liberals do not understand how goods are produced. Even the men who work at the machines do not understand it." Rand, finally, restored the boss to his rightful mythic place.

On top of all these philosophical compliments to success and business, Rand tapped into a latent elitism that had fallen into political disrepute but never disappeared from the economic right. Ludwig von Mises once enthused to Rand, "You have the courage to tell the masses what no politician told them: you are inferior and all the improvements in your condition which you simply take for granted you owe to the effort of men who are better than you." Rand articulated the terror that conservatives felt at the rapid leveling of incomes in that era--their sense of being singled out by a raging mob. She depicted the world in apocalyptic terms. Even slow encroachments of the welfare state, such as the minimum wage or public housing, struck her as totalitarian. She lashed out at John Kennedy in a polemical nonfiction tome entitled The Fascist New Frontier, anticipating by several decades Jonah Goldberg's equally wild Liberal Fascism.

Rand's most enduring accomplishment was to infuse laissez-faire economics with the sort of moralistic passion that had once been found only on the left. Prior to Rand's time, two theories undergirded economic conservatism. The first was Social Darwinism, the notion that the advancement of the human race, like other natural species, relied on the propagation of successful traits from one generation to the next, and that the free market served as the equivalent of natural selection, in which government interference would retard progress. The second was neoclassical economics, which, in its most simplistic form, described the marketplace as a perfectly self-correcting instrument. These two theories had in common a practical quality. They described a laissez-faire system that worked to the benefit of all, and warned that intervention would bring harmful consequences. But Rand, by contrast, argued for laissez-faire capitalism as an ethical system. She did believe that the rich pulled society forward for the benefit of one and all, but beyond that, she portrayed the act of taxing the rich to aid the poor as a moral offense.

Countless conservatives and libertarians have adopted this premise as an ideological foundation for the promotion of their own interests. They may believe the consequentialist arguments against redistribution--that Bill Clinton's move to render the tax code slightly more progressive would induce economic calamity, or that George W. Bush's making the tax code somewhat less progressive would usher in a boom--but the utter failure of those predictions to come to pass provoked no re-thinking whatever on the economic right. For it harbored a deeper belief in the immorality of redistribution, a righteous sense that the federal tax code and budget represent a form of organized looting aimed at society's most virtuous--and this sense, which remains unshakeable, was owed in good measure to Ayn Rand.

The economic right may believe religiously in their moral view of wealth, but we do not have to respect it as we might respect religious faith. For it does not transcend--perhaps no religion should transcend--empirical scrutiny. On the contrary, this conservative view, the Randian inversion of the Marxist worldview, rests upon a series of propositions that can be falsified by data.

Let us begin with the premise that wealth represents a sign of personal virtue--thrift, hard work, and the rest--and poverty the lack thereof. Many Republicans consider the link between income and the work ethic so self-evident that they use the terms "rich" and "hard-working" interchangeably, and likewise "poor" and "lazy." The conservative pundit Dick Morris accuses Obama of "rewarding failure and penalizing hard work" through his tax plan. His comrade Bill O'Reilly complains that progressive taxation benefits "folks who dropped out of school, who are too lazy to hold a job, who smoke reefers 24/7."

A related complaint against redistribution holds that the rich earn their higher pay because of their nonstop devotion to office work--a grueling marathon of meetings and emails that makes the working life of the typical nine-to-five middle-class drone a vacation by comparison. "People just don't get it. I'm attached to my BlackBerry," complained one Wall Streeter to the journalist Gabriel Sherman. "I get calls at two in the morning, when the market moves. That costs money."

Now, it is certainly true that working hard can increase one's chances of growing rich. It does not necessarily follow, however, that the rich work harder than the poor. Indeed, there are many ways in which the poor work harder than the rich. As the economist Daniel Hamermesh discovered, low-income workers are more likely to work the night shift and more prone to suffering workplace injuries than high-income workers. White-collar workers put in those longer hours because their jobs are not physically exhausting. Few titans of finance would care to trade their fifteen-hour day sitting in a mesh chair working out complex problems behind a computer for an eight-hour day on their feet behind a sales counter.

For conservatives, the causal connection between virtue and success is not merely ideological; it is also deeply personal. It forms the basis of their admiration of themselves. If you ask a rich person whether he ascribes his success to good fortune or his own merit, the answer will probably tell you whether that person inhabits the economic left or the economic right. Rand held up her own meteoric rise from penniless immigrant to wealthy author as a case study of the individualist ethos. "No one helped me," she wrote, "nor did I think at any time that it was anyone's duty to help me."

But this was false. Rand spent her first months in this country subsisting on loans from relatives in Chicago, which she promised to repay lavishly when she struck it rich. (She reneged, never speaking to her Chicago family again.) She also enjoyed the great fortune of breaking into Hollywood at the moment it was exploding in size, and of bumping into DeMille. Many writers equal to her in their talents never got the chance to develop their abilities. That was not because they were bad or delinquent people. They were merely the victims of the commonplace phenomenon that Bernard Williams described as "moral luck."

Not surprisingly, the argument that getting rich often entails a great deal of luck tends to drive conservatives to apoplexy. This spring the Cornell economist Robert Frank, writing in The New York Times, made the seemingly banal point that luck, in addition to talent and hard work, usually plays a role in an individual's success. Frank's blasphemy earned him an invitation on Fox News, where he would play the role of the loony liberal spitting in the face of middle-class values. The interview offers a remarkable testament to the belligerence with which conservatives cling to the mythology of heroic capitalist individualism. As the Fox host, Stuart Varney, restated Frank's outrageous claims, a voice in the studio can actually be heard laughing off-camera. Varney treated Frank's argument with total incredulity, offering up ripostes such as "That's outrageous! That is outrageous!" and "That's nonsense! That is nonsense!" Turning the topic to his own inspiring rags-to-riches tale, Varney asked: "Do you know what risk is involved in trying to work for a major American network with a British accent?"

There seems to be something almost inherent in the right-wing psychology that drives its rich adherents to dismiss the role of luck--all the circumstances that must break right for even the most inspired entrepreneur--in their own success. They would rather be vain than grateful. So seductive do they find this mythology that they omit major episodes of their own life, or furnish themselves with preposterous explanations (such as the supposed handicap of making it in American television with a British accent--are there any Brits in this country who have not been invited to appear on television?) to tailor reality to fit the requirements of the fantasy.

The association of wealth with virtue necessarily requires the free marketer to play down the role of class. Arthur Brooks, in his book Gross National Happiness, concedes that "the gap between the richest and poorest members of society is far wider than in many other developed countries. But there is also far more opportunity ... there is in fact an amazing amount of economic mobility in America." In reality, as a study earlier this year by the Brookings Institution and Pew Charitable Trusts reported, the United States ranks near the bottom of advanced countries in its economic mobility. The study found that family background exerts a stronger influence on a person's income than even his education level. And its most striking finding revealed that you are more likely to make your way into the highest-earning one-fifth of the population if you were born into the top fifth and did not attain a college degree than if you were born into the bottom fifth and did. In other words, if you regard a college degree as a rough proxy for intelligence or hard work, then you are economically better off to be born rich, dumb, and lazy than poor, smart, and industrious.

In addition to describing the rich as "hard-working," conservatives also have the regular habit of describing them as "productive." Gregory Mankiw describes Obama's plan to make the tax code more progressive as allowing a person to "lay claim to the wealth of his more productive neighbor." In the same vein, George Will laments that progressive taxes "reduce the role of merit in the allocation of social rewards--merit as markets measure it, in terms of value added to the economy." The assumption here is that one's income level reflects one's productivity or contribution to the economy.

Is income really a measure of productivity? Of course not. Consider your own profession. Do your colleagues who demonstrate the greatest skill unfailingly earn the most money, and those with the most meager skill the least money? I certainly cannot say that of my profession. Nor do I know anybody who would say that of his own line of work. Most of us perceive a world with its share of overpaid incompetents and underpaid talents. Which is to say, we rightly reject the notion of the market as the perfect gauge of social value.

Now assume that this principle were to apply not only within a profession--that a dentist earning $200,000 a year must be contributing exactly twice as much to society as a dentist earning $100,000 a year--but also between professions. Then you are left with the assertion that Donald Trump contributes more to society than a thousand teachers, nurses, or police officers. It is Wall Street, of course, that offers the ultimate rebuttal of the assumption that the market determines social value. An enormous proportion of upper-income growth over the last twenty-five years accrued to an industry that created massive negative social value--enriching itself through the creation of a massive bubble, the deflation of which has brought about worldwide suffering.

If one's income reflects one's contribution to society, then why has the distribution of income changed so radically over the last three decades? While we ponder that question, consider a defense of inequality from the perspective of three decades ago. In 1972, Irving Kristol wrote that

Human talents and abilities, as measured, do tend to distribute themselves along a bell-shaped curve, with most people clustered around the middle, and with much smaller percentages at the lower and higher ends.... This explains one of the most extraordinary (and little-noticed) features of 20th-century societies: how relatively invulnerable the distribution of income is to the efforts of politicians and ideologues to manipulate it. In all the Western nations--the United States, Sweden, the United Kingdom, France, Germany--despite the varieties of social and economic policies of their governments, the distribution of income is strikingly similar.

So Kristol thought the bell-shaped distribution of income in the United States, and the similarly shaped distributions among our economic peers, proved that income inequality merely followed the natural inequality of human talent. As it happens, Kristol wrote that passage shortly before a boom in inequality, one that drove the income share of the highest-earning 1 percent of the population from around 8 percent (when he was writing) to 24 percent today, and which stretched the bell curve of the income distribution into a distended sloping curve with a lengthy right tail. At the same time, America has also grown vastly more unequal in comparison with the European countries cited by Kristol.
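To make the shape of that shift concrete, here is a small illustrative sketch (not from the article; the distributions and parameters are invented for illustration). It compares a roughly bell-shaped income distribution with a right-skewed one of similar typical income, and computes the share of total income captured by the top 1 percent in each case.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Roughly bell-shaped incomes centered on $50,000 (clipped at zero).
bell = np.clip(rng.normal(50_000, 15_000, n), 0, None)

# Right-skewed incomes (lognormal) with the same median but a heavy upper tail.
skewed = rng.lognormal(mean=np.log(50_000), sigma=0.9, size=n)

def top_share(incomes, pct=1.0):
    """Share of total income received by the top `pct` percent of earners."""
    cutoff = np.percentile(incomes, 100 - pct)
    return incomes[incomes >= cutoff].sum() / incomes.sum()

print(f"Top 1% share, bell-shaped distribution: {top_share(bell):.1%}")
print(f"Top 1% share, long right tail:          {top_share(skewed):.1%}")
```

Under the symmetric bell curve the top 1 percent receive only a few percent of total income; stretching the distribution into a long right tail pushes that share several times higher, which is the kind of change Kristol's argument did not anticipate.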

This suggests one of two possibilities. The first is that the inherent human talent of America's economic elite has massively increased over the last generation, relative to that of the American middle class and that of the European economic elite. The second is that bargaining power, political power, and other circumstances can affect the distribution of income--which is to say, again, that one's income level is not a good indicator of a person's ability, let alone of a person's social value.

The final feature of Randian thought that has come to dominate the right is its apocalyptic thinking about redistribution. Rand taught hysteria. The expressions of terror at the "confiscation" and "looting" of wealth, and the loose talk of the rich going on strike, stand in sharp contrast to the decidedly non-Bolshevik measures that they claim to describe. The reality of the contemporary United States is that, even as income inequality has exploded, the average tax rate paid by the top 1 percent has fallen by about one-third over the last twenty-five years. Again: it has fallen. The rich have gotten unimaginably richer, and at the same time their tax burden has dropped significantly. And yet conservatives routinely describe this state of affairs as intolerably oppressive to the rich. Since the share of the national income accruing to the rich has grown faster than their average tax rate has shrunk, they have paid an ever-rising share of the federal tax burden. This is the fact that so vexes the right.
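The arithmetic behind that last sentence is easy to check. The sketch below uses illustrative numbers only (loosely echoing the 8 percent and 24 percent income shares above; the tax rates are invented) to show how a group's share of total taxes paid can rise even while its average tax rate falls, so long as its share of total income grows faster.

```python
def tax_share(income_share, own_rate, others_rate):
    """Fraction of all tax revenue paid by a group with the given income share."""
    own_tax = income_share * own_rate
    others_tax = (1 - income_share) * others_rate
    return own_tax / (own_tax + others_tax)

# Then: the top group earns 8% of all income and pays a 45% average rate.
before = tax_share(income_share=0.08, own_rate=0.45, others_rate=0.25)

# Now: its income share triples to 24% while its rate falls by a third, to 30%.
after = tax_share(income_share=0.24, own_rate=0.30, others_rate=0.25)

print(f"share of all taxes paid before: {before:.1%}")  # roughly 13.5%
print(f"share of all taxes paid after:  {after:.1%}")   # roughly 27.5%
```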

Most of the right-wing commentary purporting to prove that the rich bear the overwhelming burden of government relies upon the simple trick of citing only the income tax, which is progressive, while ignoring more regressive levies. A brief overview of the facts lends some perspective to the fears of a new Red Terror. Our government divides its functions between the federal, state, and local levels. State and local governments tend to raise revenue in ways that tax the poor at higher rates than the rich. (It is difficult for a state or a locality to maintain higher rates on the rich, who can easily move to another town or state that offers lower rates.) The federal government raises some of its revenue from progressive sources, such as the income tax, but also healthy chunks from regressive levies, such as the payroll tax.

The sum total of these taxes levies a slightly higher rate on the rich. The bottom 99 percent of taxpayers pay 29.4 percent of their income in local, state, and federal taxes. The top 1 percent pay an average total tax rate of 30.9 percent--slightly higher, but hardly the sort of punishment that ought to prompt thoughts of withdrawing from society to create a secret realm of capitalistic Übermenschen. These numbers tend to bounce back and forth, depending upon which party controls the government at any given time. If Obama succeeds in enacting his tax policies, the tax burden on the rich will bump up slightly, just as it bumped down under George W. Bush.

What is so striking, and serves as the clearest mark of Rand's lasting influence, is the language of moral absolutism applied by the right to these questions. Conservatives define the see-sawing of the federal tax-and-transfer system between slightly redistributive and very slightly redistributive as a culture war over capitalism, or a final battle to save the free enterprise system from the horde of free-riders. And Obama certainly is expanding the role of the federal government, though probably less than George W. Bush did. (The Democratic health care bills would add considerably less net expenditure to the federal budget than Bush's prescription drug benefit.) The hysteria lies in the realization that Obama would make the government more redistributive--that he would steal from the virtuous (them) and give to the undeserving.

Like many other followers of Rand, John Allison of BB&T has taken to claiming vindication in the convulsive events of the past year. "Rand predicted what would happen fifty years ago," he told The New York Times. "It's a nightmare for anyone who supports individual rights." If Rand were truly right, of course, then Allison will flee his home and join his fellow supermen in some distant capitalist nirvana. So perhaps the economic crisis may bring some good after all.

Jonathan Chait is a senior editor at The New Republic.


Atlas Shrugged

Published in 1957, Atlas Shrugged was Ayn Rand's last and most ambitious novel. Rand set out to explain her personal philosophy in this book, which follows a group of pioneering industrialists who go on strike against a corrupt government and a judgmental society. After completing this novel Rand turned to nonfiction and published works on her philosophy for the rest of her career. Rand actually only published four novels in her entire career, and the novel that came out before Atlas Shrugged, The Fountainhead, was published in 1943. So there was a pretty long publishing gap there.

It might seem a bit odd to use a work of fiction to make a philosophical statement, but this actually reflects Rand's view of art. Art, for her, was a way to present ideals and ideas. In other words, Rand herself admitted that her characters may not always be "believable." They are "ideal" people who represent a range of philosophies. Rand used these characters to show how her philosophy could be lived, rather than just publishing an essay about it.

Rand's personal philosophy, known as Objectivism (to read more about it, check out our Themes section), was, and remains, really controversial. Objectivism criticizes a lot of philosophies and views, ranging from Christianity to communism, and as a result it can be very polarizing. Rand herself was a devout atheist, held very open views about sex (which definitely raised some eyebrows in 1950s America), and was a staunch anti-communist. Rand's anti-communism stems from her personal history. She was born in Russia in 1905 and lived through the Bolshevik Revolution of 1917, when the Bolsheviks seized power (the tsar had already been forced out earlier that year) and went on to establish the Soviet Union. The Revolution was a bloody affair, and the new communist government was very oppressive; as a result Rand developed a lifelong hatred of communism and violence of any sort.

Rand fled the Soviet Union in 1926 and came to America, where she quickly became a fan of American freedom, American democracy, and American capitalism, all of which greatly contrasted to the experiences she'd had in the oppressive Soviet Union. Rand's personal philosophy developed around these American ideas, in opposition to the type of life she saw in the Soviet Union.

Given that Atlas Shrugged is a statement of Rand's personal philosophy, the book expresses many of her views on religion, sex, politics, etc. When it was published, it received a lot of negative reviews. Many conservatives hated the book for its atheist views and its upfront treatment of sex. Many liberals hated the book for its celebration of capitalism. The book also confused a lot of people. But the novel sold, and it has remained popular since; it's actually never been out of print since it was first published over fifty years ago. Atlas Shrugged was kind of like one of those blockbuster movies that gets horrible reviews but still does really well at the box office. Something about this book intrigues people, whether it's the characters, the ideas, or just the mystery plot itself.

In fact, Atlas Shrugged has even seen a renewed surge in popularity lately, coinciding with the recent financial crisis. (If you want to see some of the news coverage of this, check out our "Best of the Web" section.) The book does deal with industrialists and hard financial times, so this popularity boom is not too surprising. In recent years the news media has often classed the novel as über-conservative, which is funny, since a lot of conservatives hated the book when it first came out. At any rate it's still a very controversial book -- just check out the hundreds of varied reviews it has racked up on Amazon.

In an old episode of South Park, a character who reads Atlas Shrugged declares that the book ruined reading for him and that he would never read another book again. (If you want to watch this hilarious clip, head on over to the "Best of the Web" section.) There's a reason this book is so often made the butt of jokes. It's long. Crazy long. We're talking Tolstoy levels of longness. It's also a book that's about politics, philosophy, 30-something business people, and more philosophy. Frankly, this book can seem downright off-putting. Even the title is confusing.

So why should you care? Well, for one thing, putting aside all the Deep Thoughts and Profound Ideas in this book, we have a bunch of characters who are challenging the establishment. Seriously. At its core, this book is about individuals who go against the crowd, individuals bold enough to speak their minds, do their own thing, and seek their own happiness. And in trying to do so, these bold individuals face a heck of a lot of peer pressure. In fact, pretty much everyone in the whole world disapproves of these people, who are trying to make better lives for themselves by embracing things like liberty and self-esteem.

It's like high school times a billion. The world is filled with the snobby popular crowd and our intrepid band of misfit heroes is outnumbered, but never outsmarted. Turns out all that philosophy we mentioned earlier has a lot to do with all of this individualism and going against the crowd, too. Whether it's a high school cafeteria or a high-powered business meeting, some things seem to stay pretty universal. This book shows that there are always people who want to march to the beat of their own drum and who are bold enough to risk mass disapproval in order to do it. Kind of cool and inspiring really, regardless of your opinion of their particular philosophy.


Mammoth – Wikipedia

A mammoth is any species of the extinct genus Mammuthus, proboscideans commonly equipped with long, curved tusks and, in northern species, a covering of long hair. They lived from the Pliocene epoch (from around 5 million years ago) into the Holocene at about 4,500 years ago[1][2] in Africa, Europe, Asia, and North America. They were members of the family Elephantidae, which also contains the two genera of modern elephants and their ancestors. Mammoths stem from an ancestral species called M. africanavus, the African mammoth. These mammoths lived in northern Africa and disappeared about 3 or 4 million years ago. Descendants of these mammoths moved north and eventually covered most of Eurasia. These were M. meridionalis, the 'southern mammoths'.[3]

The earliest known proboscideans, the clade that contains the elephants, existed about 55 million years ago around the Tethys Sea area. The closest relatives of the Proboscidea are the sirenians and the hyraxes. The family Elephantidae is known to have existed six million years ago in Africa, and includes the living elephants and the mammoths. Among many now extinct clades, the mastodon is only a distant relative of the mammoths, and part of the separate Mammutidae family, which diverged 25 million years before the mammoths evolved.[4]

A cladogram (not reproduced here) shows the placement of the genus Mammuthus among other proboscideans, based on hyoid characteristics.[5]

Since many remains of each species of mammoth are known from several localities, it is possible to reconstruct the evolutionary history of the genus through morphological studies. Mammoth species can be identified from the number of enamel ridges on their molars; the primitive species had few ridges, and the number increased gradually as new species evolved and replaced the former ones. At the same time, the crowns of the teeth became longer, and the skulls became higher from top to bottom and shorter from the back to the front over time to accommodate this.[6]

The first known members of the genus Mammuthus are the African species M. subplanifrons from the Pliocene and M. africanavus from the Pleistocene. The former is thought to be the ancestor of later forms. Mammoths entered Europe around 3 million years ago; the earliest known type has been named M. rumanus, which spread across Europe and China. Only its molars are known, which show it had 8–10 enamel ridges. A population evolved 12–14 ridges and split off from and replaced the earlier type, becoming M. meridionalis. In turn, this species was replaced by the steppe mammoth, M. trogontherii, with 18–20 ridges, which evolved in East Asia ca. 1 million years ago. Mammoths derived from M. trogontherii evolved molars with 26 ridges 200,000 years ago in Siberia, and became the woolly mammoth, M. primigenius.[6] The Columbian mammoth, M. columbi, evolved from a population of M. trogontherii that had entered North America. A 2011 genetic study showed that two examined specimens of the Columbian mammoth were grouped within a subclade of woolly mammoths. This suggests that the two populations interbred and produced fertile offspring. It also suggested that a North American form known as "M. jeffersonii" may be a hybrid between the two species.[7]
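As a rough aid to the progression just described, the ridge counts and approximate timings from the paragraph above can be arranged as a simple lookup table; the sketch below is illustrative only, and real specimens grade into one another far more messily than a table suggests.

```python
# Approximate molar ridge counts and timings, summarized from the text above.
MOLAR_RIDGES = {
    "M. rumanus":      (8, 10),   # earliest European mammoth, ~3 million years ago
    "M. meridionalis": (12, 14),  # southern mammoth
    "M. trogontherii": (18, 20),  # steppe mammoth, East Asia, ~1 million years ago
    "M. primigenius":  (26, 26),  # woolly mammoth, Siberia, ~200,000 years ago
}

def candidate_species(ridge_count):
    """Return the species whose known ridge range includes `ridge_count`."""
    return [name for name, (low, high) in MOLAR_RIDGES.items()
            if low <= ridge_count <= high]

print(candidate_species(19))  # ['M. trogontherii']
```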

By the late Pleistocene, mammoths in continental Eurasia had undergone a major transformation, including a shortening and heightening of the cranium and mandible, increase in molar hypsodonty index, increase in plate number, and thinning of dental enamel. Due to this change in physical appearance, it became customary to group European mammoths into three distinguishable chronospecies: the early M. meridionalis, the middle M. trogontherii, and the late M. primigenius.

There is speculation as to what caused this variation within the three chronospecies. Variations in environment, climate change, and migration surely played roles in the evolutionary process of the mammoths. Take M. primigenius, for example: woolly mammoths lived in open grassland biomes. The cool steppe-tundra of the Northern Hemisphere was the ideal place for mammoths to thrive because of the resources it supplied. With occasional warmings during the ice age, the climate would change the landscape, and the resources available to the mammoths altered accordingly.[6][8][9]

The word mammoth was first used in Europe during the early 1600s, when referring to maimanto tusks discovered in Siberia.[10] John Bell,[11] who was on the Ob River in 1722, said that mammoth tusks were well known in the area. They were called "mammon's horn" and were often found in washed-out river banks. Some local people claimed to have seen a living mammoth, but said that the mammoths came out only at night and always disappeared under water when detected. Bell bought one tusk and presented it to Hans Sloane, who pronounced it an elephant's tooth.

The folklore of some native peoples of Siberia, who would routinely find mammoth bones, and sometimes frozen mammoth bodies, in eroding river banks, had various interesting explanations for these finds. Among the Khanty people of the Irtysh River basin, a belief existed that the mammoth was some kind of a water spirit. According to other Khanty, the mammoth was a creature that lived underground, burrowing its tunnels as it went, and would die if it accidentally came to the surface.[12] The concept of the mammoth as an underground creature was known to the Chinese, who received some mammoth ivory from the Siberian natives; accordingly, the creature was known in China as yǐn shǔ, "the hidden rodent".[13]

Thomas Jefferson, who famously had a keen interest in paleontology, is partially responsible for transforming the word mammoth from a noun describing the prehistoric elephant to an adjective describing anything of surprisingly large size. The first recorded use of the word as an adjective was in a description of a large wheel of cheese (the "Cheshire Mammoth Cheese") given to Jefferson in 1802.[14]

Like their modern relatives, mammoths were quite large. The largest known species reached heights in the region of 4 m (13 ft) at the shoulder and weights of up to 8 tonnes (8.8 short tons), while exceptionally large males may have exceeded 12 tonnes (13 short tons). However, most species of mammoth were only about as large as a modern Asian elephant (which stands about 2.5 m to 3 m high at the shoulder and rarely exceeds 5 tonnes). Both sexes bore tusks. A first, small set appeared at about the age of six months, and these were replaced at about 18 months by the permanent set. Growth of the permanent set was at a rate of about 2.5 to 15.2 cm (1 to 6 in) per year.[15]

Based on studies of their close relatives, the modern elephants, mammoths probably had a gestation period of 22 months, resulting in a single calf being born. Their social structure was probably the same as that of African and Asian elephants, with females living in herds headed by a matriarch, whilst bulls lived solitary lives or formed loose groups after sexual maturity.[16]

Scientists discovered and studied the remains of a mammoth calf, and found that fat greatly influenced its form, and enabled it to store large amounts of nutrients necessary for survival in temperatures as low as −50 °C (−58 °F).[17] The fat also allowed the mammoths to increase their muscle mass, helping them fight against enemies and live longer.[18]

The diet of mammoths differed somewhat by species and location, although all mammoths ate broadly similar things. The Columbian mammoth, M. columbi, was mainly a grazer: American Columbian mammoths fed primarily on cacti leaves, trees, and shrubs. These inferences are based on mammoth feces and mammoth teeth. Mammoths, like modern day elephants, have hypsodont molars. These features also allowed mammoths to thrive across a wide range of habitats, thanks to the availability of grasses and trees.[19]

The Mongochen mammoth's diet consisted of herbs, grasses, larch, and shrubs, and possibly alder. These inferences were made by examining mammoth feces, which contained non-arboreal pollen and moss spores.[20]

European mammoths had a major diet of C3 carbon fixation plants. This was determined by examining the isotopic data from the European mammoth teeth.[21]

The Yamal baby mammoth Lyuba, found in 2007 in the Yamal Peninsula in Western Siberia, suggests that baby mammoths, like modern baby elephants, ate the dung of adult animals. The evidence for this is that the dentition (teeth) of the baby mammoth had not yet developed enough to chew grass. Furthermore, there was an abundance of ascospores of coprophilous fungi in the pollen spectrum of the baby's mother. Coprophilous fungi are fungi that grow on animal dung and disperse spores in nearby vegetation, which the baby mammoth would then consume. Spores might have gotten into its stomach while grazing for the first few times. Coprophagy may be an adaptation, serving to populate the infant's gut with the microbiome needed for digestion.

Mammoths alive in the Arctic during the Last Glacial Maximum consumed mainly forbs, such as Artemisia; graminoids were only a minor part of their diet.[22]

The woolly mammoth (M. primigenius) was the last species of the genus. Most populations of the woolly mammoth in North America and Eurasia, as well as all the Columbian mammoths (M. columbi) in North America, died out around the time of the last glacial retreat, as part of a mass extinction of megafauna in northern Eurasia and the Americas. Until recently, the last woolly mammoths were generally assumed to have vanished from Europe and southern Siberia about 12,000 years ago, but new findings show some were still present there about 10,000 years ago. Slightly later, the woolly mammoths also disappeared from continental northern Siberia.[23] A small population survived on St. Paul Island, Alaska, up until 3750 BC,[2][24][25] and the small[26] mammoths of Wrangel Island survived until 1650 BC.[27][28] Recent research of sediments in Alaska indicates mammoths survived on the American mainland until 10,000 years ago.[29]

A definitive explanation for their extinction has yet to be agreed. The warming trend (Holocene) that occurred 12,000 years ago, accompanied by a glacial retreat and rising sea levels, has been suggested as a contributing factor. Forests replaced open woodlands and grasslands across the continent. The available habitat would have been reduced for some megafaunal species, such as the mammoth. However, such climate changes were nothing new; numerous very similar warming episodes had occurred previously within the ice age of the last several million years without producing comparable megafaunal extinctions, so climate alone is unlikely to have played a decisive role.[30][31] The spread of advanced human hunters through northern Eurasia and the Americas around the time of the extinctions, however, was a new development, and thus might have contributed significantly.[30][31]

Whether the general mammoth population died out for climatic reasons or due to overhunting by humans is controversial.[32] During the transition from the Late Pleistocene epoch to the Holocene epoch, the distribution of the mammoth shrank, because progressive warming at the end of the Pleistocene changed the mammoth's environment. The mammoth steppe was a periglacial landscape with rich herb and grass vegetation that disappeared along with the mammoth because of environmental changes in the climate. Mammoths had moved to isolated spots in Eurasia, where they disappeared completely. Also, it is thought that Late Paleolithic and Mesolithic human hunters might have affected the size of the last mammoth populations in Europe.[citation needed] There is evidence to suggest that humans did cause the mammoth extinction, although there is no definitive proof. It was found that humans living south of a mammoth steppe learned to adapt themselves to the harsher climates north of the steppe, where mammoths resided. It was concluded that if humans could survive the harsh northern climate of that particular mammoth steppe, then it was possible humans could hunt (and eventually extinguish) mammoths everywhere. Another hypothesis suggests mammoths fell victim to an infectious disease. A combination of climate change and hunting by humans may be a possible explanation for their extinction. Homo erectus is known to have consumed mammoth meat as early as 1.8 million years ago,[33] though this may mean only successful scavenging, rather than actual hunting. Later humans show greater evidence for hunting mammoths; mammoth bones at a 50,000-year-old site in South Britain suggest that Neanderthals butchered the animals,[34] while various sites in Eastern Europe dating from 15,000 to 44,000 years old suggest humans (probably Homo sapiens) built dwellings using mammoth bones (the age of some of the earlier structures suggests that Neanderthals began the practice).[35] However, the American Institute of Biological Sciences also notes that bones of dead elephants, left on the ground and subsequently trampled by other elephants, tend to bear marks resembling butchery marks, which archaeologists have previously misinterpreted as such.[citation needed]

Many hypotheses also seek to explain the regional extinction of mammoths in specific areas. Scientists have speculated that the mammoths of Saint Paul Island, an isolated enclave where mammoths survived until about 8,000 years ago, died out as the island shrank by 80–90% when sea levels rose, eventually making it too small to support a viable population.[36] Similarly, genome sequences of the Wrangel Island mammoths indicate a sharp decline in genetic diversity, though the extent to which this played a role in their extinction is still unclear.[37] Another hypothesis, offered as a cause of mammoth extinction in Siberia, is that many may have drowned: while traveling to the Northern River, many of these mammoths broke through the ice and drowned. This would also explain the bone remains found along the Arctic coast and on the islands of the New Siberian group.[citation needed]

Dwarfing occurred with the pygmy mammoth on the outer Channel Islands of California, but at an earlier period. Those animals were very likely killed off by early Paleo-Native Americans and by habitat loss caused by a rising sea level that split Santa Rosae into the outer Channel Islands.[38]

The use of preserved genetic material to create living mammoth specimens, particularly in regard to the woolly mammoth, has long been discussed theoretically but has only recently become the subject of formal effort. As of 2015, there are three major ongoing projects--one led by Akira Iritani of Japan, another by Hwang Woo-suk of South Korea, and a third by the Long Now Foundation[39][40]--attempting to create a mammoth-elephant hybrid.[41] An estimated 150 million mammoths are buried in the Siberian tundra.[42]

In April 2015, Swedish scientists published the complete genome (complete DNA sequence) of the woolly mammoth.[43] Meanwhile, a Harvard University team is already attempting to study the animals' characteristics by inserting some mammoth genes into Asian elephant stem cells.[44] So far, the team has placed mammoth genes involved in blood, fat, and hair into elephant stem cells in order to study the effects of these genes in laboratory-cultured cells. It is still unknown if the actual cloning of a living woolly mammoth is possible.[44]

The projects are based on finding suitable mammoth DNA in frozen bodies, sequencing its genome and, if possible, gradually combining the DNA with elephant cells.[39][40][45][46] If the cells prove viable in laboratory tests, the next challenge would be creating a viable "mammoth" hybrid embryo by inseminating an elephant egg in vitro. The percentage of mammoth DNA in the genome would be increased gradually with each hybrid embryo produced in vitro. If a viable hybrid embryo is obtained, it may be possible to implant it into a female Asian elephant housed in a zoo.[39] With the current knowledge and technology, it is still unlikely that the hybrid embryo would be carried through the two-year gestation.[47]


Virtual Reality

Definition: Virtual reality has been notoriously difficult to define over the years. Many people take "virtual" to mean fake or unreal, and "reality" to refer to the real world. This results in an oxymoron. The actual definition of virtual, however, is "to have the effect of being such without actually being such". The definition of "reality" is "the property of being real", and one of the definitions of "real" is "to have concrete existence". Using these definitions, "virtual reality" means "to have the effect of concrete existence without actually having concrete existence", which is exactly the effect achieved in a good virtual reality system. There is no requirement that the virtual environment match the real world.

Inspired by these considerations, for the virtual windtunnel we adapt the following definition: virtual reality is the use of computer technology to create the effect of an interactive three-dimensional world in which the objects have a sense of spatial presence. In this definition, "spatial presence" means that the objects in the environment effectively have a location in three-dimensional space relative to and independent of your position. Note that this is an effect, not an illusion. The basic idea is to present the correct cues to your perceptual and cognitive system so that your brain interprets those cues as objects "out there" in the three-dimensional world. These cues have been surprisingly simple to provide using computer graphics: simply render a three-dimensional object (in stereo) from a point of view which matches the positions of your eyes as you move about. If the objects in the environment interact with you, then the effect of spatial presence is greatly heightened.

Note also that we do not require that the virtual reality experience be "immersive". While for some applications the sense of immersion is highly desirable, we do not feel that it is required for virtual reality. The main point of virtual reality, and the primary difference between conventional three-dimensional computer graphics and virtual reality, is that in virtual reality you are working with Things as opposed to Pictures of Things.

Requirements: The primary requirement of virtual reality is that the scene be re-rendered from your current point of view as you move about. The frame rate at which the scene must be re-rendered depends on the application. For applications like the virtual windtunnel, it turns out that a minimum frame rate of 10 frames per second is enough to support the sense of spatial presence. While motion at this frame rate is clearly discontinuous, if properly done our cognitive systems will interpret the resulting images as three-dimensional objects "out there".

The other requirement is that interactive objects in the environment continuously respond to your commands after only a small delay. Just how long a delay can be tolerated depends on the application, but for applications like the virtual windtunnel delays of up to about a tenth of a second can be allowed. Longer delays result in a significantly degraded ability to control objects in the virtual environment.

We summarize the Virtual Reality Performance Requirements: the scene must be re-rendered from the user's current point of view at a minimum rate of about 10 frames per second, and interactive objects must respond to user commands with a delay of no more than roughly a tenth of a second.
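The sketch below (not code from the virtual windtunnel; the callbacks get_viewpoint, get_pending_commands, apply_command, and render_scene are hypothetical placeholders supplied by the caller) shows how those two budgets might be checked inside a single frame of a rendering loop.

```python
import time

MIN_FPS = 10          # minimum frame rate needed for a sense of spatial presence
MAX_LATENCY = 0.10    # seconds; rough upper bound on command-response delay

def frame(get_viewpoint, get_pending_commands, apply_command, render_scene):
    """Run one frame: handle pending commands, then re-render from the current viewpoint."""
    start = time.perf_counter()

    # Respond to interaction first, so command latency stays inside its budget.
    for issued_at, command in get_pending_commands():
        apply_command(command)
        latency = time.perf_counter() - issued_at
        if latency > MAX_LATENCY:
            print(f"warning: command latency {latency:.3f}s exceeds {MAX_LATENCY}s budget")

    # Re-render the scene from wherever the user's eyes are right now.
    render_scene(get_viewpoint())

    elapsed = time.perf_counter() - start
    if elapsed > 1.0 / MIN_FPS:
        print(f"warning: frame took {elapsed:.3f}s, below {MIN_FPS} frames per second")
```

The key point mirrors the text: rendering is driven by the user's current viewpoint on every frame, while command handling is kept inside its own, stricter latency budget.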

For more information on VR see the papers found on Steve Bryson's home page.


Computer Business Review – Computer Business Review

Global information technology research and communications analysis for the business world.

Computer Business Review magazine and the CBRonline.com web site provide the most targeted offline and online platforms to reach Europe's business technology elite.

Computer Business Review magazine was launched in 1993 with the aim of bridging the gap between the traditional technical IT press and the business press sectors. Computer Business Review is now widely regarded throughout Europe as The Economist of the IT industry.

Computer Business Review magazine and CBRonline.com are part of Progressive Trade Media, a leading publishing and research company.

CBRonline.com is a quality technology website, delivering a wide variety of daily news, reports and analysis on the global technology industry. The website delivers a wide range of content which is updated throughout every business day, attracting users from the corporate technology market.

Whether planning an integrated campaign with print media, or solely targeting an online audience, Computer Business Review magazine and CBRonline.com are able to offer you market-leading opportunities to reach your target audience.


Technology News – The New York Times

Latest Articles

Mike Isaac live-tweeted Mark Zuckerberg's testimony at a federal court last week until it almost got him booted from the courtroom.

By MIKE ISAAC

On its own, the debacle of Samsung's exploding smartphones was bad. What it seems to say about the state of South Korean industry may be worse.

By QUENTIN HARDY

The Swedish authorities arrested three men on suspicion of rape and urged people with access to images showing the episode to make them available to the police.

The internet company reported positive numbers in its most recent quarterly report, but it is still dealing with the aftermath of two major data breaches.

By VINDU GOEL

A reader asks about the now-ubiquitous cloud. Quentin Hardy, The Times's deputy technology editor, considers the question.

By QUENTIN HARDY

Snapchat is known for its fun and ephemeral messaging service, but what its recent move shows is that it wants to rule the trust industry.

By QUENTIN HARDY

Sprint bought a third of the service, which has struggled in a field dominated by streaming giants like Apple Music, Spotify and Pandora.

By BEN SISARIO

A countrywide, top-down corporate culture stifles South Korean innovation and may have contributed to the company's problems, critics say.

By CHOE SANG-HUN and PAUL MOZUR

The company said it would form an outside advisory group and focus on quality assurance but offered little insight into the breakdowns that caused it to fail to identify the phone's problems.

By PAUL MOZUR

The airline did not describe the source of the problem, which forced the grounding of domestic flights for two and a half hours, but said it was not the result of a hack.

By NIRAJ CHOKSHI

As A.I. applications become more sophisticated, the music that companies like Jukedeck produce has started wading into the commercial domain of actual musicians.

By ALEX MARSHALL

The Taiwanese company, which makes iPhones for Apple, has plans for a $7 billion American investment.

By REUTERS

A review of important developments in the tech industry.

By MIKE ISAAC and SAPNA MAHESHWARI

The chip maker's technology is at the heart of the smartphone revolution. But as the company's influence grows, it is gaining unwanted antitrust attention.

By QUENTIN HARDY

Amit Singhal, a 15-year Google veteran, said he was joining Uber, a coup for a company that has publicly stated its intention to chase Google in autonomous-vehicle research.

By MIKE ISAAC and DAISUKE WAKABAYASHI

Most smartphones can store a visual history of your travels based on your GPS data, but you can turn it off.

How to keep up with the events on television, online and on mobile devices, and a security to-do list if you are there.

By PUI-WING TAM

The quirkily personal Instagram accounts of taste-making specialists have become the soft power of today's traders.

By SCOTT REYBURN

The highway agency found that while Tesla's Autopilot feature didn't prevent a crash in Florida, the system performed as it was intended.

By NEAL E. BOUDETTE

Gov. Andrew Cuomo's proposed budget is now calling the embattled initiative, which created just 408 jobs in two years, the Excelsior Business Program.

By VIVIAN YEE


An interview with Zoltan Istvan, leader of the Transhumanist …

ExtremeTech has never been particularly interested in politics. That being said, as the focus of politics and politicians inexorably shifts towards technology, we might just jump in the water for a dip.

Many might imagine that concerns of a more socio-political nature (like who is able to accrue what particular powers or possessions, and from whom) would persist independently of technological influence. Others, like the Transhumanist Party founder Zoltan Istvan, might offer that socio-political issues already are, at heart, technological issues. Now seizing the day, and a rapidly expanding number of like-minded transhumanists, Istvan has announced that he will be a contender in the 2016 US presidential race.

If you haven't heard of transhumanism, or you're not quite sure what it means, I suggest you read our introductory story about transhumanism before diving into the rest of this story. In short, though, transhumanism (sometimes referred to as H+) is about improving or transforming the human condition through technology. Brain implants, genetic engineering, bionic limbs, indefinite life extension -- these are all examples of the topics (and elective surgeries) that a transhumanist would be interested in.

In his recent book The Transhumanist Wager, Istvan outlines three laws:

If energetically adopted, these deceptively simple maxims ultimately compel the individual to pursue a technologically enhanced and extended life. Zoltan and other supporters of transhumanism have come to see the choice to accept or reject these principles as something far more fundamental than the choice between liberal or conservative principles. In other words, it is a more compact predictor, a simpler explanation of your worldview, motivations, and actions than any current party provides.

It is for these reasons that Zoltan has founded the Transhumanist Party and is now taking this first major step to grow it. At this point in the game, the next major step -- getting access to all the state ballots -- could prove challenging. With these ideas in mind, we present an interview with (possibly) the next US president: Zoltan Istvan.

Zoltan Istvan

Why did you decide to run for the US presidency?

Zoltan Istvan: The most important goal of the Transhumanist Party and my 2016 presidential campaign is to spread awareness of transhumanism and to address the issue that society will be greatly changed by radical science and technology in the next 5-15 years. Most people are unaware of how significant these changes could be. For example, we might all be getting brain implants soon, or using driverless cars, or having personal drones follow us around and do our shopping for us. Things like anonymity in the social media age, gender roles, exoskeleton suits for unfit people, ectogenesis, and the promise of immersive virtual reality could significantly change the way society views itself. Transhumanism seeks to address these issues with forward-thinking ideas, safeguards, and policies. It aims to be a bridge to a scientific and tech-dominated future, regardless of what the species may eventually become.

While the Transhumanist Party has almost no chance of winning this election, its goal is to get on as many state ballots as possible, so people will see its promise and recognize what it stands for. By doing so, we'll let citizens know an exciting political movement is afoot that focuses on using technology and science to enhance the human species. And maybe sometime in the future, many people will want to join it. Furthermore, I'm hopeful other political parties will take notice of transhumanism and incorporate its ideas into their own philosophies.

On a final note, it's my hope that others will start to run for various political offices, both locally and nationally, under the Transhumanist Party banner. This way we can show the country that future politics should be far more science and technology inspired. This would be a great step for the direction of America.

