Amazon.com: Freedom: A Novel (Oprah’s Book Club …

A masterpiece of American fiction. The New York Times Book Review

Mr. Franzen has written his most deeply felt novel yet--a novel that turns out to be both a compelling biography of a dysfunctional family and an indelible portrait of our times. The New York Times

A work of total genius. New York Magazine

The Great American Novel. Esquire

One of the best living American novelists. Time

Epic. Vanity Fair

Hugely ambitious . . . Freedom is very, very good. USA Today

Brilliant . . . Epic . . . An extraordinary stylist. The Washington Post

A surprisingly moving and even hopeful epic. NPR

Sweeping and powerful. San Francisco Chronicle

Consuming and extraordinarily moving. Los Angeles Times

Immense and unforgettable. Chicago Tribune

Devastatingly insightful. The Miami Herald

A page turner that engages the mind. Newsday

It's refreshing to see a novelist who wants to engage the questions of our time in the tradition of 20th-century greats like John Steinbeck and Sinclair Lewis . . . [This] is a book you'll still be thinking about long after you've finished reading it. Associated Press

Deeply moving and superbly crafted . . . It's such a full novel, rich in description, broad in its reach and full of wry observations. Pittsburgh Post-Gazette

His writing is so gorgeous . . . Franzen is one of those exceptional writers whose works define an era and a generation, and his books demand to be read. St. Louis Post-Dispatch

A tour de force . . . one of the finest novelists of his generation. The Philadelphia Inquirer

A highly readable triumph of conventional realism . . . Addictive. The National

The first Great American Novel of the post-Obama era. Telegraph (UK)

A literary genius . . . This is simply on a different plane from other contemporary fiction . . . Freedom is the novel of the year, and the century. The Guardian (UK)

A triumph . . . A pleasure to read. The New York Observer

Exhilarating . . . Gripping . . . Moving . . . On a level with The Great Gatsby [and] Gone With the Wind. Bloomberg

Visit link:

Amazon.com: Freedom: A Novel (Oprah's Book Club ...

TSLA Stock | TESLA Stock Price Today | Markets Insider

Tesla launched its IPO on June 29, 2010. Trading on the NASDAQ, Tesla offered 13.3 million shares at a price of $17 per share. It raised a total of just over $226 million.

Tesla's stock price was essentially flat for several years after the 2010 IPO. There wasn't a lot going on. In 2008, the carmaker had endured a near-death experience, and in the lead-up to the IPO and afterwards, it was selling only one car, the original Roadster. The business plan at this point was for CEO Elon Musk and his team to keep the lights on long enough to roll out Tesla's first built-from-scratch car, the Model S sedan, which eventually happened in 2012.

Motor Trend named the Model S its 2013 Car of the Year, and it was at this point that Tesla's stock price took off. If you bought Tesla stock right after the IPO and held on, you'd be looking at a 1,000%-plus return today.
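As a rough sanity check on that figure, here is a minimal sketch in Python, assuming the $17 IPO price quoted above and an illustrative $300 share price (the level the article later says the stock broke); stock splits and any other adjustments are ignored.

```python
# Back-of-the-envelope return calculation (illustrative values only).
ipo_price = 17.0      # IPO price per share, from the article
later_price = 300.0   # illustrative later price; the article says $300 was broken

total_return_pct = (later_price - ipo_price) / ipo_price * 100
print(f"Total return: {total_return_pct:.0f}%")  # about 1665%, i.e. "1,000%-plus"
```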

Since the sudden growth in 2013, Tesla's stock price history has been one of extreme volatility, though a stable stock price was never expected or widely predicted. Investor confidence would soar, then collapse, with sentiment turning on every news event, product announcement or delay, quarterly earnings report, and market-moving tweet by Elon Musk.

At one point, Musk himself said that the company's stock price was overvalued. Unlike the rest of the industry, with its long business cycles and the slow, predictable stock price behavior of publicly traded carmakers, Tesla was behaving more like a Silicon Valley tech company.

Stock analysts fixated on the pace of deliveries as the best indicator of where Tesla's stock price was headed, wondering whether there was sufficient demand for Tesla's electric cars, in a market that otherwise didn't seem to want them, to justify the monumental valuation. Eventually, Tesla began reporting quarterly sales, mainly to give Wall Street analysts and stock investors something to go on.

In 2015, the long-awaited Model X SUV was added to the lineup, boosting sales and giving Tesla a vehicle with which to compete in the booming crossover market. But the Model X arrived three years late, and the tremendous complexity of the car meant that Tesla spent the first half of 2016 sorting out myriad production issues. Some compensation arrived in the form of the reveal of the Model 3 mass-market vehicle: Tesla quickly racked up 373,000 pre-orders for the car, at $1,000 a pop.

Despite the improving product line, Wall Street was losing the thread, and Tesla's stock price would routinely suffer. Tesla wasn't considered a very good car manufacturer in the traditional sense, consistently missing its delivery guidance, and investors began to figure this out. The stock's volatility had briefly faded, only to return, and until the tail end of 2016 Tesla endured a slow stock price slide. Fortunately for Musk, the company had executed a capital raise before the skepticism set in.

Since then, however, Tesla's stock price has climbed toward its all-time highs and broken $300 a share for the first time in the company's history. At first, it looked like a massive short squeeze (Tesla has always been a popular stock to short), but the stock has steadily consolidated its gains.

Tesla has had a highly volatile stock price that has at times baffled investors. There was only one period of smooth price growth, and it gave way to a reliable pattern of volatility that preceded a massive drop.

Up until the recent rallies, it could be argued Wall Street had figured out that Tesla was a car company, not a tech company, and had reset its expectations about growth for the stock price.

The rest is here:

TSLA Stock | TESLA Stock Price Today | Markets Insider

Posthuman Ethics, Pain and Endurance | Utrecht Summer School

Closed

Organizing institution

Utrecht University - Faculty of Humanities

Period

20 August 2018 - 24 August 2018 (1 week)

Course location(s)

Utrecht, The Netherlands

Credits

2.0 ECTS credits

Course code

C30

Course fee (excl. housing)

€ 300.00

Level

Advanced Master

Summer school application period is now closed

The intensive course Posthuman Ethics, Pain and Endurance offers an overview of the contemporary debates about the ethical implications of posthumanism and the so-called posthuman turn, as well as Rosi Braidotti's brand of critical posthuman theory. The focus of the course this year will be on the relationship between the posthuman and the neo-materialist, vital ethics of affirmation, with special emphasis on how they deal with the cluster of issues around the lived experience of pain.

The intensive course Posthuman Ethics, Pain and Endurance offers an overview of the contemporary debates about the ethical implications of posthumanism and the so-called posthuman turn, as well as Rosi Braidotti's brand of critical feminist posthuman theory. The focus of the course this year will be on the relationship between the posthuman and the neo-materialist, vital ethics of affirmation, with special emphasis on how they deal with the complex issues around the lived experiences of pain, resistance, suffering and dying. Deleuze famously describes ethics as the aspiration to live an anti-fascist life. What does this mean for posthuman subjects situated between the Fourth Industrial Revolution and the Sixth Extinction? In the brutal context of the Anthropocene and climate change, of rising populism, growing poverty and inequality, how does posthuman ethics help us to deal affirmatively with these challenges?

These issues will be outlined, explored and assessed by addressing the following questions: How does a vision of the posthuman subject as a transversal, affirmative process of interaction between human, non-human and inhuman forces help us cope with the complex and often painful challenges of the contemporary world? How does it affect the feminist quest for social justice, as well as environmental sustainability? How does it intersect with indigenous epistemologies and anti-racist politics? How does the neo-Spinozist notion of endurance foster the project of constructing an affirmative ethics for posthuman subjects? How does the idea of endurance connect to the philosophical tradition of neo-stoicism, and to Foucault's re-reading of it? How does a posthuman ethics of affirmation help us practically to confront the lived reality of pain, death and dying?

COMPULSORY READING:

The basic textbook for the course is The Posthuman Glossary (Bloomsbury Academic 2018), edited by Rosi Braidotti and Maria Hlavajova, which all participants are expected to buy.

BACKGROUND READING:

Please note that all participants are expected to have read Rosi Braidotti's book The Posthuman (Polity Press 2013) and, for an introduction to brutalism, the special issue of e-flux co-edited by Rosi Braidotti, Timotheus Vermeulen et alia, which can be found here: http://www.e-flux.com/journal/83/

Prof. dr. Rosi Braidotti

Prof. Dr. Rosi Braidotti (Utrecht University)

Dr. Rick Dolphijn (Utrecht University)

Lucas van der Velden (Sonic Acts)

Simone Bignall (Flinders University, South Australia)

This summer school course is meant for advanced MA students and up; advanced MA students, PhD students, postdocs, professors, independent researchers, artists and others are thus more than welcome.

Course fee: € 300.00

Housing: € 200.00, through Utrecht Summer School

For this course you are required to upload the following documents when applying: Motivation Letter, Reference Letter, C.V., Transcript of Grades

Application deadline: 16 July 2018

The rest is here:

Posthuman Ethics, Pain and Endurance | Utrecht Summer School

Satoshi Nakamoto, bitcoin's enigmatic creator – Brain scan

Print edition | Technology Quarterly | Sep 1st 2018

ON PAPER, or at least on the blockchain, Satoshi Nakamoto is one of the richest people on the planet. Bitcoin is a semi-anonymous currency and Mr Nakamoto is a pseudonymous person, so it is hard to be sure; but he is generally reckoned to own around 1.1m bitcoin, or around 5% of the total number that will ever exist. When bitcoin hit its peak of over $19,000, that made him worth around $20bn.
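Those two figures follow from simple arithmetic; here is a minimal sketch in Python, assuming the 1.1m-bitcoin estimate quoted above, the roughly $19,000 peak price, and the well-known 21m cap on total bitcoin supply.

```python
# Rough arithmetic behind the estimates above (all inputs are approximate).
holdings_btc = 1_100_000      # Nakamoto's generally reckoned stash
max_supply_btc = 21_000_000   # total bitcoin that will ever exist
peak_price_usd = 19_000       # approximate peak price cited in the article

share_of_supply = holdings_btc / max_supply_btc * 100
value_at_peak = holdings_btc * peak_price_usd
print(f"Share of eventual supply: {share_of_supply:.1f}%")  # ~5.2%
print(f"Value at the peak: ${value_at_peak / 1e9:.1f}bn")   # ~$20.9bn
```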

But Mr Nakamoto, though actively involved with his brainchild in its early history, has been silent since 2011. An army of amateur detectives has been trying to work out who he really is, but there is frustratingly little to go on. While developing bitcoin he claimed to be male, in his late 30s and living in Japan, but even that information is suspect. There are indications that he may have lived in an American time zone, but his English occasionally contains British idioms. Some of his goldbug-like comments about central banks that "debase the currency" and the evils of fractional-reserve banking led early cyber-libertarian bitcoin enthusiasts to claim him as one of their own. One thing is certain: he values his privacy. To register Bitcoin.org he used Tor, an online track-covering tool used by black-marketeers, journalists and political dissidents.

Still, the legions of sleuths have turned up various candidates, ranging from Japanese mathematicians to Irish graduate students. In 2014 Newsweek, a business magazine, fingered Dorian Prentice Satoshi Nakamoto, an American engineer. He emphatically denied the story, and the next day a forum account previously used by Mr Nakamoto posted, for the first time in five years, to say, "I am not Dorian Nakamoto", though there are doubts about that account, too.

Attention also focused on Hal Finney, an expert in cryptography, an experienced programmer and a dedicated cypherpunk. He was the recipient of the first-ever transaction conducted in bitcoin, with Mr Nakamoto as the sender. He died in 2014. Andy Greenberg, a journalist who studied private e-mails between Mr Finney and Mr Nakamoto, concluded that he was probably not bitcoin's creator. And Mr Finney himself always denied that he was Mr Nakamoto.

Conversely, in 2016 Craig Wright, an Australian computer scientist, explicitly claimed that he was the man everyone was looking for. He invited several news organisations, including The Economist, to witness him prove his claim by using cryptographic keys that supposedly belonged to Mr Nakamoto. He did not convince his audience, so he said he would settle the matter by moving a bitcoin from Mr Nakamoto's stash. He later decided against it when an online story suggested he could face arrest if he confirmed he was bitcoin's creator, on the ground of enabling terrorism. But the story turned out to be a fake.

According to another theory, Mr Nakamoto is actually a group of people. But for now his, or their, identity remains a mystery. Some think his withdrawal was a matter of principle, to underline the point of a decentralised currency. Perhaps he simply wants a quiet life.

Read on: Dividing the cryptocurrency sheep from the blockchain goats

Originally posted here:

Satoshi Nakamoto, bitcoin's enigmatic creator - Brain scan

Political freedom – Wikipedia

Political freedom (also known as political autonomy or political agency) is a central concept in history and political thought and one of the most important features of democratic societies.[1] Political freedom was described as freedom from oppression[2] or coercion,[3] the absence of disabling conditions for an individual and the fulfillment of enabling conditions,[4] or the absence of life conditions of compulsion, e.g. economic compulsion, in a society.[5] Although political freedom is often interpreted negatively as the freedom from unreasonable external constraints on action,[6] it can also refer to the positive exercise of rights, capacities and possibilities for action, and the exercise of social or group rights.[7] The concept can also include freedom from "internal" constraints on political action or speech (e.g. social conformity, consistency, or "inauthentic" behaviour).[8] The concept of political freedom is closely connected with the concepts of civil liberties and human rights, which in democratic societies are usually afforded legal protection from the state.

Various groups along the political spectrum naturally differ on what they believe constitutes "true" political freedom.

Left-wing political philosophy generally couples the notion of freedom with that of positive liberty, or the enabling of a group or individual to determine their own life or realize their own potential. Freedom, in this sense, may include freedom from poverty, starvation, treatable disease, and oppression, as well as freedom from force and coercion, from whomever they may issue.

Friedrich Hayek, a classical liberal, criticized this as a misconception of freedom:

[T]he use of "liberty" to describe the physical "ability to do what I want", the power to satisfy our wishes, or the extent of the choice of alternatives open to us... has been deliberately fostered as part of the socialist argument... the notion of collective power over circumstances has been substituted for that of individual liberty.[9]

Anarcho-socialists see negative and positive liberty as complementary concepts of freedom. Such a view of rights may require utilitarian trade-offs, such as sacrificing the right to the product of one's labor or freedom of association for less racial discrimination or more subsidies for housing. Social anarchists describe the negative liberty-centric view endorsed by capitalism as "selfish freedom".[10]

Anarcho-capitalists see negative rights as a consistent system. Ayn Rand described it as "a moral principle defining and sanctioning a man's freedom of action in a social context." To such libertarians, positive liberty is contradictory, since so-called rights must be traded off against each other, debasing legitimate rights which, by definition, trump other moral considerations. Any alleged "right" which calls for an end result (e.g. housing, education, medical services) produced by people is, in effect, a purported "right" to enslave others.[citation needed]

Some notable philosophers, such as Alasdair MacIntyre, have theorized freedom in terms of our social interdependence with other people.[11]

American economist Milton Friedman, in his book Capitalism and Freedom, argues that there are two types of freedom: political freedom and economic freedom. Friedman asserted that without economic freedom, there cannot be political freedom. This idea was contested by Robin Hahnel in his article "Why the Market Subverts Democracy." Hahnel points out a set of issues with Friedman's understanding of economic freedom: that there will in fact be infringements on the freedom of others whenever anyone exercises their own economic freedom, and that such infringements can only be avoided if there is a precisely defined property rights system, which Friedman fails to provide or specify directly.[12][13]

According to political philosopher Nikolas Kompridis, the pursuit of freedom in the modern era can be broadly divided into two motivating ideals: freedom as autonomy or independence; and freedom as the ability to cooperatively initiate a new beginning.[14]

Political freedom has also been theorized in its opposition to (and as a condition of) "power relations", or the power of "action upon actions," by Michel Foucault.[15] It has also been closely identified with certain kinds of artistic and cultural practice by Cornelius Castoriadis, Antonio Gramsci, Herbert Marcuse, Jacques Rancière, and Theodor Adorno.

Environmentalists often argue that political freedoms should include some constraint on use of ecosystems. They maintain there is no such thing, for instance, as "freedom to pollute" or "freedom to deforest" given that such activities create negative externalities, which violates other groups' liberty to not be exposed to pollution. The popularity of SUVs, golf, and urban sprawl has been used as evidence that some ideas of freedom and ecological conservation can clash. This leads at times to serious confrontations and clashes of values reflected in advertising campaigns, e.g. that of PETA regarding fur.

John Dalberg-Acton stated that "The most certain test by which we judge whether a country is really free is the amount of security enjoyed by minorities."[16]

Gerald MacCallum spoke of a compromise between positive and negative freedoms: an agent must have full autonomy over themselves. Freedom, on this view, is a triadic relation, because it is about three things: the agent, the constraints they need to be free from, and the goal they are aspiring to.[17]

Hannah Arendt traces freedom's conceptual origins to ancient Greek politics.[1] According to her study, the concept of freedom was historically inseparable from political action. Politics could only be practiced by those who had freed themselves from the necessities of life, so that they could participate in the realm of political affairs. According to Arendt, the concept of freedom became associated with the Christian notion of freedom of the will, or inner freedom, around the 5th century CE and since then, freedom as a form of political action has been neglected, even though, as she says, freedom is "the raison d'être of politics."[18]

Arendt says that political freedom is historically opposed to sovereignty or will-power, since in ancient Greece and Rome, the concept of freedom was inseparable from performance, and did not arise as a conflict between the "will" and the "self." Similarly, the idea of freedom as freedom from politics is a notion that developed in modern times. This is opposed to the idea of freedom as the capacity to "begin anew," which Arendt sees as a corollary to the innate human condition of natality, or our nature as "new beginnings and hence beginners."[19]

In Arendt's view, political action is an interruption of automatic process, either natural or historical. The freedom to begin anew is thus an extension of "the freedom to call something into being which did not exist before, which was not given, not even as an object of cognition or imagination, and which therefore, strictly speaking, could not be known."[20]

Read the original post:

Political freedom - Wikipedia

TRANCE Formation Of America

You may order this book from our ebay Store!

25-year veteran US Government whistleblowers Mark and Cathy are arming you with the self-applying, concise facts they teach leading mental health professionals worldwide. Whether your traumatic experience peaks at the top of PTSD's sliding scale the way Cathy's Pentagon-level MK Ultra mind control programming did, or stems from the horrors of war, or is simply the result of socially engineered information control and fears, this book is for you. These step-by-step healing methods, which intelligence insider Mark Phillips taught Cathy, can help anyone willing to reclaim control over their own mind and life just as she did.

Their journey to release PTSD: Time to Heal has been politically strenuous until now! Positive change through public awareness and overwhelming global demand prompted this release of otherwise suppressed, easy-to-follow, step-by-step healing methods Mark taught Cathy for successfully reclaiming her mind and life after decades of torturous MK Ultra mind control.

Since PTSD: Time to Heal is a workbook journal, it is not authorized and licensed in eBook form. Get your printed copy here. From its cover to unconventional layout to insights within, PTSD: Time to Heal reverberates with introspective inspirations.

Read the rest here:

TRANCE Formation Of America

PD Neurotechnology – Official Site

PDMonitor is a non-invasive continuous monitoring system for use by patients with Parkinson's disease.

The system is composed of a set of wearable Monitoring Devices, a mobile application, which enables patients/caregivers to record medication, nutrition and non-motor status information as complementary information for the motor symptom assessment, and a physician tool, which graphically presents to the healthcare professional all patient related information.

It is intended to trace, record, process and store a variety of motor and non-motor symptoms frequently presented in Parkinson's disease, through the continuous use of a set of wearable monitoring devices.

The system can be used at any stage of the disease after its initial diagnosis and when the patients are under medical treatment.

Movement information derived from the recordings, together with disease symptoms and their intensities, is presented to the treating healthcare professional in a comprehensive way after appropriate data processing. The reports are at the disposal and judgment of the attending healthcare professional and could allow for a better, more objective assessment and understanding of the patient's symptoms related to Parkinson's disease through the Physician Tool.

The motor symptom information is accompanied by other data, collected through the smartphone of the patient/caregiver, related to the patient's lifestyle, cognitive condition, diet, activity, etc. (i.e. non-motor symptoms). The system can provide a picture of the patient's health status to the healthcare professional, along with detailed information over various time periods and a friendly environment for the healthcare professional to make a change in the patient's therapeutic plan, which can be communicated through the PDMonitor system to the patient/caregiver.

Three actors compose the PDMonitor ecosystem: (a) patients at any stage of the disease; (b) caregivers, formal (nurses, volunteers) or informal (relatives, family), appointed for specific patients; and (c) healthcare professionals (medical doctors: neurologists specializing in movement disorders, other neurologists, or general practitioners).

The system provides a closed loop of interaction among the patient, the caregiver and the medical doctor, and at the same time provides a repository of most of the patient's health-status-related data.
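To make the shape of that ecosystem concrete, here is a purely hypothetical sketch in Python of how the three actors and a single processed symptom record might be modelled; none of these class or field names come from PD Neurotechnology's documentation.

```python
# Hypothetical data model for the PDMonitor ecosystem described above.
# All names and fields are illustrative assumptions, not the vendor's API.
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class SymptomRecord:
    timestamp: datetime
    symptom: str        # e.g. "tremor" (motor) or "sleep quality" (non-motor)
    intensity: float    # processed estimate derived from the wearable devices

@dataclass
class Patient:
    patient_id: str
    records: List[SymptomRecord] = field(default_factory=list)  # from the wearables
    diary: List[str] = field(default_factory=list)               # medication/nutrition notes from the app

@dataclass
class Caregiver:
    name: str
    patients: List[Patient] = field(default_factory=list)        # patients they are appointed to

@dataclass
class HealthcareProfessional:
    name: str
    certified: bool                                               # trained and registered with PD Neurotechnology
    patients: List[Patient] = field(default_factory=list)        # followed through the physician tool
```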

The patient can acquire the system from PD Neurotechnology Ltd and be paired with a healthcare professional, who follows up on the monitoring data and communicates with the patient/caregiver through the system. The patient can use the system only after reaching an agreement with the healthcare professional, who must be trained and registered with PD Neurotechnology Ltd in the use of the PDMonitor system (certified by PD Neurotechnology Ltd as a PDMonitor Healthcare Professional). Both the patient and the healthcare professional are registered users of the PDMonitor system.

Any patient suffering from Parkinson's disease who is treated by a healthcare professional has access to the system.

Throughout the year, the patient is able to follow his or her progress through the mobile application, perform simple tasks through the dedicated section of the application, and interact with the doctor and the caregiver in a simple, coherent and value-adding manner.

See the original post:

PD Neurotechnology - Official Site

The Ron Paul Institute for Peace and Prosperity : Can't We …

Assad was supposed to be gone already. President Obama thought it would be just another regime change operation and perhaps Assad would end up like Saddam Hussein or Yanukovych. Or maybe even Gaddafi. But he was supposed to be gone. The US spent billions to get rid of him and even provided weapons and training to the kinds of radicals that attacked the United States on 9/11.

But with the help of his allies, Assad has nearly defeated this foreign-sponsored insurgency.

The US fought him every step of the way. Each time the Syrian military approached another occupied city or province, Washington and its obedient allies issued the usual warnings that Assad was not liberating territory but was actually seeking to kill more of his own people.

Remember Aleppo, where the US claimed Assad was planning mass slaughter once he regained control? As usual the neocons and the media were completely wrong. Even the UN has admitted that with Aleppo back in the hands of the Syrian government hundreds of thousands of Syrians have actually moved back. We are supposed to believe they willingly returned so that Assad could kill them?

The truth is Aleppo is being rebuilt. Christians celebrated Easter there this spring for the first time in years. There has been no slaughter once al-Qaeda and ISIS hold was broken. Believe me, if there was a slaughter we would have heard about it in the media!

So now, as the Syrian military and its allies prepare to liberate the final Syrian province of Idlib, Secretary of State Mike Pompeo again warns the Syrian government against re-taking its own territory. He tweeted on Friday: "The three million Syrians, who have already been forced out of their homes and are now in Idlib, will suffer from this aggression. Not good. The world is watching."

President Trump's National Security Advisor, John Bolton, has also warned the Syrian government that the US will attack if it uses gas in Idlib. Of course, that warning serves as an open invitation to rebels currently holding Idlib to set off another false flag and enjoy US air support.

Bolton and Pompeo are painting Idlib as a peaceful province resisting the violence of an Assad who, they claim, just enjoys killing his own people. But who controls Idlib province? President Trump's own Special Envoy for the Global Coalition to Counter ISIS, Brett McGurk, said in Washington just last year: "Idlib province is the largest al-Qaeda safe haven since 9/11, tied directly to Ayman al-Zawahiri. This is a huge problem."

Could someone please remind Pompeo and Bolton that al-Qaeda are the bad guys?

After six years of a foreign-backed regime-change operation in Syria, where hundreds of thousands have been killed and the country nearly fell into the hands of ISIS and al-Qaeda, the Syrian government is on the verge of victory. Assad is hardly a saint, but does anyone really think al-Qaeda and ISIS are preferable? After all, how many Syrians fled the country when Assad was in charge versus when the US-backed rebels started taking over?

Americans should be outraged that Pompeo and Bolton are defending al-Qaeda in Idlib. It's time for the neocons to admit they lost. It is time to give Syria back to the Syrians. It is time to pull the US troops from Syria. It is time to just leave Syria alone!

Excerpt from:

The Ron Paul Institute for Peace and Prosperity : Can't We ...

Ron Paul: The Mueller Indictments & The Triumph Of The Deep …

Authored by Ron Paul via The Ron Paul Institute for Peace & Prosperity,

The term "deep state" has been so over-used in the past few years that it may seem meaningless. It has become standard practice to label one's political adversaries as representing the deep state as a way of avoiding the defense of one's positions. President Trump has often blamed the deep state for his political troubles. Trump supporters have created big conspiracies involving the deep state to explain why the president places neocons in key positions or fails to fulfill his campaign promises.

But the deep state is no vast and secret conspiracy theory. The deep state is real, it operates out in the open, and it is far from monolithic. The deep state is simply the permanent, unelected government that continues to expand its power regardless of how Americans vote.

There are factions of the deep state that are pleased with President Trump's policies, and in fact we might say that President Trump represents some factions of the deep state.

Other factions of the deep state are determined to undermine any of President Trump's actions they perceive as threatening. Any move toward peace with Russia is surely something they feel to be threatening. There are hundreds of billions of reasons, otherwise known as dollars, why the Beltway military-industrial complex is terrified of peace breaking out with Russia and will do whatever it takes to prevent that from happening.

That is why Deputy Attorney General Rod Rosenstein's indictment on Friday of 12 Russian military intelligence officers for allegedly interfering in the 2016 US presidential election should immediately raise some very serious questions.

First the obvious: after more than a year of investigations which have publicly revealed zero collusion between the Trump campaign and Russia, why drop this bombshell of an allegation at the end of the news cycle on the last business day before the historic Trump/Putin meeting in Helsinki? The indictment could not have been announced a month ago, or in two weeks? Is it not suspicious that no one is now talking about reducing tensions with Russia, but everyone is all of a sudden, thanks to Special Counsel Robert Mueller, talking about increasing tensions?

Unfortunately most Americans don't seem to understand that indictments are not evidence. In fact they are often evidence-free, as is this indictment.

Did the Russian government seek to interfere in the 2016 US presidential elections? It's certainly possible; however, we don't know. None of the Justice Department's assertions have been tested in a court of law, as is thankfully required by our legal system. It is not enough to make an allegation, as Mueller has done. You have to prove it.

That is why we should be very suspicious of these new indictments. Mueller knows he will never have to defend his assertions in a court of law so he can make any allegation he wants.

It is interesting that one of the Russian companies indicted by Mueller earlier this year surprised the world by actually entering a not guilty plea and demanding to see Mueller's evidence. The Special Counsel proceeded to file several motions to delay the hand-over of his evidence. What does Mueller have to hide?

Meanwhile, why is no one talking about the estimated 100 elections the US government has meddled in since World War II? Maybe we need to get our own house in order?

Follow this link:

Ron Paul: The Mueller Indictments & The Triumph Of The Deep ...

Webb vs Hubble Telescope – Webb/NASA

Webb often gets called the replacement for Hubble, but we prefer to call it a successor. After all, Webb is the scientific successor to Hubble; its science goals were motivated by results from Hubble. Hubble's science pushed us to look to longer wavelengths to "go beyond" what Hubble has already done. In particular, more distant objects are more highly redshifted, and their light is pushed from the UV and optical into the near-infrared. Thus observations of these distant objects (like the first galaxies formed in the Universe, for example) require an infrared telescope.

The other reason that Webb is not a replacement for Hubble is that its capabilities are not identical. Webb will primarily look at the Universe in the infrared, while Hubble studies it primarily at optical and ultraviolet wavelengths (though it has some infrared capability). Webb also has a much bigger mirror than Hubble. This larger light-collecting area means that Webb can peer farther back into time than Hubble is capable of doing. Hubble is in a very close orbit around the earth, while Webb will be 1.5 million kilometers (km) away at the second Lagrange (L2) point.

Read on to explore some of the details of what these differences mean.

Webb will observe primarily in the infrared and will have four science instruments to capture images and spectra of astronomical objects. These instruments will provide wavelength coverage from 0.6 to 28 micrometers (or "microns"; 1 micron is 1.0 x 10^-6 meters). The infrared part of the electromagnetic spectrum goes from about 0.75 microns to a few hundred microns. This means that Webb's instruments will work primarily in the infrared range of the electromagnetic spectrum, with some capability in the visible range (in particular in the red and up to the yellow part of the visible spectrum).

The instruments on Hubble can observe a small portion of the infrared spectrum from 0.8 to 2.5 microns, but its primary capabilities are in the ultra-violet and visible parts of the spectrum from 0.1 to 0.8 microns.

Why are infrared observations important to astronomy? Stars and planets that are just forming lie hidden behind cocoons of dust that absorb visible light. (The same is true for the very center of our galaxy.) However, infrared light emitted by these regions can penetrate this dusty shroud and reveal what is inside.

At left are infrared and visible light images from the Hubble Space Telescope of the Monkey Head Nebula, a star-forming region. A jet of material from a newly forming star is visible in one of the pillars, just above and left of centre in the right-hand image. Several galaxies are seen in the infrared view, much more distant than the columns of dust and gas.

The Earth is 150 million km from the Sun and the moon orbits the earth at a distance of approximately 384,500 km. The Hubble Space Telescope orbits around the Earth at an altitude of ~570 km above it. Webb will not actually orbit the Earth - instead it will sit at the Earth-Sun L2 Lagrange point, 1.5 million km away!
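To put those distances on one scale, here is a minimal sketch in Python that converts each of the figures quoted above into a one-way light travel time; the numbers are the article's round values, not precise orbital parameters.

```python
# One-way light travel time for the distances quoted in the text above.
SPEED_OF_LIGHT_KM_S = 299_792  # km per second

distances_km = {
    "Hubble (low Earth orbit)": 570,
    "Moon": 384_500,
    "Webb at the Sun-Earth L2 point": 1_500_000,
    "Sun": 150_000_000,
}

for name, d in distances_km.items():
    print(f"{name}: {d:>11,} km, light takes {d / SPEED_OF_LIGHT_KM_S:.3f} s")

# Webb will sit roughly four times farther from Earth than the Moon, so a radio
# signal takes about five seconds each way, versus milliseconds for Hubble.
```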

Because Hubble is in Earth orbit, it was able to be launched into space by the space shuttle. Webb will be launched on an Ariane 5 rocket and because it won't be in Earth orbit, it is not designed to be serviced by the space shuttle.

At the L2 point Webb's solar shield will block the light from the Sun, Earth, and Moon. This will help Webb stay cool, which is very important for an infrared telescope.

As the Earth orbits the Sun, Webb will orbit with it - but stay fixed in the same spot with relation to the Earth and the Sun, as shown in the diagram to the left. Actually, satellites orbit around the L2 point, as you can see in the diagram - they don't stay completely motionless at a fixed spot.

Because of the time it takes light to travel, the further away an object is, the further back in time we are looking.

This illustration compares various telescopes and how far back they are able to see. Essentially, Hubble can see the equivalent of "toddler galaxies" and the Webb Telescope will be able to see "baby galaxies". One reason Webb will be able to see the first galaxies is because it is an infrared telescope. The universe (and thus the galaxies in it) is expanding. When we talk about the most distant objects, Einstein's general relativity actually comes into play. It tells us that the expansion of the universe means it is the space between objects that actually stretches, causing objects (galaxies) to move away from each other. Furthermore, any light in that space will also stretch, shifting that light's wavelength to longer wavelengths. This can make distant objects very dim (or invisible) at visible wavelengths of light, because that light reaches us as infrared light. Infrared telescopes, like Webb, are ideal for observing these early galaxies.
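As a minimal worked example of that wavelength stretching, the sketch below applies the standard redshift relation, observed wavelength = rest wavelength x (1 + z), to hydrogen's ultraviolet Lyman-alpha line; the line wavelength and the sample redshifts are standard illustrative values, not figures taken from this page.

```python
# Why highly redshifted galaxies need an infrared telescope:
# observed_wavelength = rest_wavelength * (1 + z)
LYMAN_ALPHA_REST_MICRONS = 0.1216  # ultraviolet line emitted by hydrogen

for z in (0, 2, 7, 11):
    observed = LYMAN_ALPHA_REST_MICRONS * (1 + z)
    print(f"z = {z:>2}: observed at {observed:.2f} microns")

# At z of roughly 7-11 the line lands near 1-1.5 microns, well inside Webb's
# 0.6-28 micron coverage but at the edge of, or beyond, Hubble's sensitivity.
```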

The Herschel Space Observatory was an infrared telescope built by the European Space Agency - it too orbited the L2 point (where Webb will be).

The primary difference between Webb and Herschel is wavelength range: Webb goes from 0.6 to 28.5 microns; Herschel went from 60 to 500 microns. Webb is also larger, with a 6.5 meter mirror vs. Herschel's 3.5 meters.

The wavelength ranges were chosen by different science: Herschel looked for the extremes, the most actively star-forming galaxies, which emit most of their energy in the far-IR. Webb will find the first galaxies to form in the early universe, for which it needs extreme sensitivity in the near-IR.

At right is an infrared image of the Andromeda Galaxy (M31) taken by Herschel (orange) with an X-ray image from XMM-Newton superposed over it (blue).

Continued here:

Webb vs Hubble Telescope - Webb/NASA

What is cyberpunk? – Polygon

A woman doing her makeup as the camera slowly pulls out to reveal she's missing the bottom half of her face, a gaping cybernetic maw in its place. A cable jacked directly into a businessman's skull, sparking and smoking as it fries his brain. An elevator the size of an apartment, crawling up the side of a high-rise towards the sky.

These are just some of the fragmented vignettes studio CD Projekt Red put on display in Cyberpunk 2077's debut trailer earlier this year. As an introduction to Night City, it promised one of the most distinctive game settings since Rapture or City 17, but not much of its neon-soaked imagery is original. And that's by design.

With this game, CD Projekt Red is drawing from a long tradition, one that, unusually, is named right there in the title: cyberpunk. But what exactly does that mean, and where did it come from?

You can trace the roots of cyberpunk back through multiple generations, but the first definite milestone in its development was the book Do Androids Dream of Electric Sheep? Philip K. Dick's 1968 novel introduced the world to Rick Deckard, a bounty hunter tracking down a gang of escaped androids trying to pass as humans.

If that sounds familiar, it's because the book was adapted, over a decade later, into Blade Runner.

The movie's night-drenched cityscape, lit by plumes of flame from industrial towers and skyscraping video billboards, set the visual template for most cyberpunk going forward.

But that world didn't come from Dick's novel as much as it did The Long Tomorrow, a 1975 comic by French artist Moebius and screenwriter Dan O'Bannon. Moebius' conception of a grimy future city, where tightly-crammed tower blocks form a deep chasm around its inhabitants, inspired not only director Ridley Scott, but also Katsuhiro Otomo, whose Akira manga began publication in Japan in 1982 (the same year as Blade Runner's release), and an American-Canadian novelist named William Gibson. But we'll get back to him.

Between Do Androids Dream of Electric Sheep?'s questions about who counts as human in an age of androids, The Long Tomorrow's blending of film-noir tropes and science fiction, and Blade Runner's rain-slick realization of the city of the future, all the vital ingredients were in place for cyberpunk by the early 1980s. All it needed was a name.

"Understand, when I came up with the c-word in 1980, all I was trying to do was come up with a snappy, one-word title for this story," says Bruce Bethke, the writer who first coined cyberpunk, referring to a story he wrote about teenage hackers. "I was not trying to define a genre, launch a movement or do anything more than come up with a memorable one-word marketing label for this story that would, I hoped, compress the core idea down to a few syllables and, this was the important part, stick in an editor's mind and help me sell the thing to a magazine.

"Apparently I overdid it."

When the story was published in 1983, Bethke, "unintentionally and accidentally" in his words, tapped into something larger. The title was adopted as the name of a loose genre that was beginning to form, just in time for the arrival of what many feel has been its definitive work: Neuromancer, by William Gibson.

The 1984 novel tells the story of console cowboy Case, a hacker who crosses his criminal bosses and, as payback, gets his central nervous system trashed so badly he can no longer access the cyberspace matrix. At the start of the book, he's offered a second chance by a mysterious new employer. Case can be fixed, but only if he agrees to help with a string of heists which take him from Japan's Night City to the American Sprawl and eventually an orbital space station, in order to free a super-advanced AI.

The story blended crime and science fiction but, like Blade Runner before it, what really struck a chord was the world that Gibson laid out.

Neuromancer's vision of the future can be divided in two. Between the grubby, crime-filled meatspace and the bright glare of cyberspace. Between the people on the streets struggling to survive, and the aristocrats orbiting the planet, struggling to find ways to fill their artificially-extended lifespans. Between the aging remnants of our world (early in the book, Case buys a fifty-year-old Vietnamese imitation of a South American copy of a Walther PPK) and cutting-edge technology that lets people augment their bodies with new limbs, eyes, skin, as long as they can afford the bill.

Neuromancer marked out the boundaries of the genre, boundaries which were explored and cemented by the books that followed. Pat Cadigan's Mindplayers and Synners focused on the psychological implications of brain modification technology. Rudy Rucker's Ware series followed Neuromancer's thread of self-aware AI through to its logical conclusion, showing how the resulting mechanical lifeforms evolve through successive generations. Bruce Sterling's work, like Islands in the Net, was especially interested in the hacker subculture.

Sterling was something of a figurehead in the cyberpunk scene, earning the nickname "Chairman Bruce." He edited 1986's Mirrorshades: The Cyberpunk Anthology, a deliberately definitive collection of short stories that included work by Gibson, Cadigan and Rucker. In the preface to that book, Sterling wrote:

Certain central themes spring up repeatedly in cyberpunk. The theme of body invasion: prosthetic limbs, implanted circuitry, cosmetic surgery, genetic alteration. The even more powerful theme of mind invasion: brain-computer interfaces, artificial intelligence, neurochemistry techniques radically redefining the nature of humanity, the nature of the self.

Cyberpunk meshes these advanced technologies with more down-to-earth concerns like drugs, dive bars and desperation that turn people to crime. The ruling powers of cyberpunk worlds are almost always immense corporations who control access to technology. The protagonists tend to be outsiders: criminals and noir-style antiheroes who exist on the margins of society. There's an oft-quoted maxim by Sterling that sums it up nicely: "Lowlife and high-tech."

In 1988, cyberpunk made its first move to the tabletop with the release of Cyberpunk, a pen-and-paper roleplaying game written by Mike Pondsmith. It shares more than a name with Cyberpunk 2077: CD Projekt Red's video game is a direct adaptation, moving away from a group of people sitting around a table to a first-person, open world, single-player experience, but keeping its world, character classes and the input of its creator.

Pondsmith tells Polygon how he personally defines cyberpunk as a genre: "Street-level life crushed under overwhelming political and social forces, but which uses a combination of found/scavenged/repurposed technology to fight back and achieve personal freedom."

Pondsmith hadn't actually read the likes of Gibson and Sterling when he wrote the first iteration of Cyberpunk, he says, but he started to incorporate their ideas into the RPG's second edition, known as Cyberpunk 2020.

The once-molten form of cyberpunk was beginning to cool into something more solid. And for a genre where one of the key tenets is, to quote the rulebook of Cyberpunk 2020, to "break the rules," that's not necessarily a good thing.

"What happened to cyberpunk fiction was what happens to every successful new thing in any branch of pop culture," says Bruce Bethke. "It went from being something unexpected, fresh and original to being a trendy fashion statement, to being the flavor of the month, to being a repeatable commercial formula, to being a hoary trope."

The motifs of Gibson's Neuromancer turned into a kind of checklist. Stories of alienated loners in mirrored shades doing drugs and hacking computers quickly became the norm. So much so that in the early '90s, some of the most prominent cyberpunk books were those which pushed the formula to satirical extremes.

The opening chapter of Neal Stephenson's book Snow Crash introduces the improbably-named Hiro Protagonist, a Deliverator armed with twin samurai swords, driving a car with enough potential energy packed into its batteries to fire a pound of bacon into the Asteroid Belt, before revealing that he's actually a pizza delivery boy.

In his 1995 novel Headcrash, Bethke even skewered the genre he'd helped christen, writing: "They're total wankers and losers who indulge in Messianic fantasies about someday getting even with the world through almost-magical computer skills, but whose actual use of the Net amounts to dialing up the scatophilia forum and downloading a few disgusting pictures. You know, cyberpunks."

It looked like cyberpunk might have run its course (as early as 1993, a Wired magazine headline proclaimed "Cyberpunk R.I.P."), but what followed, as the millennium raced to its conclusion, made for possibly the genre's biggest moment in the spotlight. Its influence leaked outward, and the genre mutated in a dozen different directions as it entered the mainstream.

A large part of this came via Japan, as Akira inspired a wave of cyberpunk-infused manga and anime, including Battle Angel Alita, Serial Experiments Lain, Cowboy Bebop and, perhaps most famously, Ghost in the Shell, which in turn inspired the Wachowskis to make The Matrix. Meanwhile in games, Deus Ex laid the foundations that CD Projekt Red seems to be building on with Cyberpunk 2077. And Hideo Kojima, who had created the cyberpunk game Snatcher a decade earlier, took elements like cybernetics and artificial intelligence and applied them to the hugely successful spy game Metal Gear Solid.

How many of the above examples are true cyberpunk, however, is debatable. They certainly share some of the genres technological aesthetic (think of Keanu pulling a thick cable from his spine) and dress sense (all those mirrored sunglasses), but they dont all share the same thematic concerns.

Depending on who you listen to, cyberpunk became a case of, to borrow another of the mantras from 2020's rulebook, "style over substance." It's a criticism that Gibson himself echoed back in June, when he tweeted, "The trailer for Cyberpunk 2077 strikes me as GTA skinned-over with a generic 80s retro-future, but hey, that's just me."

Ultimately, though, cyberpunk has survived beyond its '80s roots because its appeal runs deeper than the surface layer of leather, chrome and neon. There is a clear focus on style, but it's born out of an understanding that the way people present themselves can tell you just as much about the culture they exist in as an expository info-dump.

"The great thing about cyberpunk is that it is recognizably our world, only in the future," says Lukas Litzsinger, the game designer responsible for the 2012 revival of cyberpunk card game Netrunner.

Cyberpunk authors like Gibson and Neal Stephenson predicted how technology would develop, and occasionally helped shape it (their books helped popularize terms like "cyberspace," "virus" and "avatar," and Stephenson's conception of the Metaverse has been claimed as an influence on everything from Google Earth to Xbox Live), but this isn't the most important thing about cyberpunk's vision of the future.

"It is a setting that is focused on the human experience, and how far we can push the limits of both technology and ourselves," says Litzsinger.

The writers who laid the foundation of cyberpunk looked at the accelerating pace of change in the late 20th century, and understood that technology would forever be an inseparable part of the human experience. This is still what makes the genre stand apart from other branches of sci-fi: the way it considers the social impact of technology on everyday life.

In Neuromancer, having a health-monitoring implant is just another reason you might get mugged if you step into the wrong part of town. In Netrunner, body augmentation is something a corporation can force on its employees to improve performance.

"For myself, the most interesting cyberpunk focuses on what it means to be human in a world that wishes to convert you into a corporate asset," says Ashley Yawns, writer at nerd-culture site Timber Owls. Yawns was a prominent voice in the Twitter discussions following Cyberpunk 2077's E3 reveal, weighing up the game's politics and in particular its presentation of body augmentation, a key component of cyberpunk.

"Body modification is a great avenue for empowering stories for groups routinely denied bodily autonomy: disabled people, trans people, women as a whole, etc.," says Yawns. The problem is that utopianism clashes with the impoverished lives cyberpunk depicts, immediately raising the question of who can afford these freedoms.

"Enabling bodily autonomy, alteration and restored function is a great thing, but as things stand, access for the majority means debt or servitude to malicious corporate monopolies," says Yawns. "Anyone who's experienced tech industry practices of planned obsolescence and covert data collection on their phone can imagine what these companies might do given access to your cybernetic limbs, let alone your whole nervous system.

"Liberating tech is often made into a yoke by its social context."

That last part is the biomechanically-enhanced heart of cyberpunk. William Gibson has often summed it up in interviews: "the future is already here, it's just not very evenly distributed." Cyberpunk worlds are about the gap between those who have access to their futuristic technologies and those who don't, a gap that's often expressed literally, in the verticality of its mega-cities.

Even as the future that writers like Gibson predicted starts to look increasingly out of date, technologically speaking, it's this core message which keeps the genre relevant.

"I personally think that any cyberpunk work worthy of the name needs to show that dehumanizing, unequal relationship of power and politics as part of its makeup," says Pondsmith. "You don't raise hell in a future where things are a Star Trekkian Utopia; you raise hell when all the forces in power are arrayed against you personally, and you have to fight back."

This is an element of Pondsmith's game that CD Projekt Red seems to be adapting faithfully. Cyberpunk 2077 is about "a world where a vanishingly small number of ultra-rich individuals at the top of intractable corporate power structures reign over a disintegrating world where the vast majority of the population lives in an endless cycle of poverty and violence," quest designer Patrick Mills recently told Official Xbox Magazine. "How different that is from our world depends a lot on your perspective, I suppose."

In a time when developers and publishers are insisting, against all evidence, that their games have no political message, it's a stark contrast to see Mills saying: "Cyberpunk is an inherently political genre and it's an inherently political franchise."

Litzsinger agrees: "To me, cyberpunk does feel inherently political in that its protagonists almost always operate on the fringes of the law, whether because of criminal activity or the inability for the law to keep up with technology. It can challenge us to think about the difference between something that is legal and something that is moral, and you will find a common thread of rebellion against the system in a lot of cyberpunk narratives."

Litzsinger's Netrunner, which is ending later this year, is one of the most politically aware incarnations of cyberpunk ever to exist. That might be surprising, given it's a card game, but the theming and flavor text of Netrunner's thousand-plus cards have given it plenty of room to flesh out its setting and worldview.

Whereas many cyberpunk works have stuck to the narrow focus on the U.S., Japan and China that was established in Neuromancer, the world of Netrunner has covered ... well, the entire world. It's made Ecuador the global centre of commerce, and has dedicated entire cycles of card packs to exploring India and Sub-Saharan Africa. Its roster of playable characters has retained a 50:50 balance between male and female, as well as including non-binary and transgender characters, and put Asian people in lead roles rather than just being an invisible part of the world, as in many western cyberpunk stories.

Representation is one area where Cyberpunk 2077 has run into a bit of controversy. The game's debut trailer featured just one Indian character, in the stereotypical role of taxi driver. The Twitter account of Dear Esther developer The Chinese Room accused the game's marketing of presenting women in a sexist manner, and CD Projekt Red recently tweeted a transphobic joke from its own Twitter account.

With nearly an hour to show us its world, a recently-released gameplay demo fares a little better, especially as the player can choose V's gender and race, but it's far from perfect. The majority of characters players meet are white, with the exception of black crime boss Dexter DeShawn and sweary Latino sidekick Jackie Welles, the latter having received criticism for stereotypical dialogue that drops random nuggets of Spanish into English sentences. And, in light of CD Projekt Red's long-debated attitude to female nudity, opening with a quest that involves an unconscious naked woman also invites questions.

Broadly, though, CD Projekt Red seems keen to stick to Pondsmith's original vision, which addressed everything from gentrification to corporate security forces. The developer's frame-by-frame trailer-breakdown blogs show it is thinking about topics like overexposure to advertising, gun laws and unevenly distributed technological wealth as part of its world building.

Cyberpunk, and science fiction in general, can take ideas from the grey of modern life and turn up the contrast. The for-profit medicine system becomes 2077's Trauma Team, a vital part of the gameplay demo's first quest: equal parts paramedic and paramilitary, ready to kill in order to save the lives of paying customers.

That's far from subtle, but these exaggerated futures can provide a helpful filter for examining our current political situation. Head to the cyberpunk subreddit and, as well as a wealth of fanart, you'll find people sharing the latest incursions of cyberpunk into our reality, whether it's police in AR headsets or a woman charging her bionic arm on the train.

Amazon and Walmart tracking their employees' movements and conversations to determine performance, dead celebrities being resurrected as holograms or CGI constructions, the rise of using crowdfunding platforms to fund life-saving surgeries ... all of these things have precedents in cyberpunk.

With the genre currently enjoying another pop cultural boom in recent years, weve had a sequel to Blade Runner and a live-action remake of Ghost in the Shell; Altered Carbon on Netflix; and Observer, EXAPunks and new instalments of Deus Ex on computers and consoles Cyberpunk 2077 looks likely to stand apart precisely because it isnt shying away from these political ideas.

As Pondsmith puts it: Cyberpunk is all about inequity and the threat of a future in which opportunity is unfairly distributed. Its about how the forces of big money and big government conspire to keep everyday citizens under control, and how those same citizens use unorthodox means to defeat that agenda.

Hell yes, it's political, now more than ever.

Read more here:

What is cyberpunk? - Polygon

New eugenics – Wikipedia

New eugenics, also known as neo-eugenics, consumer eugenics, liberal eugenics, and libertarian eugenics, is an ideology that advocates the use of reproductive and genetic technologies where the choice of enhancing human characteristics and capacities is left to the individual preferences of parents acting as consumers, rather than the public health policies of the state. The term "liberal eugenics" was coined by bioethicist Nicholas Agar.[1] Since around the year 2000, criticism has risen, with some preferring to call the theory "libertarian eugenics" because of its intention to keep the role of the state minimal in the advocated eugenics program.[2]

The term refers to an ideology of eugenics influenced by liberal theory and contrasted with the coercive state eugenics programs of the first half of the 20th century.[3] The sterilization of individuals alleged to have undesirable genes is the most controversial aspect of those programs.[1]

Historically, eugenics is often broken into the categories of positive (encouraging reproduction among the designated "fit") and negative (discouraging reproduction among the designated "unfit"). According to Edwin Black, many positive eugenic programs were advocated and pursued during the early 20th century, but the negative programs were responsible for the compulsory sterilization of hundreds of thousands of persons in many countries, and were contained in much of the rhetoric of Nazi eugenic policies of racial hygiene and genocide.[4] New eugenics belongs to the "positive eugenics" category allowing parents to select desirable traits in an unborn child.[5]

Dov Fox, a law professor at the University of San Diego, argues that liberal eugenics cannot be justified on the basis of the underlying liberal theory which inspires it. He introduces an alternative to John Rawls's social primary goods that might be called natural primary goods: heritable mental and physical capacities and dispositions that are valued across a range of projects and pursuits. He suggests that reprogenetic technologies like embryo selection, cellular surgery, and human genetic engineering, which aim to enhance "general purpose" traits in offspring are less like childrearing practices a liberal government leaves to the discretion of parents than like practices the state makes compulsory.[6]

Fox argues that if the liberal commitment to autonomy is important enough for the state to mandate childrearing practices such as health care and basic education, that very same interest is important enough for the state to mandate safe, effective, and functionally integrated genetic practices that act on analogous all-purpose traits such as resistance to disease and general cognitive functioning. He concludes that the liberal case for compulsory eugenics is a reductio ad absurdum against liberal theory.[6]

According to health care public policy analyst RJ Eskow, "libertarian eugenics" is the term that would more accurately describe the form of eugenics promoted by some notable proponents of liberal eugenics, in light of their strong opposition to even minimal state intervention in eugenic family planning, which would be expected of a social liberal state that assumes some responsibility for the welfare of its future citizens.[2]

The United Nations International Bioethics Committee wrote that liberal eugenics should not be confused with the ethical problems of the 20th century eugenics movements, but that it is still problematic because it challenges the idea of human equality and opens up new ways of discrimination and stigmatization against those who do not want or cannot afford the enhancements.[7]

Liberal eugenics is also known as new eugenics, consumer eugenics, reprogenetics, or designer progeny. The connotations of liberal eugenics are negative because of the association of eugenics with dark periods of history. According to the Harvard Law Review, the eugenics of the early 20th century were part of a false scientific justification for racism, classism, and colonial subjugation, purportedly concerned with genetic fitness. The new model of eugenics of the 21st century, called liberal eugenics, allegedly advocates for genetic modification including the screening of genes that cause serious disabilities and engineering children to be born with more desirable physical and mental traits.[8] Liberal eugenics is aimed at "improving" the genotypes of future generations through screening and genetic modification to eliminate "undesirable" traits.

Read more from the original source:

New eugenics - Wikipedia

Pantheism | Inters.org

The word pantheism comes from two Greek words: pân, which means "all," and theós, which means "god." Hence it is used to classify doctrines according to which all that is, is God, or which identify God and the world in various ways. J. Fay was the first to use the term "pantheism," in his work entitled Defensio religionis (1709). In an effort to defend theism, he criticized the theoretical positions of J. Toland, who in the work Socinianism Truly Stated (1705) defined himself as a "pantheist," and who also later entitled his last work Pantheisticon (1720).

The doctrine of pantheism, however, is much older than its name. Due to this long history, one must distinguish its various expressions throughout the ages. The first meaning of pantheism refers to transcendental pantheism, i.e., the very general idea that the world is a mere manifestation of God. This form of pantheism sees the divine only deep within things, and in particular in the soul. As a result, the creature becomes God only insofar as it liberates itself from the material shell of sensibility. This view dates all the way back to the Vedanta doctrines of India and found its highest expression in western Neo-platonism. The second meaning of pantheism is atheistic or immanent pantheism, or monism (see the article Atheism). It considers the divine as a vital energy animating the world from within, and thus has naturalistic and materialistic consequences. Finally, pantheism also assumes the meaning of a transcendental-immanent pantheism, according to which God not only reveals himself, but also realizes himself in all things. Such is the pantheism of Spinoza, for example, and that which, in diverse forms, is of interest to various idealistic currents of the modern age.

Here, the classical roots of pantheism will be introduced first, followed by a number of illustrative examples from the Renaissance and Modern Age. Next, I will indicate those conceptions of nature, implicitly favored by contemporary scientific thought or at times explicitly conveyed by it, that seem to maintain a certain relation with the pantheistic vision. Finally, the perspectives of Christian Revelation and of the Magisterium of the Church will be briefly outlined.

1. Archaic Eastern Conceptions and the Buddhist Perspective. It is possible to see in many Eastern religions and philosophies the original seed that gave rise to the pantheistic vision of the universe. Indeed, in many of these systems of thought, the idea of a personal God does not exist: God is understood as the sole existing reality and the world as nothing other than an appearance, an image, that in the end must be reabsorbed in some way. This idea is clear in the ancient religion of the Veda, which developed in India between the 15th and 9th centuries B.C., though the entire Vedic period was much more extended. Our knowledge of this religion comes to us from the Rig-Veda (Veda in Sanskrit means "knowledge"), a collection of ten poetic books. The Hindu religion, which derives from the Rig-Veda, cannot exactly be defined as a polytheistic religion, even less as a religion in the more literal sense of the word. In the Hindu or Vedic form of religiosity, the believer, although fully accepting and firmly believing in the existence of a supreme and unique Deity, chooses to venerate one of the Deity's particular aspects or his energy or perhaps one of his beneficial manifestations (cf. Poli and Rizzi, 1997, p. 85). In fact, in the Vedic religion, God comes to be identified with a natural object such as the sun, the luminous sky, or the rain. These divine beings have the function of protecting their devotees, but never assume anthropomorphic features. As a result of this, the Vedic-Hindu divinities never arrive at the mythical form, plastic and concrete, of the gods of ancient Greece.

From the Hinduism of the Veda, two diverse currents broke away, which could be considered its heresies: Jainism and Buddhism. Buddhism arose in India in the 6th and 5th centuries B.C., and then, thanks to its missionaries, was diffused throughout the entire Orient. For Buddha, observation of the world reveals that everything is a perpetual flowing: our consciousness only bears witness to us of the flowing of rapidly changing sensations, emotions, and concepts. Having thus denied the existence of the individual soul, the Buddha (whose name means illuminated) holds that it is not possible to know anything other than this perpetual becoming, beyond which it is impossible to find an immutable deity. For Buddhism, reality is reduced to a harmonic correspondence of objective and subjective elements, thus laying the foundations of an a-cosmic pantheism, which will be developed by some of its specific schools. This type of pantheism affirms that nothing exists in itself, but only inasmuch as it is correlated to others. Being alone is conceptual only: every being exists in relation to another. Individuality and singularity are erroneous assumptions. All things are nothing outside the absolute identity, which is void, the inexpressible, the non-conceptual (cf. Puech, 1970-1972). Does such a concept of Buddhism necessarily lead to a negation of the world? It is difficult to respond to this question. However, from an empirical point of view, the world is just as it appears, and beyond this veil of appearances there is the unknown and the unknowable, the void.

Buddhism, on account of its specific structure, is not a religion that may be codified in precise rules. Consequently, a considerable number of Buddhistic schools have formed, practically one for each country where it implanted itself. Interesting for our purposes here is Zen Buddhism, which originated in China and developed in Japan after the 12th century. In fact, not a small number of contemporary scientists seem to intend to refer to it in the construction of their systems and for some of their ideas about natural phenomena (emblematic is the work of F. Capra, The Tao of Physics. An Exploration of the Parallels between Modern Physics and Eastern Mysticism, 1975). Zen means meditation and its practice is an exercise that attempts to liberate the fundamental principle present in each of us from the impure ballast of the passions. The liberation of this principle can come about either through a more profound study of the sacred writings of Buddhism or by means of ascetic or esoteric practices. According to Watts (1959), Zen Buddhism became quite popular in American culture in the 1950s thanks to its philosophical ideas, which inserted themselves perfectly into the climate of philosophical relativism and utilitarian pragmatism.

2. Greek Thought and the Distinction of Beings in Being. The context that gave rise to certain forms of pantheism in ancient Greece was quite different. The Greek epic, which experienced its golden age during the early centuries of the first millennium before Christ and which found in Homer its greatest poet, gave the gods a personal character, unlike the primitive mythologies in which the divinity was essentially tied to natural phenomena. Anthropomorphized by the Greek epic, the gods were more accessible to the existential necessities of human beings, thus favoring the development of a cult civico-religious in character. Besides the Homeric epic, Hesiod's Theogony reconstructs the history of the gods before Zeus, going back even to the very origin of the various divine figures. For Hesiod, all the gods take their form, after various separations and successive generations, from an original Cháos, a word meaning above all the immensity of space, the immeasurable, the unlimited. In Greek thought, cháos acquired such a broad meaning as to be able easily to approach the philosophical meaning of the All, therefore also bringing to mind a certain idea of pantheism. Theogony slowly lays the foundation for a successive cosmogony, furnishing a series of archaic elements for the speculation about the origin of all things, some of which are still found in the natural philosophy of the Ionian thinkers, for example, Empedocles, who will ultimately hold the interaction of two cosmic forces, love and hate, responsible for the formation of all things.

Greek philosophical thought arose in response to a question that is simultaneously physical and religious and that inquires into totality and multiplicity: what is the origin of all things (Gr. pánta)? And, immediately thereafter: from what are all things made? The solution given by the pre-Socratic thinkers in their search for a unifying principle (arché) will not be exempt from pantheistic currents. These thinkers tended to state that there is one sole substance at the origin of everything, whether it was the water of Thales, the unlimited (ápeiron) of Anaximander, or the fire of Heraclitus. Certainly, a distinction between the particular beings and Being as such is not missing, as in Parmenides and in Heraclitus, but Being is never taken as something different from individual entities; it is rather their very substance. For example, already for Thales, the observation that all bodies were substantially of water ended in the assertion that all things were actually full of gods. For Parmenides, nature (physis) is Being itself, the All: beyond the All not a thing exists, because the All is being, and beyond being there is nothing. Although the possibility of thinking of something else is not excluded by the thinkers prior to Parmenides, nor by the great texts of Eastern wisdom even as they speak of the All and the Totality of things, this was no longer possible after Parmenides. In Parmenidean philosophy, through the use of logic one arrives at the being of All by shaking off all particular beings, which become only appearances. Not knowing how to explain their relation to reality as Being, particular beings end up losing their individuality and autonomy.

Plato and Aristotle will pave the way to secure the separation and consistency of particular beings. Affirming the transcendence of the Forms or Ideas with respect to the material world, and the transcendence of the Good with respect to the Ideas themselves, Plato emphasizes that particular beings are becoming and changeable, whereas the Forms are immutable and eternal. At the summit of all the Ideas, the Idea of the Good (significantly also called the One) acts upon the unlimited and chaotic multiplicity as a limiting and determining principle. While Parmenides and his followers claim that Being is simultaneously both the Divine that dominates the things and the totality of the things governed, Plato holds instead that there exists a certain separation between the Divine and the things. Thanks to the work of the Demiurge, the primal matter is reduced to order (Gr. kósmos) and organized in a space (Gr. chôra) that becomes a kind of depository, notwithstanding the fact that matter was pre-existent to such an ordering action. But even in Plato, the world forged and diversified by the Demiurge does not cease to be a great living organism endowed with its soul (cf. Timaeus, 30b; also cf. Philebus, 30a-c).

A second way to attribute individual consistency and cohesion to things comes from Aristotle. He derived his result, thanks above all to the principle of the analogy of being and to the distinction between the divine way of being (which is a pure act without any trace of potentiality) and the way of being of particular realities (which instead are a composition of act and potency). Each being possesses a proper essence and a proper metaphysical nature, which guarantee its autonomy and independence and are intrinsic principles of the specificity of its being and operation. Since the beings that manifest themselves to our experience are generally beings subject to becoming, Aristotle asks himself the metaphysical question par excellence: whether beyond them there exist immutable and eternal beings. To this question, both the books of the Physics and of the Metaphysics will give a positive response, to the point of concluding that an Immutable Being must exist, whose supreme life is the knowledge of himself. In other words, God and the world are not the same thing, but from the world one can arrive at knowledge of God.

Thus, a dualistic gnoseological vision of reality is sketched out, in which one recognizes a sensible part and an intelligible one. The distinction, emblematically laid out in the second sailing undertaken by Plato when he broke with the preceding philosophical tradition (cf. Phaedo, 99d-101d), is of interest to Aristotle as well. For Aristotle, the intelligibility of reality is reached by lifting oneself above sensible nature, although this intelligibility does not belong, as Plato proposed, to the world of ideas, but is inherent to the essence of things. Despite this differentiation, pantheistic tendencies will persist in Greek thought, either through the reduction of the immateriality of the intellect or the divine life to the nature of matter itself (materialistic pantheism), or through the spiritualization of the world and its re-absorption in the sphere of the divine (panpsychist pantheism).

3. Plotinian Pantheism. Beyond the pantheism of a materialistic bent professed by the stoic philosophers (see the article Materialism), who held that the divinity consisted of very fine matter animating the great organism of the cosmos (considered to be the body of God), it was the emanatistic pantheism of Plotinus (205-270), the greatest proponent of the so-called Neo-platonism, that in an age already Christian would exercise the greatest influence on later thought. The main elements of Plotinian pantheism are the derivation of the world from the One as a necessary emanation and expansion of the substance of the One's own being, and the role of the Soul, the third hypostasis of the Plotinian Divinity. This cosmic soul of the world vivifies and binds in harmonic sympathy all the things of the universe, preserving them from matter's tendency to disperse and dissolve. The tight correspondence between the cosmic soul and the human soul, and the proportional relations between macrocosm and microcosm deriving therefrom, will furnish the elements for many animistic and vitalistic ideas that, passing through the Middle Ages and the Renaissance, will reach all the way to the Age of Romanticism.

Plotinus held that, if matter were totally independent from God, and thus coeternal and co-existent with Him, then something would be lacking to God and one would fall into a clear contradiction. God would no longer be pure act as Aristotle understood him to be, but only being in potential and therefore not immutable, but becoming (cf. Enneads, III, 7,5). Since, for motives stemming from an ontological foundation, one cannot deny the existence of an immutable being, it is then necessary that God be the producer of matter in some way. If, then, the substance of the world was not co-eternal and co-original with the Divine, as Plato and Aristotle had held (for whom the one could not exist without the other), it was necessary that matter be produced by God, which according to Plotinus happens by emanation.

At the summit of the Plotinian system, there is the One that wishes and determines itself to be exactly as it is. Perfectly simple and infinite, the One transcends every separation, containing in itself all things and therefore able to bring them into existence. Hence in the Plotinian One, there exist two activities: one that consists in positing its own essence, and one that brings about the emanation of all things from the One. The processes of this emanation lead in the first place to the formation of the Noûs, or Intellect, from which, also by emanation, comes the Soul. This third hypostasis after the One and the Intellect is the last reality of the incorporeal world proceeding from the One, and in its turn is the generator of matter, or rather of the first reality of the corporeal world. Matter is a simple receptacle in which the forms and beings of the world transform themselves and deteriorate. In the Plotinian idea, matter is the last stage of the emanative process from the One, the product of the One's total emptying out and therefore of its maximum privation. But because the One is identified with the Good, matter is seen as privation, as something negative, although without going so far as to identify it with evil, as happens in the radical dualism of the gnostics. In giving origin to the world, however, the One of Plotinus doesn't turn towards what it produces: the world does not arise from a free act (and therefore neither is it loved as such); rather it is produced out of the necessity of the superabundance of the One itself, just as it belongs to the nature of light to diffuse itself and illuminate all things. Although the world comes about after the generation of the Intellect and the Soul, it is in continuity with the sphere of the Divine and is part of its substance.

4. Creation Out of Nothing: God Participates Being to Creatures. Although the pantheism of the ancient world is also representative of a philosophical journey, in the first place it expresses a religious idea, the core of which, especially in the Eastern matrix, is that the sacredness of nature is seen as divinity. Opposed to such a perspective is the doctrine of creation ex nihilo by the One God, present in the sacred texts of the Hebrew tradition and which the Christian religion enriched in the following centuries with a specific philosophical-theological depth. One of the first systematic approaches appeared in the first centuries of the Christian era due to the critique of gnosticism. In a dialogue with Platonic philosophy, Theophilus of Antioch (ca. 120-185) traces out a quick sketch: "All things God has made out of things that were not [...]. But Plato and those of his school acknowledge indeed that God is uncreated, and the Father and Maker of all things; but then they maintain that matter as well as God is uncreated, and aver that it is coeval with God. But if God is uncreated and matter is uncreated, God is no longer, according to the Platonists, the Creator of all things, nor, so far as their opinions hold, is the monarchy of God established. And further, as God, because He is uncreated, is also unalterable; so if matter, too, were uncreated, it also would be unalterable, and equal to God; for that which is created is mutable and alterable, but that which is uncreated is immutable and unalterable. And what great thing is it if God made the world out of existent materials? For even a human artist, when he gets material from some one, makes of it what he pleases. But the power of God is manifested in this, that out of things that are not He makes whatever He pleases" (Ad Autolycum, I, 4, and II, 4). The doctrine of creation out of nothing and therefore of the clear separation between God and the world also appears in other Christian authors: Origen, Athanasius, Hippolytus of Rome, and, explicitly, in the Shepherd of Hermas, a work of the 2nd century: "You must believe that there is one sole God that created and brought to completion everything and who made from nothing that which exists" (Mandata, I, 1).

Thanks above all to Augustine of Hippo (354-430), creation out of nothing and the metaphysical irreducibility between God and the world are formulated upon rigorous philosophical foundations, especially in the context of the problem of the nature of time. Our rational soul bears witness to the perception we have of passing and of remaining (cf. Confessiones, XI, 20). But what holds for our soul must hold also for the whole cosmos, which must therefore be understood in temporal terms. Creation can only be conceived by us as a succession of temporal events. The world, affirms St. Augustine, was created with time and not in time (cf. Confessiones, XII, 29,40; De civitate Dei, XI, 6), proof of an ontological separation between God and creation. The Eternal God, Being and the Supreme Good (cf. De natura boni, 19; De Trinitate, VIII, 3,4), is contrasted to the temporal things created by Him, and is therefore fully distinct from them. Given his Neo-platonic formation, Augustine presupposes that creatures participate in Being and the Good by means of creation. He keeps a necessary distance from the Neo-platonic doctrine of emanation, however, and does not hold that such participation takes place as a necessary flowing out, as if God could not exist without a created world. His criticism of the pantheistic vision of those who worship nature, holding her to be the soul or the body of God, is explicit; after having upbraided those who worship the Soul or the Intellect, considered by the Neo-platonists to be the first creatures of God, or the various elements of creation, above all the celestial bodies, Augustine adds: "But those think themselves most religious who worship the whole created universe, that is, the world with all that is in it, and the life which inspires and animates it, which some believe to be corporeal, others incorporeal. The whole of this together they think to be one great God, of whom all things are parts. They have not known the author and maker of the universe. So they abandon themselves to idols, and, forsaking the works of God, they are immersed in the works of their own hands, all of them visible things." (St. Augustine, De vera religione, 37, 68).

Thomas Aquinas (1224-1274) sustained the same anti-dualistic and anti-pantheistic line. In his thought, the Augustinian doctrine and the Aristotelian reflection flow together. For Aquinas, God is the Being simply speaking, i.e., Being simpliciter, He who exists for Himself, but is also one who communicates and gives himself to the world. Creating everything, God communicates himself and his own Being with sovereign liberty. Created beings are essentially different from their Creator, but they are similar to him in a certain sense, given that they participate, each according to its own level of existence, in the being they receive from God. Aquinas will explain this thought many times in light of the doctrine of participation of being (cf. Summa theologiae, I, qq. 44-45). The creature participates in the being that God possesses in fullness, taking part in it without being a part of it. If one predicates of God the being of everything, it is not because he is the essential constituent of everything but because he is the root cause of whatever exists. Concerning the well-known idea that God is present in all things by power, presence, and essence, Thomas will specify that God is in all things by His essence, inasmuch as He is present to all as the cause of their being, adding that "God is said to be in all things by essence, not indeed by the essence of the things themselves, as if He were of their essence; but by His own essence, because His substance is present to all things as the cause of their being" (Summa Theologiae, I, q. 8, a. 3; cf. In I Sent., d. 8, q. 1, a. 2). Thanks to his doctrine of the absolute simplicity of God, Thomas cuts every form of pantheism off at the roots, refuting in particular those forms present in the teachings of other philosophers of the same epoch, as, for example, in the materialism of David of Dinant (cf. Summa theologiae, I, q. 3, aa. 6-7 and a. 8, ad 3um), which will later inspire in part the pantheism of Giordano Bruno.

As a theory about nature, pantheism is principally tied to Neo-platonic thought. In an historic era, such as the Renaissance, that saw the rediscovery of this philosophical current, there was in fact also a rebirth of pantheism. In the naturalistic-scientific context of the Renaissance, the study of natural phenomena did not yet enjoy an autonomous and rigorous method, and pantheistic thought manifested itself in the tendency to conceive God as the universal animation of nature. Two authors stand out from the others in this new current: Giordano Bruno and Tommaso Campanella. The thinkers of the Renaissance saw nature as a living organism, whose parts are reciprocally dependent upon one another, as a succession of phenomena that move toward their proper end, impelled by some interior principle. This idea shapes all the philosophy of nature of the 16th century after Agrippa of Nettesheim (cf. De occulta philosophia, 1510), who held that one can think of the universe only if it is endowed with an independent soul. The revival of this idea of anima mundi, which will prepare the way for the establishment of a new concept of nature, will also be the gateway for new forms of pantheism.

1. Tommaso Campanella and Giordano Bruno. In the line of this progression stands the thought of Tommaso Campanella (1568-1639), whose philosophy was inspired by the physics of Bernard Telesio (1509-1588) even if he ultimately distanced himself from it. The suggestion of a singular empathetic relation between human beings and the world emerges already in Campanella's philosophy of knowledge. He holds that the sole way of knowing is a kind of direct contact with all things. Our ability to comprehend and understand reality is realized in a sensible way, through sapere (to know, to taste): the subject of this act makes his own the flavor of things, tasting or savoring them. The idea of sensible knowledge here is not that of empiricism, which remains extrinsic in character; it is, rather, a kind of intrinsic operation, a participation in the thing and in that innermost part of the thing, which is the same expressive process of God, the acting of the divine, the Being which conforms with Power and Love. "It is not a seeing or a glancing, reproducing images, but a penetrating of the vital process of all, in short, a tasting of the sweetness of universal life" (E. Garin, L'Umanesimo italiano, Bari 1965, p. 249). Consistently, Campanella affirms that everything possesses a sensibility and is the subject of a certain knowledge, even if confused, of itself and of the external world, which permits it to love other beings and to remain in harmony with them through a universal empathy. An advocate of a characteristic magical animism, Campanella holds that the world is sustained, above and beyond this sensibility of individual things, by its own proper Soul, an instrument by which God directs all operations (cf. De sensu rerum et magia, II, 32). The task of this Soul of the world is precisely to determine the agreement, the concord, among all things and to dispose them towards a single end. Even if he views the world in a pantheistic perspective, Campanella doesn't wish thus to deny a final cause: he actually assigns it primacy over all others. As paradoxical as it might seem, the system of Campanella was intended to remain theological and to seek in nature a demonstration of the presence and action of God.

The pantheistic position of the philosophy of Giordano Bruno (1548-1600) is quite complex. His scheme is similar to that of Plotinus, acknowledging a universal Intellect, but understood from a totally immanent standpoint; a Soul of the world, which gives form to everything; and Matter, which acts as a receptacle for these forms. Since the Aristotelian philosophy of composition from matter and form, and of potency and act, had been supplanted according to the sensibility of the late 1500s and the harbingers of what would slowly become the new physics, it is precisely the notion of form that Bruno tries to re-read in a new light. In Bruno's thought, the forms more and more resemble vital principles, changeable and changing, while matter is no longer seen as the element that is indeterminate and that limits, but rather as a living potentiality. According to Bruno, the finite being cannot simultaneously be all that it is in its nature and its essence, but since it has within itself the force and the seed of all its future forms, it is in this respect infinite. Ascribing the capacity of infinity to every finite being, Bruno intends to propose a new philosophical conception of matter, which would no longer receive its form from without, but rather from an innate and interior force. It is not the form that embraces and constrains matter; rather it is matter itself that expands and develops itself in ever new forms. Matter therefore is not solely a relation between potency and act, but a living seed that develops itself in all things. The true reality owes nothing, says Bruno, to any of the forms in particular; it owes nothing to anything. It is simply of an essence to unify in itself the unlimited multiplicity of all measures, all figures, and all dimensions. This concept of reality is the overthrowing of the Aristotelian concept of individual substance, bound by the limitations of space and time, a substance that cannot encompass within itself the totality of the possible manifestations of being (cf. De immenso et innumerabilibus, Book VIII).

Indeed, the pantheistic vision of Bruno is most evident in his theory regarding the relationship between infinity and finitude. For Bruno, the infinity of the effect, i.e., of the universe, derives from the infinity of the First Cause. The world, indeed an infinity of worlds, all proceed ab aeterno from God. The universe is one, infinite, and immobile; its life is the divine life, because in all its parts, it is an effusion of God's life. God and the universe are not infinite in the same way, inasmuch as God is all-infinite and totally infinite (i.e., He is totally present in every part of the infinite whole as its cause), while the universe is all-infinite but not totally infinite (because it is not wholly present in each of its infinite parts). However, nothing impedes the finite from being able to incorporate into itself what is proper to the infinite: the attributes of Bruno's universe end up coinciding with those of Parmenidean Being to the point that they are confounded with the attributes of God.

In addition, the pantheism of Bruno manifests a gnoseological motivation. While in Galileo, it is the universality of the language of mathematics to which the whole universe is subject without any limitation, in Bruno it is the universality of substance, whose unity and eternity are not apprehended by the senses, but by the Intellect. This process is not synonymous with abstraction or immateriality, however, because intelligibility no longer belongs principally to act or form, since form and matter are simply two aspects of the same substance, nature.

2. The Pantheism of Spinoza. Baruch Spinoza (1632-1677) sought to restore the unity of being that had been shattered by René Descartes. By pursuing an extreme synthesis between metaphysical-theological thought and geometric-scientific thought, Spinoza set up a single substance in opposition to the three substances of the French philosopher, the res cogitans, the res extensa, and the res divina (the last being God as the foundation of knowledge). The res cogitans and the res extensa are two of the infinite attributes of the unique substance that, in Spinoza, indicates no other reality than God himself. Although ideas and things present themselves as singular concrete realities, they are instead modes of this substance, such that its universal character grounds their intelligibility. Nothing can exist outside of God except as a mode or attribute of God. God becomes the source and the reality of all reality; He alone is that unity (in the Neo-platonic sense) capable of guaranteeing all multiplicity (cf. Ethica, Book I).

The desire to allow any other form of becoming is the desire to imagine another God (cf. Ethica, I, 33). From this perspective, the philosophy of nature becomes identical to metaphysics. The famous equation of Spinoza of Deus sive natura (God, or nature) finds here its perfect setting. Contingency having been denied, nature assumes the character of divine necessity, a key element of Spinoza's entire system. Such necessity in nature does not regard the infinite sum of all singular things, but rather the necessity binding one to another. God is the substance with all its infinite attributes; the world is the sum total of all the modes, finite and infinite, of the being of the substance (that is, ultimately, of God). Thus, in this universe there is no room for contingency: all becomes a necessary consequence of the necessity of God. However, the pantheism of Spinoza will acknowledge a distinction, at least a weak one, between God and the world: if the first has free necessity, the second possesses a determined necessity; if the first is the subject of infinite attributes, the second is the expression of these within the infinite modes of being of the substance; if the first is natura naturans, the second is natura naturata. However, this distinction is not a metaphysical detachment: nature is an effect present in the cause and contained within it, according to the principle that all is in God.

If Spinoza's thought is more inclined to divinize matter, due to its Cartesian point of reference and a certain dialogue with scientific thought, the pantheism deriving from the German idealism of Hegel, Fichte, and Schelling finds itself in a historic and romantic ambience and proceeds instead to divinize form as the mode of being par excellence. Spinoza remains one of the philosophers most often cited by scientists, although he is often known only superficially by them. Among others, Albert Einstein has left us explicit references to the God of Spinoza (cf. The World as I See It, 1935).

3. The Absolute Space of Isaac Newton. The concept of the relation between God and nature proposed by Isaac Newton (1642-1727) has been interpreted in various ways, oscillating between a judgment of deism and one of pantheism. When he explicitly deals with the problem of God, especially in his works Philosophiae Naturalis Principia Mathematica and Opticks, the English scientist speaks more often of a God who orders and organizes than of a God who creates. Often calling Him the Lord of all (Gr. Pantocrator), Newton, in an effort to explain the meaning of this attribute, affirms that God-Lord is a relative term that makes reference to a servile subjection: "Deity is the dominion of God, not over his own body, as those imagine who fancy God to be the soul of the world, but over servants." Newton continues, "The word God usually signifies Lord; but every lord is not a God. It is the dominion of a spiritual being which constitutes a God [...]. He is Eternal and Infinite, Omnipotent and Omniscient; that is, his duration reaches from Eternity to Eternity; his presence from Infinity to Infinity; he governs all things, and knows all things that are or can be done. He is not Eternity or Infinity, but Eternal and Infinite; he is not Duration or Space, but he endures and is present. He endures forever, and is every where present; and by existing always and every where, he constitutes Duration and Space" (I. Newton, The Mathematical Principles of Natural Philosophy, trans. by A. Motte [London: Dawsons, 1968], pp. 389-390). The God of Newton appears distinct from His creation, which He sustains and justifies without being identified with it.

Setting aside the problem of what type of theism is represented in Newton's thought and what conformity there is between his notion of God and the biblical image of God presented by Revelation, one needs to keep in mind that several authors have sensed a certain kind of pantheism in Newton's idea of absolute space: the space of Newtonian physics is then understood as an attribute or an extension of God (cf. Jammer, 1969). In this interpretation, absolute space and time would ultimately be the sensorium of God (God's omnipresence in space through his Spirit), thus establishing an equivalence between what is absolute and the divine. Nevertheless, as Max Jammer specifies, it seems that Newton was aware of how easily he could be misunderstood and placed among the pantheists of his times, who were identified with the atheists by orthodox circles. Besides, Newton used the word sensorium merely to make a comparison, and he did not identify space as an organ of God.

Moreover, and to be precise, the absolute space of Newton is not the mere extension proposed by Descartes, who identified it with bodies. Newton's space is something distinct from matter, and its principal function is to explain the attraction of bodies at a distance. In a letter to Richard Bentley (1692), Newton recognizes that gravity must be caused by an agent which acts constantly in accord with determined laws, but at the same time he does not know exactly what it is, preferring to leave it to the judgment of "my readers" to establish whether this agent is material or immaterial. One can also hold that the God of Newton occupies the space of the world to ensure that in it the laws of physics are accomplished in an absolute way; but God's presence in space (the notion of field being yet unknown) seems mainly to serve the function of justifying the transmission of an action at a distance, as was the case with gravitation.

In regard to various concepts of nature, philosophy and the sciences have exerted reciprocal influences on each other. Scientific discoveries have many times stimulated philosophical thought, as in the cases of heliocentrism, the discovery of the quantum nature of radiation, the theories of special and general relativity, and the uncertainty principle. But in its turn, philosophy has furnished scientific discourse with conceptual categories and systematic outlines. In general, this second implication is verified more easily in those fields of science where rapid theoretical developments and less dependence upon observations leave greater space for speculation. Hence, rather than search for the direct influences of philosophical pantheism upon scientific formulations, it appears more appropriate to consider those visions of nature expressed in some scientific speculations that seem to be debtors, in an implicit and indirect way, to certain philosophical ideas about the Absolute and its relation to the world.

The two classical directions taken by philosophical pantheism throughout history (the spiritualistic or panpsychist one, in which nature is seen as a great spirit that animates reality, the anima mundi, and the materialistic one, in which nature is identified with matter and ends up manifesting the same attributes as the Absolute or divine) are also present in certain concepts of nature diffused by contemporary science. I would suggest three major fields in which such pantheistic traits seem to emerge: cosmic neo-vitalism (to which belong the program of the Gnosis of Princeton and other forms of mysticism of physics), the idea of a cosmic code as an answer to the question of the intelligibility of reality, and, finally, some interpretations of the Anthropic principle. The first can be traced back to a spiritualistic perspective, while the second and third are more in tune with the materialistic perspective. A further example of interaction between philosophical and scientific views is the idea, supported by some scientists, of the world as all in God, with the corresponding involvement of God himself in the world, a vision at times described as panentheism.

1. The Gnosis of Princeton and Cosmic Neo-Vitalism. Today, a number of scientists are searching for new interpretations of reality following two intuitions above all: a) to overcome systems of logic that are considered typical of a western culture, generally based upon the principles of identity and of non-contradiction, upon dialectic opposition and the irreducibility of the contraries; new logical systems are thus invoked, often derived from oriental philosophies and more open to the composition of contraries, to the transformation of identity, and to the possibility of new syntheses; b) to favor relation and interaction as the keys for understanding the properties of the individual, which in reality ultimately cease to be the properties of the individual and become solely the properties of the whole, whose global logic (or even life) determines the behavior and the meaning of the parts. Having been developed in a bizarre and creative way starting from the theory of games and then the theory of paradoxes, these forms of thought, called by the end of the 1960s "The Gnosis of Princeton" or even "The New Gnosis" by their adversaries at Princeton and Pasadena, found a first application in the field of quantum mechanics. With an attitude fluctuating between reserve and secrecy, they rapidly expanded into the fields of biology and cosmology, generating criticism on the part of official science (cf. Ruyer, 1974).

The fundamental thesis of the New Gnosis is similar to that of all gnostic systems: the world is dominated by the Spirit, to which matter is counterposed; but according to the Gnostics of Princeton, the Spirit doesn't find opposition in matter, inasmuch as it is seen instead as the Spirit's creature. Material bodies are seen as the by-product of the Spirit; they are the stuff that allows the Spirit, united in all the parts of the cosmos, to be contained. The universe is formed neither of material entities, nor of energies, but rather of domains of consciousness. The universe consists of the forms conscious of themselves and of the interactions that establish themselves between these forms, thanks to their mutual information. The true pieces of information would be those present in the interior consciousness of every being. Scientific observation gathers solely the reverse side of this information, i.e., its bodily and material dimension, but not its right side, represented by the spiritual and relational dimension. The New Gnosis intends to overcome this conceptual barrier, in which conventional science remains ensnared, to gain access to the innermost, relational dimension of the object; but to accomplish this goal, it is necessary to recognize that every object observed has its own life and its own consciousness. Some entities, such as human beings and animals, would be able to communicate this consciousness of theirs, this hidden right side, while other beings would not, although they also possess their own innermost dimension, their proper right side. Except for artificial entities and entities that appear by chance, individual elements and even the individual elementary particles possess a conscious dimension. The real world is generated by all these infinite processes and relations; ignoring them would result in not knowing the world in its profundity, because one would be ignoring the soul.

The vision of a cosmic neo-vitalism, which one may encounter well beyond the confines of the Gnostics of Princeton, thus becomes more and more distinct: self-regulation, coordination, and homoeostasis of complex material systems such as the Earth are now seen as manifestations of a true life (cf. Lovelock, 2000). Not only would the various material elements and the biosphere have their own life, but the whole universe would definitely possess the personality of a living being, capable of constructing its own history (cf. Smolin, 1997). According to some authors, the progressive and irreversible growth of information in the world assumes the role of a Soul or of a cosmic Spirit, to whom is entrusted the task not only of regulating the processes of matter (by transcending it or at least by uncoupling itself from matter), but also of guiding the entire evolution of the universe towards immortality (cf. Tipler, 1994).

It is difficult to formulate a comprehensive and considered judgment about such visions of nature due to the heterogeneity (and at times the naïveté) of the different proposals. There is another factor not to be undervalued: these proposals arise from an exigency of a post-modern surmounting of some forms of modern rationality, such as reductionism and materialism, understood today to be inadequate. But the search for new philosophical paradigms is not devoid of a certain ambiguity. Some authors indicate that the greater importance given to the creative evolution of complex systems is the necessary overcoming of a theistic vision that will finally free us from the idea of a God who rules the fortunes of the world (cf. Smolin, 1997); yet others hold it to be in accord with the existence of purpose in the universe, and therefore with the idea of a Creator (cf. Davies 1987 and 1992). Rather than judging between these two positions, it is much easier to point out the fact that evident connections exist between many neo-vitalistic concepts and some characteristic elements of the New Age, of which the Gnosis of Princeton almost appears to be a faithful application in the scientific field. There is also an implicit relationship among hermetism, Renaissance vitalism (see above, II.1) and modern ideas of emergence or creativity in nature. These paradigms appear and disappear in various forms in scientific reflections, perhaps to demonstrate that they contain meaningful insights, which modern positivistic rationality had thought it could elude in perhaps overly simplistic ways. Nonetheless, it must be said that the overcoming of reductionism and other analytical methodologies in favor of new visions of nature characterized by a synthetic and holistic approach, one more attentive to the relational properties and to the synthesis and harmony of the whole, ought not to lead to a refutation of a foundational logic or of a first philosophy that ultimately unites both Western and Eastern philosophical traditions. These two traditions should not be seen in an antagonistic way; otherwise philosophy itself would suffer a dangerous loss of identity, as would, in the ultimate analysis, the universal communicability of scientific thought.

2. The Cosmic Code and Immanent Evolution towards the Emergence of Consciousness. Besides those who speak of a cosmic Soul, various contemporary authors refer to a cosmic mind. Here there is a restructuring that unknowingly follows closely the role of the two Plotinian hypostases generated by the One (see above, I.3) and that, also unknowingly, maintains a constant bond with the two classical ways of seeing the divine in the world: as Intellect and as Spirit. The reference to a cosmic mind or even to a cosmic code is usually presented in the context of the question about the intelligibility and rationality of the laws of nature, the question about the reason for the harmony among the various parts of the universe and, finally, the discussion about the delicate coordination (fine-tuning) among the many parameters and numerical constants that determine the structure and evolution of the cosmos.

If on the scientific level such observation is completely licit and finds its place within a Pythagorean tradition, one associated with every mathematical reading of nature that sees the world as an expression of harmony and order (Gr. kósmos), on the philosophical level and from a realistic perspective, it can follow two diverse approaches. (Observe, by the way, that from an idealistic perspective the idea of cosmic order would be described as solely an apparent order, something imposed by the mental categories of the subject.) The first realist philosophical approach is to hold that such rationality is a reflection on the objective level of the world and its phenomena of an ordering Intelligence that, transcending the universe, possesses it and unfolds the entire cosmic project. Even if subjected to diverse developments (one can indeed arrive and stop at a kind of God, typical of deism, who is both architect and impersonal, or one can remain open to the Revelation of a God who is the source of the order, but also personal and salvific), such a perspective points towards something beyond the universe itself. In other words, the observation of the hallmarks of intelligence in the world leads us to ask for their cause.

The second approach, although it also recognizes the existence of an order and of a delicate fine-tuning in the structure of the world, does not think it necessary to invoke some Intelligence that transcends the universe, but understands these simply as the self-expression of a necessary and immanent law embedded within the very conditions of the existence of the world as such. The universe is nothing more than its laws or its project: if there is any reference to a notion of God or of the divine, this notion ends up being identified with the laws of nature, which in their turn are identified with the universe itself. The mind of God is substantially the mind of the universe. In this case, the cosmic Intellect is nothing but the modern expression for a materialistic pantheism in which there is the theoretical possibility of recognizing a kind of lógos, but one which in practical terms belongs to matter itself. Thus, the weak separation between matter and spirit, or between matter and reason, to which the idea of a cosmic code seems to lead, now completely disappears into the rule of a strict materialistic monism.

Something similar takes place in regard to the possible interpretations of the Anthropic principle. The scientific data demonstrating that the delicate conditions for the existence of the cosmos as it is are also the same that allow it to be suitable for life can be understood philosophically in two different ways. They could be taken as consonant with the idea that there exists an original project for the world, a project that is also distinct from the world, whose final purpose is the creation of the necessary conditions to allow intelligent observers to appear in the universe. On the other hand, the same data could be interpreted to demonstrate that cosmic and biological evolution has within itself a fully immanent code (similar to DNA in its own significance for the development of a living organism), the program of which consists in structuring the universe by leading it to the appearance of intelligent (human) beings, so that the universe might finally arrive at being conscious of itself. In the first case, the universe would have been made for humans, in the second, humans for the universe... In this second case, we stand before a new pantheistic vision of a materialistic nature: consciousness, particularly that of human beings, would be the necessary and sufficient end result of the evolution of matter, and, from the moment of the appearance of intelligence, matter would come to be totally permeated and encompassed by it.

From a conceptual point of view, I would suggest that the affirmation of a materialistic pantheism bound to the presence of a cosmic mind or code presents an inherent contradiction: it dissolves the question about the cause of the intelligibility of reality, i.e. the very question that gave rise to our reflection on the existence of such a mind or of a code. Intelligibility becomes the simple fruit of evolution and no longer the possibility to rise above it by posing problems and asking questions. In addition, there exist arguments confirming the fact that the intelligibility of nature is difficult to ascribe to the laws of natural selection or of cosmic evolution. The issue shows a certain astonishing analogy with the irreducibility of the relationship between mind and body.

Concerning the second interpretation of the Anthropic principle, which says that the appearance of human beings is the necessary product of a deterministic self-expression of the universe, one needs to keep in mind that the scientific observations that are the foundation for the reflections about the fine-tuning between the universe and life regard necessary but not sufficient conditions: to allow life to flourish, it is necessary that the universe is just as it is, but being just as it is, is not sufficient to bring about in a deterministic way the existence and flourishing of life, the originating causes of which are, at least up until this moment, still unknown.

3. The Proposal of Panentheism. The ontological and dynamic consistency that scientific thought tends to grant to nature in its relationship with its possible Creator leads at times to the hypothesis of a kind of feedback of nature to God. One thus arrives at the idea of a more active role for the world, and therefore of a certain active polarity between God and the world, postulating a participation of God in the dynamic of nature and its processes. In broad strokes, one may say this leads to seeing the world as a part of God, more precisely as a whole within God, thus giving rise to the term panentheism. This idea, probably already present in the philosophy of Heraclitus (ca. 550 - ca. 480 B.C.) and later favored by the thought of Spinoza (see above, II.2), by the idealism of G. Hegel (1770-1831), and by the cosmic evolution of H. Spencer (1820-1903), was implicitly revived recently in some forms of thought inspired by A.N. Whitehead (1861-1947) and his philosophy of process.

Panentheism sees the world as inserted into the nature of God, into his being and his life. God maintains a certain priority over the world, as the whole does over its parts, but he cannot avoid depending upon it to a certain degree. The divine perfections and the other divine attributes grow together with the world; God must take into account the properties, the potentialities, and the processes of the world if he wishes to bring to fulfillment not only his creative project, but also what is lacking to his own fulfillment. The idea of creation from nothing and the sovereignty of God over all things certainly become obscured here, and God's relationship to the universe resembles more that of a pilot commanding a ship in a storm than that of a transcendent Creator. God's creativity would then depend on the world's creativity and emergence, whose future developments and results would be unknown to God himself. Panentheism denies neither the personal nature of God nor human freedom (a freedom shared in some way by the whole universe), but the theological understanding of the perfection of God's freedom is nonetheless profoundly modified. While favorable to a revision of the philosophical idea of the immutability of God, panentheism lacks the theological tools necessary to address or redirect the implications of this idea for the mystery of the Incarnation (or for some kind of theologia crucis within it) or for the biblical meaning of such notions as divine mercy and fidelity. Instead, it seeks an easy solution on the merely physical level, one attractive for the protagonist's role it assigns to the evolution of the cosmos and its laws, but damaging to the image of God and ultimately also to a right understanding of the reality of the universe itself. If the legitimate desire to grant nature its own autonomy and allow it to participate in some way in the divine attributes is pursued in disregard of any metaphysics of being (the dynamic significance of which the proponents of panentheism often ignore) and without affirming a true creation ex nihilo, this way of thinking leads to the idea of a world that grows together with God and ends up, sooner or later, replacing Him.

By affirming a substantial, and therefore metaphysical, separation between God and the world, Judeo-Christian Revelation does not take away the sacredness of nature. That is, there exists much room for a sacred vision of nature, though not a religion of nature. Nature is not the ultimate source of its sacredness, because it is only a reflection of the holiness and beauty of God. In many texts of Genesis, one reads how God blesses his creation (cf. Gen 1:22-28; 8:17; 9:1 etc.), a creation many times recognized as a good thing, seen also as beautiful, a creation which the original sin of human beings has in part disfigured by distorting its primal harmony, but which the salvation worked by Jesus Christ will recapitulate and reconcile in a new salvific economy. The created world is called to participate in this economy, the first fruits of which are now present and at hand in the historic and meta-temporal event of His resurrection from the dead (cf. Rm 8:19-22).

As has already been seen, the separation between the God of Israel and the world, and His complete ontological diversity expressed in the doctrine of creation from nothing (see above, I.4), do not prevent the created world from being similar to God, nor God from being present in His creatures. Here there is a true philosophical novelty thanks to the biblical significance of the notions of immanence and transcendence, which are richer than those used by philosophical thought. Among Christian thinkers, Thomas Aquinas developed a metaphysics capable of taking advantage of the mutual immanence and transcendence of God, especially in the formulation of the doctrine of participation and of the intensity of the act of being (cf. Summa Theologiae, I, qq. 44-47). The theological perspective of the relationship between God and the world was summed up by John Paul II in one of his catecheses on creation: As Creator, God is in a certain sense outside of created being and what is created is outside of God. At the same time the creature fully and completely owes to God its own existence (its being what it is), because the creature has its origin fully and completely from the power of God. Through this creative power (omnipotence) God is in the creature and the creature is in him. However, this divine immanence in no way diminishes God's transcendence in regard to everything to which he gives existence. (John Paul II, General Audience, January 15, 1986, n. 6)

The Christian universe is similar to its Creator not because it is a necessary emanation (Plotinus), nor because a Demiurge took its forms from the world of Ideas to later reproduce them in the world of nature (Plato). As opposed to the hypostases of Plotinus and the Demiurge of Plato, the Word and the Holy Spirit are not the hinge between the divine and the earthly, nor are they the first creatures of the One. The logic with which the Christian Trinity gives origin to the world resides totally in the liberty of its immanent life. Here the philosophical opposition between necessity and freedom is overcome in the mystery of a personal communion, of the free gift of three distinct Persons made possible by the necessary identity of the same divine nature. Having willed all things in His Word and through His Word, but also for Love and with Love, God gives origin to a world in which the logic of the Trinitarian processions acts as the exemplary causality, without any of the Divine Persons entering into composition with creatures. The processions of the divine Persons (generation and spiration) are seen by St. Thomas as the cause and reason (Lat. ratio) of the creation of creatures, or said more precisely: "The procession of the Persons in the unity of their essence is the cause of the procession of creatures in the diversity of their essence" (Exitus Personarum in unitate essentiae est causa exitus creaturarum in essentiae diversitate) (In I Sent., d. 2, divisio textus; cf. ibidem, d. 14, q. 2, a. 2).

The fact that the universe is called to be completely renewed in Christ through the recapitulation the Son mysteriously brings about, or the fact that the Spirit continually vivifies creation, sanctifying it, is never expressed in Sacred Scripture in a way that would lead one to think of the world as the body of God or of the Spirit as its soul. The teaching of the Church has many times clarified the content of faith in regard to possible pantheistic misinterpretations. Thus the thesis, attributed to Peter Abelard (1079-1142), that the Holy Spirit was the soul of the world (cf. DH 722), was condemned; and likewise the theses, attributed to Meister Eckhart (ca. 1260-1327), that God created the world at the same time as the generation of the Son, and that the human soul possessed something uncreated and in common with the divine intellect (cf. DH 953, 977). A more organic and complete clarification regarding pantheism came with the First Vatican Council (1870). Without making explicit reference to specific authors, the Council censured pantheistic visions of the emanatistic type (Plotinus), the substantial (Spinoza), the essential (Schelling), and the universal or indefinite (Hegel). The philosophical causes (ontological, psychological, and ethical) underlying these erroneous understandings of creation were also pointed out and denounced: the denial of the distinction between Creator and creature, the denial of the liberty of God, and disregard for the true end of creation itself (cf. DH 3024-3025).

Christianity is honestly concerned with the necessity, particularly urgent in more recent decades, to rediscover a better harmony between humankind and nature, for this belongs to the biblical message and to its tradition of thought. We certainly stand in front of a trend felt in many areas of contemporary culture, urged not only by the new scientific epistemologies, but also by the now inevitable worldwide context of social, economic, and technological policies. At the same time, Christianity stresses that such a concern ought not to turn itself into a cosmo-centered anxiety. Sacred Scripture recalls that humanity does not find its télos (i.e., its end) in the search for a harmony with nature: the quest for this necessary harmony, whether on the personal or societal level, does not provide responses to the great enigmas of human existence, nor does it furnish the ultimate answer about the role of the human being in the universe. Humanity and nature, even in their autonomy, both depend upon God. From a theological point of view, it must be added that nature alone does not save; and this is, perhaps, the greatest difference between Christianity and the perspective of Buddhism or of those philosophies inspired by it. Nature can contribute to our salvation insofar as it leads us to God, that is, in the measure to which it shows us the existence of a Creator through aesthetic or rational appeal (natural revelation) and, therefore, only in the measure to which nature remains capable of referring to something beyond itself. In this sense, one could say that Christianity is not in much agreement with the idea of a "Mother Nature," but holds rather to that of a "Sister Nature," to whom we are bound because we see in her a common dependence upon the Creator (cf. John Paul II, General Audience, January 26, 2000). These insights have been present within the core of the Christian message from the beginning and found in St. Francis of Assisi one of their best witnesses. While the Canticle of the Creatures calls only the Earth "mother," and this in the precise context of the production of the fruits necessary for sustenance, all other natural realities are seen with a fraternal eye, which recognizes them as participants in a common filiation from God: "May you be praised, my Lord, with all your creatures, especially brother sun [...]. May you be praised, my Lord, for sister moon and the stars [...]. May you be praised, my Lord, for sister water [...]. May you be praised, my Lord, for brother fire..." (Laudato sii, mi' Signore, cum tucte le tue creature, specialmente messer lo frate sole [...]. Laudato sii, mi' Signore, per sora luna e le stelle [...]. Laudato sii, mi' Signore, per sora aqua [...]. Laudato sii, mi' Signore, per frate focu...).

The contemplation of nature and the search for the divine within it have a very important role in inter-religious dialogue. Christian theology is interested in developing a more mature and articulate reflection in this area, just as it is convinced that the true God can be known starting from the observation of nature. Judaism and Christianity, the Koran and Eastern religious traditions, philosophical thought and the natural religions, all encounter one another in the praise of God in creation (cf. John Paul II, General Audience, August 2, 2000). Christianity engages in this dialogue with what is specific to her: nature is the first stage of divine Revelation (cf. Fides et ratio, n. 19), creation is the work of the Trinity, the contemplation of the created world moves the believer to the praise of the One and Triune God without stopping at the idea of an anonymous and impersonal sacredness. It is the Holy Spirit, who is indeed the Spirit of the Father and of the Son, the third Person of the Holy Trinity, who orients such praise and contemplation, and who also guides the dialogue between Christian believers and believers of other religions starting from the common observation of nature. In the light of the Christian faith, creation particularly calls to mind the Holy Spirit in the dynamism that marks the relations between things, within the macrocosm and the microcosm, and is apparent especially wherever life is born and develops. Because of this experience, even in cultures far removed from Christianity, the presence of God is perceived in a way as the spirit which gives life to the world. Virgil's words are famous in this regard: spiritus intus alit, "the spirit nourishes from within" (Aeneid, VI, 726). The Christian knows well that this reference to the Spirit would be unacceptable if it meant a sort of anima mundi taken in a pantheistic sense. However, while excluding this error, it remains true that every form of life, activity and love refers in the last analysis to that Spirit who, as Genesis tells us, "was moving over the face of the waters" (Gn 1:2) at the dawn of creation. (John Paul II, General Audience, August 2, 2000, n. 5)

Read the original:

Pantheism | Inters.org

Virtual reality | computer science | Britannica.com

Virtual reality (VR), the use of computer modeling and simulation that enables a person to interact with an artificial three-dimensional (3-D) visual or other sensory environment. VR applications immerse the user in a computer-generated environment that simulates reality through the use of interactive devices, which send and receive information and are worn as goggles, headsets, gloves, or body suits. In a typical VR format, a user wearing a helmet with a stereoscopic screen views animated images of a simulated environment. The illusion of "being there" (telepresence) is effected by motion sensors that pick up the user's movements and adjust the view on the screen accordingly, usually in real time (the instant the user's movement takes place). Thus, a user can tour a simulated suite of rooms, experiencing changing viewpoints and perspectives that are convincingly related to his own head turnings and steps. Wearing data gloves equipped with force-feedback devices that provide the sensation of touch, the user can even pick up and manipulate objects that he sees in the virtual environment.
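
The head-tracking feedback loop just described can be sketched in a few lines of Python; the sensor and renderer below are hypothetical stand-ins (no real HMD driver or graphics API is assumed), meant only to show the sample-and-redraw cycle that produces telepresence:

import time

def read_head_pose():
    # Hypothetical stub for the headset's motion sensor; a real driver
    # would report the wearer's current orientation here.
    return 0.0, 0.0  # (yaw, pitch) in degrees

def render_view(yaw, pitch):
    # Hypothetical stub for the renderer: redraw the simulated scene
    # from the viewpoint implied by the head orientation.
    print(f"rendering view at yaw={yaw:.1f}, pitch={pitch:.1f}")

def run_vr_loop(frames=3, target_fps=60):
    # The core of telepresence: sample head motion and update the displayed
    # view quickly enough that the change feels instantaneous.
    frame_time = 1.0 / target_fps
    for _ in range(frames):
        yaw, pitch = read_head_pose()
        render_view(yaw, pitch)
        time.sleep(frame_time)

if __name__ == "__main__":
    run_vr_loop()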

The term virtual reality was coined in 1987 by Jaron Lanier, whose research and engineering contributed a number of products to the nascent VR industry. A common thread linking early VR research and technology development in the United States was the role of the federal government, particularly the Department of Defense, the National Science Foundation, and the National Aeronautics and Space Administration (NASA). Projects funded by these agencies and pursued at university-based research laboratories yielded an extensive pool of talented personnel in fields such as computer graphics, simulation, and networked environments and established links between academic, military, and commercial work. The history of this technological development, and the social context in which it took place, is the subject of this article.

Artists, performers, and entertainers have always been interested in techniques for creating imaginative worlds, setting narratives in fictional spaces, and deceiving the senses. Numerous precedents for the suspension of disbelief in an artificial world in artistic and entertainment media preceded virtual reality. Illusionary spaces created by paintings or views have been constructed for residences and public spaces since antiquity, culminating in the monumental panoramas of the 18th and 19th centuries. Panoramas blurred the visual boundaries between the two-dimensional images displaying the main scenes and the three-dimensional spaces from which these were viewed, creating an illusion of immersion in the events depicted. This image tradition stimulated the creation of a series of media, from futuristic theatre designs, stereopticons, and 3-D movies to IMAX movie theatres, over the course of the 20th century to achieve similar effects. For example, the Cinerama widescreen film format, originally called Vitarama when invented for the 1939 New York World's Fair by Fred Waller and Ralph Walker, originated in Waller's studies of vision and depth perception. Waller's work led him to focus on the importance of peripheral vision for immersion in an artificial environment, and his goal was to devise a projection technology that could duplicate the entire human field of vision. The Vitarama process used multiple cameras and projectors and an arc-shaped screen to create the illusion of immersion in the space perceived by a viewer. Though Vitarama was not a commercial hit until the mid-1950s (as Cinerama), the Army Air Corps successfully used the system during World War II for anti-aircraft training under the name Waller Flexible Gunnery Trainer, an example of the link between entertainment technology and military simulation that would later advance the development of virtual reality.

Sensory stimulation was a promising method for creating virtual environments before the use of computers. After the release of a promotional film called This Is Cinerama (1952), the cinematographer Morton Heilig became fascinated with Cinerama and 3-D movies. Like Waller, he studied human sensory signals and illusions, hoping to realize a "cinema of the future." By late 1960, Heilig had built an individual console with a variety of inputs (stereoscopic images, motion chair, audio, temperature changes, odours, and blown air) that he patented in 1962 as the Sensorama Simulator, designed to stimulate the senses of an individual to simulate an actual experience realistically. During the work on Sensorama, he also designed the Telesphere Mask, a head-mounted stereoscopic 3-D TV display that he patented in 1960. Although Heilig was unsuccessful in his efforts to market Sensorama, in the mid-1960s he extended the idea to a multiviewer theatre concept patented as the Experience Theater and a similar system called Thrillerama for the Walt Disney Company.

The seeds for virtual reality were planted in several computing fields during the 1950s and 60s, especially in 3-D interactive computer graphics and vehicle/flight simulation. Beginning in the late 1940s, Project Whirlwind, funded by the U.S. Navy, and its successor project, the SAGE (Semi-Automated Ground Environment) early-warning radar system, funded by the U.S. Air Force, first utilized cathode-ray tube (CRT) displays and input devices such as light pens (originally called light guns). By the time the SAGE system became operational in 1957, air force operators were routinely using these devices to display aircraft positions and manipulate related data.

During the 1950s, the popular cultural image of the computer was that of a calculating machine, an automated electronic brain capable of manipulating data at previously unimaginable speeds. The advent of more affordable second-generation (transistor) and third-generation (integrated circuit) computers emancipated the machines from this narrow view, and in doing so it shifted attention to ways in which computing could augment human potential rather than simply substituting for it in specialized domains conducive to number crunching. In 1960 Joseph Licklider, a professor at the Massachusetts Institute of Technology (MIT) specializing in psychoacoustics, posited a man-computer symbiosis and applied psychological principles to human-computer interactions and interfaces. He argued that a partnership between computers and the human brain would surpass the capabilities of either alone. As founding director of the new Information Processing Techniques Office (IPTO) of the Defense Advanced Research Projects Agency (DARPA), Licklider was able to fund and encourage projects that aligned with his vision of human-computer interaction while also serving priorities for military systems, such as data visualization and command-and-control systems.

Another pioneer was electrical engineer and computer scientist Ivan Sutherland, who began his work in computer graphics at MIT's Lincoln Laboratory (where Whirlwind and SAGE had been developed). In 1963 Sutherland completed Sketchpad, a system for drawing interactively on a CRT display with a light pen and control board. Sutherland paid careful attention to the structure of data representation, which made his system useful for the interactive manipulation of images. In 1964 he was put in charge of IPTO, and from 1968 to 1976 he led the computer graphics program at the University of Utah, one of DARPA's premier research centres. In 1965 Sutherland outlined the characteristics of what he called the "ultimate display" and speculated on how computer imagery could construct plausible and richly articulated virtual worlds. His notion of such a world began with visual representation and sensory input, but it did not end there; he also called for multiple modes of sensory input. DARPA sponsored work during the 1960s on output and input devices aligned with this vision, such as the Sketchpad III system by Timothy Johnson, which presented 3-D views of objects; Larry Roberts's Lincoln Wand, a system for drawing in three dimensions; and Douglas Engelbart's invention of a new input device, the computer mouse.

Within a few years, Sutherland contributed the technological artifact most often identified with virtual reality, the head-mounted 3-D computer display. In 1967 Bell Helicopter (now part of Textron Inc.) carried out tests in which a helicopter pilot wore a head-mounted display (HMD) that showed video from a servo-controlled infrared camera mounted beneath the helicopter. The camera moved with the pilot's head, both augmenting his night vision and providing a level of immersion sufficient for the pilot to equate his field of vision with the images from the camera. This kind of system would later be called augmented reality because it enhanced a human capacity (vision) in the real world. When Sutherland left DARPA for Harvard University in 1966, he began work on a tethered display for computer images (see photograph). This was an apparatus shaped to fit over the head, with goggles that displayed computer-generated graphical output. Because the display was too heavy to be borne comfortably, it was held in place by a suspension system. Two small CRT displays were mounted in the device, near the wearer's ears, and mirrors reflected the images to his eyes, creating a stereo 3-D visual environment that could be viewed comfortably at a short distance. The HMD also tracked where the wearer was looking so that correct images would be generated for his field of vision. The viewer's immersion in the displayed virtual space was intensified by the visual isolation of the HMD, yet other senses were not isolated to the same degree and the wearer could continue to walk around.

An important area of application for VR systems has always been training for real-life activities. The appeal of simulations is that they can provide training equal or nearly equal to practice with real systems, but at reduced cost and with greater safety. This is particularly the case for military training, and the first significant application of commercial simulators was pilot training during World War II. Flight simulators rely on visual and motion feedback to augment the sensation of flying while seated in a closed mechanical system on the ground. The Link Company, founded by former piano maker Edwin Link, began to construct the first prototype Link Trainers during the late 1920s, eventually settling on the "blue box" design acquired by the Army Air Corps in 1934. The first systems used motion feedback to increase familiarity with flight controls. Pilots trained by sitting in a simulated cockpit, which could be moved hydraulically in response to their actions (see photograph). Later versions added a cyclorama scene painted on a wall outside the simulator to provide limited visual feedback. Not until the Celestial Navigation Trainer, commissioned by the British government in World War II, were projected film strips used in Link Trainers; still, these systems could only project what had been filmed along a correct flight or landing path, not generate new imagery based on a trainee's actions. By the 1960s, flight trainers were using film and closed-circuit television to enhance the visual experience of flying. The images could be distorted to generate flight paths that diverted slightly from what had been filmed; sometimes multiple cameras were used to provide different perspectives, or movable cameras were mounted over scale models to depict airports for simulated landings.

Inspired by the controls in the Link flight trainer, Sutherland suggested that such displays include multiple sensory outputs, force-feedback joysticks, muscle sensors, and eye trackers; a user would be fully immersed in the displayed environment and "fly through concepts which never before had any visual representation." In 1968 he moved to the University of Utah, where he and his colleague David Evans founded Evans & Sutherland Computer Corporation. The new company initially focused on the development of graphics applications, such as scene generators for flight simulator systems. These systems could render scenes at roughly 20 frames per second in the early 1970s, about the minimum frame rate for effective flight training. General Electric Company constructed the first flight simulators with built-in, real-time computer image generation, first for the Apollo program in the 1960s, then for the U.S. Navy in 1972. By the mid-1970s, these systems were capable of generating simple 3-D models with a few hundred polygon faces; they utilized raster graphics (collections of dots) and could model solid objects with textures to enhance the sense of realism (see computer graphics). By the late 1970s, military flight simulators were also incorporating head-mounted displays, such as McDonnell Douglas Corporation's VITAL helmet, primarily because they required much less space than a projected display. A sophisticated head tracker in the HMD followed a pilot's eye movements to match computer-generated images (CGI) with his view and handling of the flight controls.

Advances in flight simulators, human-computer interfaces, and augmented reality systems pointed to the possibility of immersive, real-time control systems, not only for research or training but also for improved performance. Since the 1960s, electrical engineer Thomas Furness had been working on visual displays and instrumentation in cockpits for the U.S. Air Force. By the late 1970s, he had begun development of virtual interfaces for flight control, and in 1982 he demonstrated the Visually Coupled Airborne Systems Simulator, better known as the "Darth Vader helmet," for the armoured archvillain of the popular movie Star Wars. From 1986 to 1989, Furness directed the air force's Super Cockpit program. The essential idea of this project was that the capacity of human pilots to handle spatial information depended on these data being portrayed in a way that takes advantage of the human's natural perceptual mechanisms. Applying the HMD to this goal, Furness designed a system that projected information such as computer-generated 3-D maps, forward-looking infrared and radar imagery, and avionics data into an immersive, 3-D virtual space that the pilot could view and hear in real time. The helmet's tracking system, voice-actuated controls, and sensors enabled the pilot to control the aircraft with gestures, utterances, and eye movements, translating immersion in a data-filled virtual space into control modalities. The more natural perceptual interface also reduced the complexity and number of controls in the cockpit. The Super Cockpit thus realized Licklider's vision of man-machine symbiosis by creating a virtual environment in which pilots flew through data. Beginning in 1987, British Aerospace (now part of BAE Systems) also used the HMD as the basis for a similar training simulator, known as the Virtual Cockpit, that incorporated head, hand, and eye tracking, as well as speech recognition.

Sutherland and Furness brought the notion of simulator technology from real-world imagery to virtual worlds that represented abstract models and data. In these systems, visual verisimilitude was less important than immersion and feedback that engaged all the senses in a meaningful way. This approach had important implications for medical and scientific research. Project GROPE, started in 1967 at the University of North Carolina by Frederick Brooks, was particularly noteworthy for the advancements it made possible in the study of molecular biology. Brooks sought to enhance perception and comprehension of the interaction of a drug molecule with its receptor site on a protein by creating a window into the virtual world of molecular docking forces. He combined wire-frame imagery to represent molecules and physical forces with haptic (tactile) feedback mediated through special hand-grip devices to arrange the virtual molecules into a minimum binding energy configuration. Scientists using this system felt their way around the represented forces like flight trainees learning the instruments in a Link cockpit, grasping the physical situations depicted in the virtual world and hypothesizing new drugs based on their manipulations. During the 1990s, Brooks's laboratory extended the use of virtual reality to radiology and ultrasound imaging.

Virtual reality was extended to surgery through the technology of telepresence, the use of robotic devices controlled remotely through mediated sensory feedback to perform a task. The foundation for virtual surgery was the expansion during the 1970s and '80s of microsurgery and other less invasive forms of surgery. By the late 1980s, microcameras attached to endoscopic devices relayed images that could be shared among a group of surgeons looking at one or more monitors, often in diverse locations. In the early 1990s, a DARPA initiative funded research to develop telepresence workstations for surgical procedures. This was Sutherland's window into a virtual world, with the added dimension of a level of sensory feedback that could match a surgeon's fine motor control and hand-eye coordination. The first telesurgery equipment was developed at SRI International in 1993; the first robotic surgery was performed in 1998 at the Broussais Hospital in Paris.

As virtual worlds became more detailed and immersive, people began to spend time in these spaces for entertainment, aesthetic inspiration, and socializing. Research that conceived of virtual places as fantasy spaces, focusing on the activity of the subject rather than replication of some real environment, was particularly conducive to entertainment. Beginning in 1969, Myron Krueger of the University of Wisconsin created a series of projects on the nature of human creativity in virtual environments, which he later called "artificial reality." Much of Krueger's work, especially his VIDEOPLACE system, processed interactions between a participant's digitized image and computer-generated graphical objects. VIDEOPLACE could analyze and process the user's actions in the real world and translate them into interactions with the system's virtual objects in various preprogrammed ways. Different modes of interaction with names like "finger painting" and "digital drawing" suggest the aesthetic dimension of this system. VIDEOPLACE differed in several aspects from training and research simulations. In particular, the system reversed the emphasis from the user perceiving the computer's generated world to the computer perceiving the user's actions and converting these actions into compositions of objects and space within the virtual world. With the emphasis shifted to responsiveness and interaction, Krueger found that fidelity of representation became less important than the interactions between participants and the rapidity of response to images or other forms of sensory input.

The ability to manipulate virtual objects and not just see them is central to the presentation of compelling virtual worlds; hence the iconic significance of the data glove in the emergence of VR in commerce and popular culture. Data gloves relay a user's hand and finger movements to a VR system, which then translates the wearer's gestures into manipulations of virtual objects. The first data glove, developed in 1977 at the University of Illinois for a project funded by the National Endowment for the Arts, was called the Sayre Glove after one of the team members. In 1982 Thomas Zimmerman invented the first optical glove, and in 1983 Gary Grimes at Bell Laboratories constructed the Digital Data Entry Glove, the first glove with sufficient flexibility and tactile and inertial sensors to monitor hand position for a variety of applications, such as providing an alternative to keyboard input for data entry.

Zimmerman's glove would have the greatest impact. He had been thinking for years about constructing an interface device for musicians based on the common practice of playing "air guitar"; in particular, a glove capable of tracking hand and finger movements could be used to control instruments such as electronic synthesizers. He patented an optical flex-sensing device (which used light-conducting fibres) in 1982, one year after Grimes patented his glove-based computer interface device. By then, Zimmerman was working at the Atari Research Center in Sunnyvale, California, along with Scott Fisher, Brenda Laurel, and other VR researchers who would be active during the 1980s and beyond. Jaron Lanier, another researcher at Atari, shared Zimmerman's interest in electronic music. Beginning in 1983, they worked together on improving the design of the data glove, and in 1985 they left Atari to start up VPL Research; its first commercial product was the VPL DataGlove.

By 1985, Fisher had also left Atari to join NASA's Ames Research Center at Moffett Field, California, as founding director of the Virtual Environment Workstation (VIEW) project. The VIEW project put together a package of objectives that summarized previous work on artificial environments, ranging from creation of multisensory and immersive virtual environment workstations to telepresence and teleoperation applications. Influenced by a range of prior projects that included Sensorama, flight simulators, and arcade rides, and surprised by the expense of the air force's "Darth Vader" helmets, Fisher's group focused on building low-cost, personal simulation environments. While the objective of NASA was to develop telerobotics for automated space stations in future planetary exploration, the group also considered the workstation's use for entertainment, scientific, and educational purposes. The VIEW workstation, called the Virtual Visual Environment Display when completed in 1985, established a standard suite of VR technology that included a stereoscopic head-coupled display, head tracker, speech recognition, computer-generated imagery, data glove, and 3-D audio technology.

The VPL DataGlove was brought to market in 1987, and in October of that year it appeared on the cover of Scientific American (see photograph). VPL also spawned a full-body, motion-tracking system called the DataSuit, a head-mounted display called the EyePhone, and a shared VR system for two people called RB2 (Reality Built for Two). VPL declared June 7, 1989, "Virtual Reality Day." On that day, both VPL and Autodesk publicly demonstrated the first commercial VR systems. The Autodesk VR CAD (computer-aided design) system was based on VPL's RB2 technology but was scaled down for operation on personal computers. The marketing splash introduced Lanier's new term, "virtual reality," as a realization of cyberspace, a concept introduced in science fiction writer William Gibson's Neuromancer in 1984. Lanier, the dreadlocked chief executive officer of VPL, became the public celebrity of the new VR industry, while announcements by Autodesk and VPL let loose a torrent of enthusiasm, speculation, and marketing hype. Soon it seemed that VR was everywhere, from the Mattel/Nintendo PowerGlove (1989) to the HMD in the movie The Lawnmower Man (1992), the Nintendo VirtualBoy game system (1995), and the television series VR5 (1995).

Numerous VR companies were founded in the early 1990s, most of them in Silicon Valley, but by mid-decade most of the energy unleashed by the VPL and Autodesk marketing campaigns had dissipated. The VR configuration that took shape over a span of projects leading from Sutherland to Lanier (HMD, data gloves, multimodal sensory input, and so forth) failed to have a broad appeal as quickly as the enthusiasts had predicted. Instead, the most visible and successfully marketed products were location-based entertainment systems rather than personal VR systems. These VR arcades and simulators, designed by teams from the game, movie, simulation, and theme park industries, combined the attributes of video games, amusement park rides, and highly immersive storytelling. Perhaps the most important of the early projects was Disneyland's Star Tours, an immersive flight simulator ride based on the Star Wars movie series and designed in collaboration with producer George Lucas's Industrial Light & Magic. Disney had long built themed rides utilizing advanced technology, such as animatronic characters, notably in Pirates of the Caribbean, an attraction originally installed at Disneyland in 1967. Star Tours utilized simulated motion and special-effects technology, mixing techniques learned from Hollywood films and military flight simulators with strong story lines and architectural elements that shaped the viewers' experience from the moment they entered the waiting line for the attraction. After the opening of Star Tours in 1987, Walt Disney Imagineering embarked on a series of projects to apply interactive technology and immersive environments to ride systems, including 3-D motion-picture photography used in Honey, I Shrunk the Audience (1995), the DisneyQuest indoor interactive theme park (1998), and the multiplayer-gaming virtual world, Toontown Online (2001).

In 1990, Virtual World Entertainment opened the first BattleTech emporium in Chicago. Modeled loosely on the U.S. military's SIMNET system of networked training simulators, BattleTech centres put players in individual pods, essentially cockpits that served as immersive, interactive consoles for both narrative and competitive game experiences. All the vehicles represented in the game were controlled by other players, each in his own pod and linked to a high-speed network set up for a simultaneous multiplayer experience. The players' immersion in the virtual world of the competition resulted from a combination of elements, including a carefully constructed story line, the physical architecture of the arcade space and pod, and the networked virtual environment. During the 1990s, BattleTech centres were constructed in other cities around the world, and the BattleTech franchise also expanded to home electronic games, books, toys, and television.

While the Disney and Virtual World Entertainment projects were the best-known instances of location-based VR entertainments, other important projects included Iwerks Entertainment's Turbo Tour and Turboride 3-D motion simulator theatres, first installed in San Francisco in 1992; motion-picture producer Steven Spielberg's Gameworks arcades, the first of which opened in 1997 as a joint project of Universal Studios, Sega Corporation, and Dreamworks SKG; many individual VR arcade rides, beginning with Sega Arcade's R360 gyroscope flight simulator, released in 1991; and, finally, Visions of Reality's VR arcades, the spectacular failure of which contributed to the bursting of the investment bubble for VR ventures in the mid-1990s.

Continued here:

Virtual reality | computer science | Britannica.com

Bitcoin and Ethereum: A Look At The Week Ahead

As most crypto exchanges are having a hard time retaining business, BitMEX is posting record profits that are yet to be declared but will be highlighted below. Looking at the recent analysis of Coinbase, we find that the popular American exchange is having a hard time maintaining a customer base during a bear market. The exchange has experienced a plunge in volume of 83% since the highs of December and January.

According to recent reports, Coinbase's volume in July was around $3.9 Billion in trades as compared to $21 Billion back in January during the crypto bull run.

The only exchange that has partially survived the bear market, according to the above report, is Binance. The exchange's numbers for July stood at $11.3 Billion, with those of OKEx reaching levels of $2.9 Billion. We can blame the general decline in trade volume on regulatory uncertainty, constant FUD, ETF blues, as well as a natural market decline after an impressive rally.

BitMEX was launched in 2014 but did not become popular in the crypto-verse until around June this year, when traders realized they could make a killing shorting Bitcoin (BTC). The exchange offers perpetual contracts as well as futures contracts. The perpetual contracts do not have an expiry date and carry a funding rate that is exchanged every 8 hours. Futures contracts on the platform are settled at the contract's settlement price.
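
As a rough illustration of how such a funding rate works, the Python sketch below computes the payment exchanged on a perpetual position; the position size and rate are made-up example numbers, not BitMEX figures, and the sign convention (longs pay shorts when the rate is positive) is the usual one for perpetual swaps:

def funding_payment(position_value_usd, funding_rate, periods_per_day=3):
    # Funding on a perpetual swap is exchanged every 8 hours, i.e. 3 times a day.
    # A positive rate means longs pay shorts; a negative rate means shorts pay longs.
    per_period = position_value_usd * funding_rate
    return per_period, per_period * periods_per_day

# Hypothetical example: a $10,000 long position at a 0.01% funding rate.
per_period, per_day = funding_payment(10_000, 0.0001)
print(f"paid per 8-hour period: ${per_period:.2f}, per day: ${per_day:.2f}")
# -> paid per 8-hour period: $1.00, per day: $3.00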

This, in turn, means you can short the digital assets of Bitcoin (BTC), Ethereum (ETH), XRP, Bitcoin Cash (BCH), Cardano (ADA), EOS, Litecoin (LTC) and Tron (TRX) on the platform. This then opens the floodgates for potential market manipulation from someone or an organization that knows what they are doing.

The terms and conditions for using BitMEX indicate that trading is prohibited in a few countries. One country stands out: the United States. Evidence of this can be seen in the terms and conditions for use, which state that:

trading access to or holding positions on BitMEX is prohibited for any person that is located in or a resident of the United States of America, Québec (Canada), Cuba, Crimea and Sevastopol, Iran, Syria, North Korea, Sudan, or any other jurisdiction where the services offered by BitMEX are restricted.

Could it be that they are avoiding SEC scrutiny and the long arm of American law? Perhaps the financial instruments on BitMEX would not hold water with the SEC.

The exchange has most recently rented the most expensive offices in Hong Kong, a move that raises more questions as to how much profit the exchange is making by offering the unique trading instruments on its platform. The exchange will occupy the 45th floor of the Cheung Kong Center. Its average leasing expenses will add up to around $500k per month.

Adding to the question of how much the exchange is making in profits, Ben Delo, a co-founder of BitMEX, was recently named Britain's youngest Bitcoin billionaire, at just 34.

In the case of Binance, the exchange has stated that it is eyeing $1 Billion in profits for the year 2018. Binance does not offer the extra leverage instruments available on BitMEX, but it has attracted a majority share of global traders. Checking coinmarketcap.com, we find that the daily trade volume of BitMEX is three times that of Binance.

Doing the math from the daily trade volume of both exchanges, BitMEX could be targeting $3 Billion in profits by the end of this year, or even more.
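
Written out, that back-of-the-envelope estimate looks like the snippet below; the figures are the ones quoted above, and the assumption that profit scales linearly with trade volume is this article's simplification, not a verified model:

# Figures quoted above (approximate).
binance_profit_target_2018 = 1_000_000_000   # Binance's stated target: $1 Billion
bitmex_volume_multiple = 3                   # BitMEX daily volume is roughly 3x Binance's

# Simplifying assumption: profit scales linearly with trade volume.
bitmex_profit_estimate = binance_profit_target_2018 * bitmex_volume_multiple
print(f"Implied BitMEX profit estimate: ${bitmex_profit_estimate / 1e9:.0f} Billion")
# -> Implied BitMEX profit estimate: $3 Billion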

Therefore, the question of how they are getting the funds to rent the best office space in Hong Kong can now be answered. We can also understand how Ben Delo is a Bitcoin billionaire.

However, there is still the unanswered question as to why trading in the United States is prohibited and as to whether the exchange has protection measures for their users against market manipulation.

For the latest cryptocurrency news, join our Telegram!

Disclaimer: This article should not be taken as, and is not intended to provide, investment advice. Global Coin Report and/or its affiliates, employees, writers, and subcontractors are cryptocurrency investors and from time to time may or may not have holdings in some of the coins or tokens they cover. Please conduct your own thorough research before investing in any cryptocurrency and read our full disclaimer.

Image courtesy of Emily Morter via Unsplash

Visit link:

Bitcoin and Ethereum: A Look At The Week Ahead

Cloud Automation Service – azure.microsoft.com

Automate, configure, and install updates across hybrid environments

Automate all of those frequent, time-consuming, and error-prone cloud management tasks. Azure Automation service helps you focus on work that adds business value. By reducing errors and boosting efficiency, it also helps to lower your operational costs.

Monitor update compliance across Azure, on-premises, and other cloud platforms for Windows and Linux. Schedule deployments to orchestrate the installation of updates within a defined maintenance window.

Author and manage PowerShell configurations, import configuration scripts, and generate node configurations, all in the cloud. Use Azure Configuration Management to monitor and automatically update machine configuration across physical and virtual machines, Windows or Linux, in the cloud or on-premises.

Get an inventory of operating system resources, including installed applications and other configuration items. Use rich reporting and search to quickly find detailed information on everything that's configured within the operating system. Track changes across services, daemons, software, registry, and files to promptly investigate issues, and turn on diagnostics and alerting to monitor for unwanted changes.

Write runbooks graphically, in PowerShell, or in Python to integrate Azure services and other public systems required for deploying, configuring, and managing your end-to-end processes. Orchestrate across on-premises environments using a hybrid runbook worker to deliver on-demand services.
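
As an illustration only, here is a minimal sketch of what a Python runbook might look like: it deallocates virtual machines that carry a hypothetical "auto-shutdown" tag. It assumes the azure-identity and azure-mgmt-compute packages, an identity (for example, the Automation account's managed identity) with permission to manage the VMs, and a recent SDK version in which long-running operations are exposed as begin_-prefixed methods; the tag name and environment variable are illustrative, not part of any real environment:

import os
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
credential = DefaultAzureCredential()
compute = ComputeManagementClient(credential, subscription_id)

for vm in compute.virtual_machines.list_all():
    tags = vm.tags or {}
    if tags.get("auto-shutdown") == "true":
        # Resource IDs look like /subscriptions/<sub>/resourceGroups/<rg>/...,
        # so the resource group name is the fifth path segment.
        resource_group = vm.id.split("/")[4]
        print(f"Deallocating {vm.name} in {resource_group}")
        compute.virtual_machines.begin_deallocate(resource_group, vm.name).result()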

Trigger automation from ITSM, DevOps, and monitoring systems to fulfill requests and ensure continuous delivery and management.
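
In practice such a trigger is often just an HTTP POST to a runbook's webhook URL; a minimal sketch follows, with a placeholder URL and a hypothetical payload (the real URL is generated when the webhook is created and includes a secret token):

import requests

webhook_url = "https://<region>.webhook.azure-automation.net/webhooks?token=<token>"  # placeholder
payload = {"alert": "disk-full", "vm": "example-vm-01"}  # hypothetical runbook input

response = requests.post(webhook_url, json=payload, timeout=10)
response.raise_for_status()
print("Runbook job queued:", response.text)  # the service typically returns the queued job ID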

Rely on serverless runbooks that automatically scale as your operational tasks increase. Deliver services more quickly and consistently by focusing on adding business value rather than maintaining the management system.

Related products and services

Collect, search, and visualize machine data from on-premises and cloud

Simple and reliable server backup to the cloud

Start your free account with Automation

See original here:

Cloud Automation Service - azure.microsoft.com

Cryptocurrency: Virtual money, real power, and the fight for …

Driving into the small town of Wenatchee, Washington, about three hours east of Seattle, a sign welcomes you to the "Apple Capital of the World." But not far from the abundant orchards, a very different industry is taking root. As unlikely as it may seem, this rural community has become a hub for cryptocurrency mining.

"Cryptocurrency justified the expense to build something that no one would otherwise build," said entrepreneur David Carlson, as rows upon rows of computer servers whirred away at the facility he set up here. "These things can run 24/7 making cryptocurrency."

He has big plans for his business, even if some Wenatchee residents don't like it.

"We want to grow ten times larger than we are now, and we can do it here, or we can do it somewhere, but we're going to do it," Carlson said.

Bitcoin is the best known, but it's just one of many digital forms of currency. These cryptocurrencies are decentralized; rather than being processed through banks, transactions are verified and recorded by individual users. Encrypted technology called blockchain keeps the transactions secure.

Bitcoin hit a high of over $19,700 in December 2017, though it's worth much less, about $6,300, today. Despite the volatility, rising values have fueled a whole new industry and legions of enthusiasts. At a recent cryptocurrency conference in Atlantic City, thousands gathered to explore new ideas and opportunities in the field.

"So I live off of bitcoin," said Kenn Bosak, who hosts "Pure Blockchain Wealth" on YouTube. "It pays my rent. I book my flights with cheapair.com. They accept bitcoin, Dash, all kinds of cryptocurrencies. I book my rooms with BitPay with my Visa card. My Lyft drivers accepts BitPay, that's bitcoin. So I'm all in. I use bitcoin in every aspect of my life."

Unlike dollars or other conventional currencies, cryptocurrency like bitcoin isn't issued by a government. It's created through a process called mining, which is leading to a virtual gold rush around the world.

Every time someone uses cryptocurrency to pay for something, it sets off a flurry of invisible activity. Computer servers, which can be located anywhere in the world, work to verify and process the transaction, racing to authenticate the exchange of digital money through complex calculations.

For doing this work, the machines (and their owners) are rewarded with new cryptocurrency. With a sufficient number of powerful computers, it can be a lucrative business.
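
A toy version of that computational race can make the idea concrete: proof-of-work mining amounts to hunting for a nonce whose hash of the block data begins with enough zeros. The Python sketch below is a deliberately simplified illustration, not Bitcoin's actual block format or difficulty:

import hashlib

def mine(block_data, difficulty=4):
    # Search for a nonce so that SHA-256(block_data + nonce) starts with
    # `difficulty` hex zeros. Real networks use a vastly harder target,
    # which is why mining consumes so much computing power and electricity.
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1

nonce, digest = mine("example transactions")
print(f"found nonce {nonce}: {digest}")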

That's what David Carlson's company, Giga Watt, is busy doing at his facility in Wenatchee. He started with just a few small machines, but with the help of investors, he's scaled up significantly. Now his rooms full of computer servers work feverishly to mine cryptocurrency around the clock.

David Carlson shows CBS News' Errol Barnett his cryptocurrency mining operation in Wenatchee, Washington.

CBS News

Each of the small machines makes roughly $1,500 worth of bitcoin every year, though the amount of profit fluctuates every day. As Carlson showed CBS News correspondent Errol Barnett around, the site hummed with the sound of giant industrial cooling fans.

"Every one of these things is like a thousand-watt hair dryer. So there will be a thousand of those hairdryers in this spot. So that's quite a lot of heat. Don't try it at home," Carlson said.

"This entire wall is the future, according to you," Barnett said.

"Yeah. The future of money right here," Carlson replied.

He plans to have 22 of his pods completed by the end of the year, and all that computer power sucks up a huge amount of electricity.

"Our pods use one and a half megawatts, which is typically associated with, like, 600 homes," he said.

Powering his operation would cost a fortune most places, but Wenatchee has a competitive advantage: the Columbia River. Dams on the river generate cheap hydroelectric power, which has drawn crypto mining enthusiasts to this corner of the country.

Steve Wright, the general manager of the Chelan County public utility, says it has long been an economic engine for the region. "What we have seen more recently are industries like cryptocurrency that have come to the region for the same reasons that aluminum came here. Low-cost, reliable electricity," he said.

Dams on the Columbia River provide cheap power to the Pacific Northwest.

CBS News

But even here, there are limits.

"We have requests for service that would double the usage here in the county, and we're trying to figure out, you know, how are we going to deal with that, and what the implications would be for the people who live here," Wright said.

Because access to cheap power is key, crypto miners are racing to set up shop anywhere in the world they can find low rates. Cold climates are also preferred, to help reduce cooling costs.

But this tech boom is not without problems. Among the issues: the droning noise of all that equipment. The hum reverberates far beyond the walls. And some of the operations have sprung up in a decidedly makeshift fashion.

"Would you want to live next to one of these?" said Andrew Wendell, customer service director for the utility. "Not just the aesthetics, but also the noise. There's a lot of noise. They really do belong in an industrial setting."

He continues, "And it gives us a bit of a concern, because, quite literally, you could have a tractor trailer come in and load this thing up and move it out, literally overnight. And so it just begs the question, from a utility who is providing and building the infrastructure to support these, how long is our investment? When we build those, we are building for 40, 50, 60 years. This doesn't look like that long term."

Not only is he worried about miners abandoning Wenatchee and leaving behind expensive new power connections; there are also safety concerns. Some mining setups push the infrastructure to the breaking point.

Industrial fans are needed to cool the rows of supercomputers that mine for cryptocurrency.

CBS News

Wendell shows us an example. "What we have here is a standard residential home, but this shed, about 10 by 10 here, off to the side with the fence, that's full of cryptocurrency mining operations."

He holds up the remnants of a frayed and melted underground electrical cable.

"This plastic insulation breaks down because there's so much heat?" Barnett asks.

"There's so much heat. It can't dissipate the heat. So the insulation breaks down, and then the cables go phase to phase. And when they go phase to phase, they combust. They arc and they can start a fire. And that's what happened [here], is a fire started," Wendell said. "The bottom line is, is that when you mix the cryptocurrency mining with traditional residential load, if you don't have things built and designed appropriately, you're going to have some problems."

He adds, "In this part of the country, a wildfire can spread and burn literally hundreds of homes. So we take that very seriously."

While some in Wenatchee are excited about the economic potential of cryptocurrency mining, many others are concerned about its massive power consumption and other risks.

"Nobody wants a fire, you know, like their apartment complex burning down, because someone is mining bitcoin," one resident said.

Some admit they don't fully understand it. "It's just going to drain our power, and that's really all I know," a local woman told us.

Facing overwhelming demand for power from cryptocurrency miners and increasing concern from the community, the utility placed a moratorium on new mining requests until they could agree on a solution. Local miners were not happy.

"They went overboard with their moratorium. It was kinda crazy for 'em to say, 'No, you can't do that. We're shutting everything down in the entire county,'" said Matt McColm. He was planning to set up a mining operation in his insurance office to generate some extra money for his 12-year-old son's college fund. He'd already ordered the equipment on Amazon. But now he'll have to move it all to a site a few hours away in Oregon instead.

"What you've got is several large players that kind of salted the earth for everyone else. They're literally consuming large sections of our town and edging out the small ones," McColm said. "It's kinda rough, because I'd rather develop here locally... and put the money here in Wenatchee."

Earlier this month, the utility held a public hearing for input on the moratorium and the future of cryptocurrency mining in Wenatchee.

Some locals stood up to voice complaints about what the industry is doing to their town. "I read a lot about what bitcoin operators want, and what bitcoin is doing for them. I'm not hearing that it's doing anything much for us. This is a take, take, take, not a give," one woman said.

Others made the case to encourage business development, like the man who said, "I'd ask you guys to consider the very small operations that are existing right here in town. A large rate increase would drastically affect our business, putting some of us out."

Much of the concern about cryptocurrency mining is its volatility. With prices soaring one day and crashing the next, many worry the entire market could collapse. But advocates say they are missing the big picture: a growing industry that's about more than just mining.

Malachi Salcido, another large-scale miner, says the rise of supercomputing, using specialized hardware and cheap power, can also enable things like artificial intelligence.

"And so it helps you to understand why in the world would you build a 30, 40-year asset for something that's only nine years old? I didn't. I built it for a new technology that will have many current and future iterations that we don't yet fully understand," he said.

He believes his investment will pay off, even if cryptocurrency fizzles.

"The demand internationally for power and networking for computing space is rising so rapidly that I'm very comfortable there will be demand for our location, even if crypto doesn't become the market it could."

Salcido, a Wenatchee native, wants to see his hometown benefit from the new industry. But for now, he must expand elsewhere.

"Our strategic goal is 500 megawatts within the next 5 years, and 5 to 10 percent of the global network. We are currently negotiating developments in northern Idaho, northern Oregon, and northern central California. Our choice is whether or not they happen here," he said.

A moratorium may stem the flood of miners arriving in Wenatchee, but it won't stop them from seeking out cheap power wherever they can find it.

In June, a cryptocurrency mining company called Coinmint took over a massive former Alcoa aluminum plant near the small town of Massena, in upstate New York. Coinmint is investing $700 million to turn it into a bitcoin mining behemoth. Once complete, it could be the largest in the world.

Aerial view of a former Alcoa aluminum plant near Massena, in upstate New York, which is being turned into a massive bitcoin mining facility.

CBS News

Back in Wenatchee, the only question for Dave Carlson is not whether to grow, but where.

"Cryptocurrency justified the expense to build something that no one would otherwise build," he said. "Supercomputing, A.I. can be the new export."

"So you're confident that you will grow, you're just concerned that it will be elsewhere because Wenatchee blinked at a critical moment?" Barnett asked.

Carlson agreed. "That's exactly right."

Originally posted here:

Cryptocurrency: Virtual money, real power, and the fight for ...

Cryptocurrency "miners," utilities look for ways to get along …

Electric producers aren't sure whether cryptocurrency "miners" are friend or foe.

The miners, who use powerful computers to generate bitcoin, ethereum and other cryptocurrencies by solving complex computational problems, are power hogs that can bring new sources of revenue for energy producers. But that revenue generally comes at a price: millions of dollars of investment in new power stations and lines.

For their part, utilities hesitate to commit those funds for fear the bottom will fall out of the cryptocurrency market, leaving them stuck with the bill for facilities no longer in use.

"Getting power companies to take cryptocurrency mining seriously has been a struggle," said JohnPaul Baric, chief executive of the MiningStore, which makes cryptocurrency mining technology. "Mining is still in its early days, and power companies say they aren't sure of its longevity."

It's not as if the power companies don't want the additional revenue. But in the case of Grant County in Washington State, more than 100 cryptocurrency miners are requesting power. Combined, they are asking for 1,700 megawatts of new power -- that's the equivalent of two nuclear power plants, or 1.5 times the power needs of the city of Seattle. Grant PUD's average electric load is about 600 megawatts.

"We, like any other utility, aren't set up to handle that kind of new demand," said Kevin Nordt, general manager for the Grant County Public Utility District, known as Grant PUD. "Trying to get that kind of infrastructure built would take many, many years and require millions if not billions of dollars in investment. There's a lot of risk involved because it's an nascent industry with a lot of unknown variables."

Cryptocurrency miners use large numbers of computer servers -- which consume massive amounts of electricity -- to solve the complex mathematical puzzles needed to create virtual currencies like bitcoin and ethereum. Bitcoin miners alone use more power than the entire country of Ireland. There are more than 2,000 different types of cryptocurrencies.
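For readers wondering what those "complex mathematical puzzles" look like in practice, here is a minimal proof-of-work sketch in Python. It is an illustration only, not Bitcoin's actual implementation: the real network uses a 256-bit numeric difficulty target, standardized block headers, and specialized ASIC hardware, none of which are modeled here.

```python
import hashlib

def mine(block_header: str, difficulty: int) -> tuple[int, str]:
    """Try nonces until the double-SHA-256 hash of header+nonce
    starts with `difficulty` hex zeros (a simplified target)."""
    target = "0" * difficulty
    nonce = 0
    while True:
        payload = f"{block_header}{nonce}".encode()
        digest = hashlib.sha256(hashlib.sha256(payload).digest()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

# Toy example: a low difficulty so the search finishes quickly.
nonce, digest = mine("toy-block-header", difficulty=4)
print(nonce, digest)
```

The electricity goes into running this trial-and-error loop billions of times per second; each additional hex zero in the target multiplies the expected work by sixteen.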

Grant PUD's popularity with cryptocurrency miners stems in part from its low price for electricity generated from power plants on the Columbia River, Baric said. Electrical expenses are often the highest costs for cryptocurrency miners.

"We are the most power-intensive business ever we use crazy amounts of power," Baric said. "Electricity costs matter."

The average cost of electricity in the U.S. is about 12 cents per kilowatt-hour. But Grant PUD sells its electricity for only 1 to 2 cents a kilowatt-hour, Baric said. Grant PUD is a nonprofit, community-owned hydropower utility based in Moses Lake, Washington, about three hours east of Seattle. Its power generation facilities cover 2,800 square miles.
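A quick back-of-the-envelope calculation shows why that gap matters. The sketch below assumes a hypothetical mining rig drawing a constant 3 kilowatts around the clock -- an illustrative figure, not one from the article -- and applies the rates quoted above.

```python
# Annual electricity cost for one hypothetical 3 kW rig running 24/7.
rig_kw = 3.0                              # assumed constant draw
kwh_per_year = rig_kw * 24 * 365          # 26,280 kWh

for label, rate in [("U.S. average", 0.12),
                    ("Grant PUD, low end", 0.01),
                    ("Grant PUD, high end", 0.02)]:
    print(f"{label}: ${kwh_per_year * rate:,.0f} per year")
# U.S. average:        ~$3,154
# Grant PUD, low end:  ~$263
# Grant PUD, high end: ~$526
```

Across thousands of rigs, a difference of a few cents per kilowatt-hour compounds into millions of dollars a year.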

Because of the intense demand, Grant PUD temporarily stopped accepting cryptocurrency mining customers so that it could develop new policies around the industry. The PUD decided to create a new customer class called the "evolving industry" class. The class wasn't meant only for cryptocurrency, but for any other radical, disruptive technology that may take shape in the future, Nordt said.

The evolving industry class would price in the risk associated with creating new infrastructure for an industry without a long track record, he said.

"We needed to look at this differently," he said. "We don't know how regulatory and other issues are going to break for mining."

Miners, in the meantime, are also suggesting ways they can be of benefit to utilities. One example is for miners to use a utility's "peak load" capabilities that often sit idle. Most utilities build their facilities so they have capacity even for those very hot days in July and August, when everyone is running their air conditioners.

The miners could use that unused peak-load capacity throughout the year and stop mining when the utility needed the extra electricity on those hot summer days. Baric sells products that would automatically shut down the mining operations when the peak capacity was needed.

"The actual physical mining units would just sit there idle and the staff would have the day off," Baric said. "The miners would know for those four or five hours on that hot July day, they will be disconnected."

Miners are also happy to take extra, unused electricity off the hands of producers, Baric said. Utilities inevitably generate more energy than they sell and generally allow that power to be burned off. Miners are instead willing to buy that excess energy, which is a benefit to producers, he said.

"Years ago people wondered if the internet would stay around, but suggesting that today would seems silly," Baric said. "That's the way it is with cryptocurrency; it's brand knew and people don't yet understand it yet. But it's here to stay."

View post:

Cryptocurrency "miners," utilities look for ways to get along ...

Which Type of Bankruptcy Should You File? Chapter 7 vs. 13 …

Once you've decided that bankruptcy is the right solution for your financial situation, you will need to decide which type of bankruptcy is most beneficial.

If you are an individual or a small business owner, then your most obvious choices are Chapter 7 "liquidation" bankruptcy or Chapter 13 "wage earner's" or "reorganization" bankruptcy.

We'll go over the pros and cons of each, the eligibility rules, and give you some information to help decide which would be best for you given your financial situation.

There are a select few other types of bankruptcies that are available under certain circumstances, and we will touch on those as well.

To get started, here's a look at the highlights of both Chapter 7 and Chapter 13 bankruptcy:

See: How to File for Chapter 7 Bankruptcy

See: How to File for Chapter 13 Bankruptcy

Chapter 11 and Chapter 12 are similar to the Chapter 13 repayment bankruptcy, but designed for specific debtors.

Chapter 11 bankruptcy is another form of reorganization bankruptcy that is most often used by large businesses and corporations. Individuals can use Chapter 11 too, but it rarely makes sense for them to do so.

Chapter 12 bankruptcy is designed for farmers and fishermen. Chapter 12 repayment plans can be more flexible than those in Chapter 13. In addition, Chapter 12 has higher debt limits and more options for lien stripping and cramdowns on unsecured portions of secured loans.

In many cases, the type of bankruptcy filed will be contingent on two things: Your income and your assets. Your income is important because it may preclude you from filing a simple Chapter 7 case, and your assets are important because if you have nonexempt property, you might lose it in Chapter 7, but can protect it in Chapter 13.

Here are a few scenarios that explore which bankruptcy strategy would be best:

Loss of income combined with a large amount of debt is the number one reason people file for bankruptcy. Compounding factors like divorce, medical emergencies, or the death of a family member are also common. Assume that in this scenario the debtor has no income other than unemployment benefits, does not own a home, and has one car with a loan against it.

In cases like this, a Chapter 7 bankruptcy is the fastest, easiest, and most effective means of getting rid of debt. As a matter of fact, this is the most common bankruptcy case, often called a "no asset" bankruptcy.

Homeowners who are experiencing a loss of income also have options under bankruptcy law. For those homeowners whose property value has fallen below the value of the loan against it, Chapter 7 is probably still the best option. Since the value of the home is less than the value of the lien against it, the homeowner has no equity in the bankruptcy estate, so the house is protected from liquidation. A Chapter 7 bankruptcy can quickly relieve them of their obligations to repay unsecured debts, making monthly bills much more manageable.

If a homeowner has a significant amount of equity in property, then Chapter 7 may or may not be the best option. If the homeowner's state exempts a generous amount of home equity, then the home may be safe. But if the state homestead exemption doesn't cover the equity, the homeowner may lose the home in a Chapter 7 bankruptcy. The homeowner can keep the home in Chapter 13 bankruptcy if he or she keeps current on the mortgage. Keep in mind though, there must be enough income available from the petitioning household to fund a repayment plan.
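The arithmetic behind that decision is simple enough to sketch. The numbers below are purely hypothetical, and homestead exemption amounts differ dramatically from state to state, but they show how equity and the exemption interact.

```python
# Hypothetical figures; homestead exemption amounts vary widely by state.
home_value          = 250_000
mortgage_balance    = 230_000
homestead_exemption = 25_000   # assumed state exemption, for illustration

equity = max(home_value - mortgage_balance, 0)            # $20,000
nonexempt_equity = max(equity - homestead_exemption, 0)   # $0

if nonexempt_equity == 0:
    print("Equity fully exempt: the home is generally safe in Chapter 7.")
else:
    print(f"${nonexempt_equity:,} of nonexempt equity could be at risk in Chapter 7.")
```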

For homeowners who have fallen behind on mortgage payments, Chapter 13 offers a way to catch up or "cure" past due mortgage payments while simultaneously eliminating some portion of dischargeable debt. This means they can save the home from foreclosure and get rid of a lot of credit card debt, medical debt, and possibly even second and third mortgages or HELOCs. Chapter 7 bankruptcy does not provide a way for homeowners to make up mortgage arrears.

Very wealthy debtors often need to file under Chapter 11 due to the income limits of Chapter 7 and the debt limits of Chapter 13.

Go here to see the original:

Which Type of Bankruptcy Should You File? Chapter 7 vs. 13 ...

The Cost Of Bankruptcy – Debt.org

How Much Does Bankruptcy Cost?

How much does it cost to file bankruptcy? Sadly, there is no easy answer. Though the expense of filing a petition to the court is fixed, what you'll pay an attorney and how you'll make the payments can vary widely, depending on who you hire, where you live and the complexity of your case.

Attorneys' fees differ from case to case, judicial district to judicial district and state to state. Where you live can make a substantial difference in what you pay, but an even bigger factor is the complexity of your case. Like everyone, lawyers want to be paid for their time, and the more time your case takes to resolve, the more it will cost.

An American Bankruptcy Institute study using data from 2005 to 2009 revealed that the national average cost was $1,072 for Chapter 7 cases with assets. The cost depends on where the case is filed. Chapter 7 fees ranged from a low of $781 to a high of $1,530 in Arizona.

The ABI study showed an average of $2,564 for Chapter 13 cases, with a range from a low of $1,560 in North Dakota to a high of $4,950 in Maine.

Myriad circumstances can add to the cost of a simple bankruptcy filing. Attorneys will charge more as the complexities grow, particularly if they require court appearances.

Attorneys almost always demand payment before service in Chapter 7 cases. They will often offer payment plans, but they won't proceed with your case until your fees are paid. That leaves you vulnerable to creditors trying to collect your debts while you try to raise money for the lawyer.

Those are just averages, and fees have likely increased since the survey was conducted. In Chapter 13 cases, judges will review attorneys' fees unless they fall below a so-called "no-look" amount, which is a baseline considered reasonable in the jurisdiction where the case is filed. But in general, it's a good idea to call or meet with several attorneys before choosing one to represent you. Bankruptcy-attorney fees are public record and can be accessed through the searchable federal PACER website. Though PACER charges a small fee for downloaded information, it can be money well spent.

The cost of living where you file will also impact what you pay. Lawyers in large metropolitan areas, like everyone else, have bigger expenses than those in more rural settings. The higher cost of living tends to raise all professional costs, and bankruptcy representation is no exception. Also, not all lawyers are created equal. Those with many successful years in the bankruptcy field will almost certainly demand larger fees than those with little experience.

It is a good idea to consider the complexity of your case when picking a lawyer. If you have few assets and not many debts, your simple case might not demand the sort of representation that someone with diverse sources of income, a fat folder of creditors and perhaps a suspicion of fraud might need. In other words, not all bankruptcies are the same. Remember that when mulling over the sort of lawyer you might need.

Those with complicated cases might benefit from an experienced bankruptcy lawyer. If creditors challenge your financial statements and allege fraud, having an attorney able to navigate a complex case would benefit you. The same would be true for cases springing from medical debt, a fairly common culprit in bankruptcy filings.

One small fee that you mustn't forget covers credit counseling. Completion of two credit counseling courses is required for petitioners in both Chapter 7 and Chapter 13 cases. You must consult a nonprofit credit counseling agency to arrange to take the courses. The Office of the U.S. Trustee, the federal agency that oversees the counseling requirement, sets reasonable fees for such courses at anywhere from free to $50. The courses can be taken in person or online.

Although everyone who files for bankruptcy protection has unmanageable debts, some applicants are worse off than others. Be sure to fully document your financial situation before consulting a bankruptcy attorney. If you are unemployed, a low-wage earner, disabled or elderly, you might be able to get a fee reduction.

Bankruptcy is a hard step to take, and recovering from it isn't easy. Though a successful Chapter 7 petition will discharge your debts, the bankruptcy will remain on your credit report for as long as 10 years, affecting your ability to borrow. A Chapter 13 resolution might not be as damaging, but it will require that you stick to a repayment plan for three to five years, even if the court reduces your debts.

Given the consequences, discussing a disability or your advanced years with an attorney can help. Obviously, if there are impediments to rebuilding your finances after bankruptcy, that is relevant and an attorney might be willing to reduce fees to mitigate the damage bankruptcy is certain to cause.

In most instances, bankruptcy attorneys charge a flat fee, meaning they will tell you before starting work on your case what it will cost. In Chapter 7 cases, they'll want the money up front; in Chapter 13, they often demand just a portion of the fee to start the case, and will take the remainder through the court-approved bankruptcy settlement plan.

If legal representation costs more than you can afford, you might consider representing yourself and either file the paperwork on your own or seek help from a bankruptcy petition preparer. Petition preparers, also known as typing services or paralegals, are non-lawyers who will generate the necessary court filings. Unlike lawyers, petition preparers can't offer you legal advice, nor can they guide you in deciding which type of bankruptcy to file or what property and assets to include or exclude from your filing. They primarily offer a clerical service that leaves the decision making to you.

Since many legal forms are available online, petition preparers might have little to offer, as they won't guide you through the process or offer legal representation. If you don't have internet access, they might be valuable, but you should understand their limitations before using their services.

Before deciding to handle your own bankruptcy without a lawyer, consider the consequences. The chances of running into trouble that might result in your case being dismissed are considerably greater if you don't use an attorney.

Filing for bankruptcy will cost you even though you're in no position to pay. Yes, in perhaps the ultimate Catch-22, you'll need money to let your creditors know you don't have any.

Though covering the cost of bankruptcy might not be the largest problem on your agenda, it is an issue. Most bankruptcy petitions require some form of legal help, and the more complicated the filing, the more help you'll need. That means hiring a lawyer, and unless you know one who works for free, it will require money.

Legal fees are the biggest headache, but not the only one. You'll also have to pay court costs and a fee for mandatory credit counseling. The combined bill could run into the thousands of dollars, so before you load up your briefcase and head for the courthouse, you need to know what you need to do, how much it will cost and where you'll find the money.
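As a rough illustration of how those pieces add up, the figures below combine the ABI attorney-fee averages quoted earlier with two assumed amounts that are not from this article: court filing fees of roughly $300 to $350 (they change periodically, so check the current federal court fee schedule) and the maximum $50 per credit counseling course.

```python
# Rough out-of-pocket estimates; filing fees are assumptions, not article figures.
ch7_attorney, ch13_attorney = 1072, 2564   # ABI study averages cited above
ch7_filing, ch13_filing     = 335, 310     # assumed court filing fees
counseling                  = 2 * 50       # two required courses at up to $50 each

print(f"Chapter 7:  ~${ch7_attorney + ch7_filing + counseling:,}")    # ~$1,507
print(f"Chapter 13: ~${ch13_attorney + ch13_filing + counseling:,}")  # ~$2,974
```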

Originally posted here:

The Cost Of Bankruptcy - Debt.org