A New Bank of America Patent Describes Blockchain-Powered ATMs

The idea is part of an ironic trend: banks want to leverage a technology that was invented to take power away from financial institutions.

Blockchain-Powered ATMs

Bank of America customers could soon use automatic teller machines, better known as ATMs, that are powered by a blockchain ledger. The company filed a new U.S. patent application for the system, which was published online Tuesday.

According to the patent, Bank of America could use blockchain tech to verify and track ATM cash transactions and improve ATM performance. The move highlights the tension and irony of the contemporary blockchain ecosystem. BofA, fearful of being left behind by new financial innovation, now holds more than 50 patents for blockchain technologies, according to Coindesk. That could be a sign that it’s positioning itself for a serious push into the same decentralized technology that was originally designed to take power away from big banks.

Decentralized Hub

The idea behind the patent, which was first filed in June 2017, is that tracking cash transactions on the blockchain could help plan and predict which ATMs need cash when — the bank would be able to cut down on costs related to transporting the physical cash, according to Coindesk.
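
The filing is described only at a high level, but a minimal sketch helps make the cash-forecasting idea concrete. Everything below (field names, the capacity figure, the averaging rule) is hypothetical and not taken from the patent:

```python
from collections import defaultdict
from datetime import date

# Hypothetical ledger entries for ATM cash withdrawals; the patent's real
# data model is not public at this level of detail.
ledger = [
    {"atm_id": "ATM-001", "day": date(2018, 12, 20), "withdrawn": 4200},
    {"atm_id": "ATM-001", "day": date(2018, 12, 21), "withdrawn": 5100},
    {"atm_id": "ATM-002", "day": date(2018, 12, 20), "withdrawn": 900},
]

def days_of_cash_left(entries, capacity=20000):
    """Naive forecast: average daily withdrawals per ATM -> days until empty."""
    totals, days = defaultdict(int), defaultdict(set)
    for entry in entries:
        totals[entry["atm_id"]] += entry["withdrawn"]
        days[entry["atm_id"]].add(entry["day"])
    return {atm: capacity / (totals[atm] / len(days[atm])) for atm in totals}

print(days_of_cash_left(ledger))  # e.g. ATM-001 ~4.3 days, ATM-002 ~22.2 days
```

A scheduler built on a shared ledger like this could, in principle, route cash deliveries only to the machines about to run dry, which is the cost-cutting idea the patent describes.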

The patent also covers new “ATM as a service” platforms, which it describes as a way to boost engagement through services like video calls and integration into local marketing campaigns or pop-up stores. These features are seemingly intended to make cash withdrawals fun and trendy.

Full Circle

Two months after the blockchain ATM patent was filed, Bank of America and Wells Fargo cut down on cash-related costs by banning third-party cash deposits. This move away from tangible money recently resurfaced and drew ire from would-be gift-givers over the holiday season.

If this blockchain patent is any indication, we can expect more changes in the future that make things smoother for BofA — even if we get fewer options as a result.

READ MORE: Bank of America Files for Blockchain ‘ATM as a Service’ Patent [Coindesk]

More on blockchain finance: Bank of America Wins Patent for Crypto Exchange System


An Ultrafast Camera Filmed Electrons Interacting With Light Energy

A researcher stands at an ultrafast electron camera that recorded motion in electrons.

Now You See Me

We just got a bold new look at what happens when light interacts with electrons.

When light is converted into electricity, as in solar cells, much of the energy is lost rather than converted. When light hits an object it stimulates electrons in a process that’s over in only a few femtoseconds — a femtosecond is one quadrillionth of a second. Better understanding the process could lead to new types of advanced electronic devices or improved solar cells.

Because the process occurs so fast, we can’t see it happening. Even with the help of modern technology, it had proved impossible to record, until now. Researchers at Germany’s Kiel University (CAU) used one of the world’s fastest cameras to film the motion of electrons, and the team described their work in the journal Physical Review Letters.

Light In Phases

In their experiment the researchers fired quick light pulses at graphite, chosen for its simple electronic structure, and recorded the movement of the electrons.

“Thanks to the extremely short duration of the light pulses used, we are able to film ultrafast processes live. Our investigations have shown that there is a surprising amount of stuff happening here,” explained Michael Bauer, professor of ultrafast dynamics at CAU.

Based on their film, the team described three distinct phases. First the electrons absorbed the light energy in the graphite, then the energy was distributed to other electrons, before being passed on to other atoms. In this final stage the energy was converted into heat; in short, the graphite warmed up.

A New Angle

Although this process had previously been theorized, it’s the first time it has been observed and recorded. New technological capabilities have allowed this research to be conducted on a time-scale we’ve never been able to work on before. By better understanding how electrons behave we can optimize technologies that make use of light and electricity, opening previously unexplored avenues of research.

READ MORE: One of the world’s fastest cameras films motion of electrons [EurekAlert]

More on ultrafast cameras: The World’s Fastest Camera Can “Freeze Time,” Show Beams of Light in Slo-Mo


Get to Know the Large Hadron Collider and Take a Glimpse at Its Future


LHC in VR

The Large Hadron Collider is taking a two year break to undergo vital upgrades that will empower the next phase of groundbreaking research.

Built between 1998 and 2008, the Large Hadron Collider (LHC) is both the most powerful particle accelerator and the largest machine in the world. Situated underground on the border between France and Switzerland, the LHC has been responsible for some of the most vital research in particle physics in modern history. A recent feature by The New York Times highlights the history of the record-holding project and offers a stunning virtual tour of the massive machine.

By The Numbers

In order to get a look at some of the most basic building blocks of the universe, we have to smash what’s there into even smaller bits and pieces. The LHC does this using a 17-mile electromagnetic track where magnets one hundred thousand times as strong as the Earth’s magnetic field fling particles into one another 600,000 times a second. It’s a feat of engineering that requires 12,000 amperes of electrical current (a typical household outlet is rated at 15 to 20 amperes).

Particle collisions within the LHC are quite common, occurring 40 million times per second. Still, very few collisions produce noteworthy results; in fact, that’s how the LHC operates. Before any particles are fired, computers predict the expected results of the collisions. As results are gathered, they are compared against these predictions, and only collisions with unexpected results are returned to researchers, saving immense amounts of data processing time. This is how data from the LHC confirmed the existence of the then-theoretical Higgs boson, which appears in only one of every 10 billion collisions.
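
That compare-against-prediction filtering can be illustrated with a toy sketch. This is not the LHC’s actual trigger software, just the general idea with made-up numbers:

```python
import random

random.seed(42)

# Toy "prediction": the expected distribution of some measured quantity
# (say, an energy deposit) for ordinary, uninteresting collisions.
EXPECTED_MEAN, EXPECTED_SPREAD = 100.0, 15.0

def is_unexpected(measurement, n_sigma=5.0):
    """Keep only events that deviate strongly from the prediction."""
    return abs(measurement - EXPECTED_MEAN) > n_sigma * EXPECTED_SPREAD

# Simulate a burst of collision measurements: almost all ordinary,
# plus a handful of rare anomalies worth recording.
events = [random.gauss(EXPECTED_MEAN, EXPECTED_SPREAD) for _ in range(1_000_000)]
events += [random.gauss(300.0, 10.0) for _ in range(3)]

kept = [e for e in events if is_unexpected(e)]
print(f"recorded {len(kept)} of {len(events)} events for further study")
```

The real experiments use far more sophisticated multi-stage triggers, but the principle is the same: discard what matches expectations and keep only what doesn’t.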

What Comes Next?

Presently, engineers are upgrading a series of smaller accelerators that are responsible for speeding up protons before they enter the main collider. The upgrades should be completed in 2021, after which the LHC will run for two more years until its next shutdown in 2024. Then new magnets will be installed, allowing even more intense collisions to take place. At that point the machine will be known as the High-Luminosity LHC, and it is expected to continue contributing to research efforts until 2035.

READ MORE: Augmented Reality: It’s Intermission for the Large Hadron Collider [The New York Times]

More on the LHC Shutdown: The Large Hadron Collider Just Shut Down


SpaceX Launches Next-Generation Air Force GPS Satellite

SpaceX launch - SpaceX Falcon 9 rocket lifts off carrying a GPS 3 satellite

Liftoff!

After a week of delays, SpaceX successfully launched a Falcon 9 rocket on Sunday morning at 8:51 a.m. EST, its 21st launch of the year.

Carried to orbit atop the Falcon 9 was GPS III SV01, an advanced GPS satellite and part of a network of next-generation satellites being installed in orbit by the U.S. Air Force. The mission, originally awarded to SpaceX in April 2016, is considered a National Security Space mission, critical to national defense, and is SpaceX’s first such mission.

Ground Control to Major Tom

The new GPS III SV01 satellite is part of a planned series of upgrades to the U.S. GPS network. Currently the Air Force maintains 31 GPS satellites; the first iteration launched between 1990 and 1997, while the most recent was launched in 2016. GPS III SV01, with its state-of-the-art technology, is the first of the next generation of satellites, with more launches planned for 2019.

GPS III SV01, nicknamed “Vespucci” in honor of Italian cartographer and explorer Amerigo Vespucci, will enable the Air Force to provide positioning, navigation, and timing information three times more accurate than the data provided by other satellites in the GPS network. The information will help everyone from soldiers in the field to those trying to navigate a new town.

Farewell and New Beginnings

Normally SpaceX attempts to land Falcon 9 first stages once they’ve separated from the rocket’s second stage. This time, the weight and high-altitude orbit of the payload meant most of the Falcon 9’s fuel would be expended during launch, leaving too little to recover the booster.

Despite not recovering the Falcon 9, SpaceX’s successful 21st launch smashed the company’s previous record of 18 launches in one year, and completing a mission deemed critical to national security is a badge of honor for the pioneering rocket company. SpaceX will take on four additional GPS III missions, all of which will be launched on Falcon 9 rockets later in 2019.

READ MORE: SpaceX Launches Super-Accurate Next-Gen GPS Satellite for US Air Force [Space]

More on SpaceX: SpaceX Smashed the Record for Commercial Space Launches This Year


Why Sea Levels Along The US East Coast Are Rising at Different Rates

Sea levels are rising faster in some areas than others, flooding coastal towns.

Water, Water Everywhere

Rising sea levels are concerning. Sea levels rising at different rates? That’s very concerning.

As glacial meltwater continues to cause rising sea levels, monitoring areas at risk of flooding will become increasingly important. In some places sea levels appear to be rising faster than in others, making it tricky to predict which coastal cities might be most vulnerable. New research published in the journal Nature supports an explanation that is, quite literally, epic.

Funky Flow

It would be logical to think that sea levels would rise more or less uniformly in similar geographical areas, along the U.S. East Coast for example. The trouble is that as sea levels rise, coastlines in these areas may be simultaneously sinking. The new study, which examined sea-level trends along the U.S. East Coast between 1900 and 2017, pins responsibility for unexpected sea level rises on a phenomenon called “post-glacial rebound.”

Post-glacial rebound is a process which began during the last ice age, when massive ice sheets covered inland areas, tightly compressing the Earth’s land and causing it to sink downward. Because of this, the outlying areas along the coast were pushed upwards over time, like a massive seesaw.

By comparing tide gauge measurements, GPS data, and fossil records from coastal areas, the research team behind the study was able to create a more accurate model of historic rates of sea level rise.

Chris Piecuch, lead author of the study, concluded: “Post-glacial rebound is definitely the most important process causing spatial differences in sea level rise on the U.S. East Coast over the last century. And since that process plays out over millennia, we’re confident projecting its influence centuries into the future.”

One Drop At A Time

The forces involved in post-glacial rebound play out on an epic time scale impossible for one person to witness, so don’t expect the ground to drop out from underneath you. However, as sea levels continue to rise and the land continues to decompress, some coastal areas will become increasingly prone to coastal flooding. Thankfully, Piecuch and team may have provided a key to determining which areas are most at risk.

READ MORE: Why is sea level rising faster in some places along the US East Coast than others? [ScienceDaily]

More on Sea Level Rise: The World’s Coasts May Be Drowning Under Rising Seas Faster Than We Thought


Startup Claims Its Underwear Stay Odor-Free Through Weeks of Wear

Startup Organic Basics claims its silver-coated underwear remain odor-free after weeks of wear, but several testers disagree.

Under Where?

Want to wear the same pair of underwear for weeks at a time? Go right ahead.

A Danish startup called Organic Basics claims its underwear remain fresh through weeks of wear, eliminating the need for frequent washing. And this could be a boon for the environment — if it’s actually true.

Silver Skivvies

When your sweat meets your clothing, it creates an ideal environment for bacteria. Those bacteria are what actually produce the foul-smelling odor. Silver is antimicrobial, meaning it kills bacteria and other microorganisms.

By treating their underwear with Polygiene, a product that uses silver chloride to control smells, Organic Basics says it can prevent the growth of 99.9 percent of this bacteria, which it claims prevents the underwear from smelling bad as quickly.

“It works,” CEO Mads Fibiger told Business Insider Nordic in May. “You can wear our underwear much longer before washing.”

Smell Test

Fibiger might claim the coating “works,” but not everyone agrees.

A reporter for New York magazine claimed she noticed a “less-than-fresh scent” on just the second day wearing Organic Basics’s women’s briefs, noting that she “didn’t feel comfortable pushing [her] luck with a third day of testing.” Her male colleague also tossed his Organic Basics boxer briefs in the laundry hamper after just 48 hours.

Even if the underwear did maintain the desired level of freshness, though, people might not be able to get over the mental hurdle of wearing the same undergarments for weeks at a time — just this week, Elle reporter R. Eric Thomas wrote that reading about the undies made him want to “bleach [his] eyes.”

Futuristic Fashion

Organic Basics isn’t just trying to help people avoid laundry day, though. “The traditional way of buying, wearing, washing, and throwing away overpriced underwear is…extremely harmful to the environment,” Fibiger told Business Insider.

And he’s right. Washing and drying clothing requires water and energy, so the more often you clean your underwear, the greater the garment’s impact on the environment.

Still, the environmental benefits of wearing the same pair of underwear for weeks at a time might not be enough to get even the most environmentally conscious among us to wear Organic Basics’s underwear if they don’t actually smell fine on day three and beyond.

READ MORE: A Danish Startup Invented Underwear You Can Wear for Weeks Without Washing [Business Insider Nordic]

More on sustainable fashion: These Clothes Grow With Your Child and Are a Step Towards Sustainable Fashion


Microorganisms That Eat Seaweed Can Create Biodegradable Plastic


Ocean of Opportunity

Earth’s oceans contain tens of millions of tons of plastic pollution. But a new technique that creates biodegradable plastics out of seaweed could finally give the oceans relief.

Bioplastics are plastics manufactured from biomass sources instead of fossil fuels. Many degrade far more quickly than traditional plastics, but creating them typically requires fertile soil and fresh water, which aren’t available everywhere.

Now, researchers have found a way to create a bioplastic using seaweed, a far more accessible resource — a promising new approach that could both reduce strain on the plastic-clogged oceans and reduce the Earth’s dependence on fossil fuels.

Scarfing Seaweed

Researchers from Tel Aviv University describe their new bioplastic production process in a study published recently in the journal Bioresource Technology.

Certain microorganisms naturally produce a polymer called polyhydroxyalkanoate (PHA). Some factories already create plastics from PHA, but they do so using microorganisms that feed on plants that grow on land using fresh water.

Through their experiments, the team found it was possible to derive PHA from Haloferax mediterranei, a microorganism that feeds on seaweed.

“We have proved it is possible to produce bioplastic completely based on marine resources in a process that is friendly both to the environment and to its residents,” researcher Alexander Golberg said in a press release.

Plastic Problem

Every year, 8 million metric tons of plastic finds its way into the Earth’s oceans, and researchers estimate that plastic will outweigh fish by 2050. That plastic is killing marine life, destroying coral reefs, and even affecting human health.

Efforts are already underway to remove plastic from the ocean, and several governments are banning certain plastics altogether. But plastic pollution is a huge problem that will require a multi-pronged solution — and a biodegradable plastic could be one of those prongs.

READ MORE: Sustainable “Plastics” Are on the Horizon [Tel Aviv University]

More on plastic pollution: The EU Just Voted to Completely Ban Single-Use Plastics


Apollo Astronaut: It Would Be “Stupid” to Send People to Mars

According to Apollo 8 astronaut Bill Anders, crewed missions to Mars and hyped-up chatter of settling the planet are all a waste of time and money.

Fool’s Errand

According to one of the astronauts aboard NASA’s 1968 Apollo 8 mission, it would be “stupid” and “almost ridiculous” to pursue a crewed mission to Mars.

“What’s the imperative? What’s pushing us to go to Mars? I don’t think the public is that interested,” said Bill Anders, who orbited the Moon before returning to Earth 50 years ago, in a new documentary by BBC Radio 5 Live.

Anders argued that there are plenty of things NASA could be doing that would be a better use of time and money, like the unmanned InSight lander that recently touched down to study Mars’ interior. The comments, by one of the most accomplished space explorers in human history, illustrate a deep and public philosophical rift over whether the future of spaceflight will be characterized by splashy crewed missions or less expensive automated ones.

Mars Bars

The crux of Anders’ argument on the BBC boils down to his perception that NASA is fueling a vicious cycle of highly-publicized missions that bolster its image, improve its funding, and attract top talent so that it can launch more highly-publicized missions. Sending an astronaut to Mars would dominate the news cycle, but wouldn’t push the frontier of practical scientific knowledge, Anders argued — a mismatch, essentially, between the priorities of NASA and those of the public.

That skepticism places Anders among the ranks of other high-profile critics of NASA, Elon Musk’s SpaceX, and Jeff Bezos’ Blue Origin — all three of which have set their sights on the Red Planet.

For instance, science communicator and advocate Bill Nye predicted last year that no layperson would want to settle Mars. Nye also doubled down last month to say that anyone planning on terraforming Mars must be high on drugs.

Robust Explanation

But Anders’ own Apollo 8 crewmate Frank Borman disagreed, arguing in the documentary that crewed exploration is important.

“I’m not as critical of NASA as Bill is,” Borman told BBC. “I firmly believe that we need robust exploration of our Solar System and I think man is part of that.”

However, even Borman draws the line somewhere between exploration and settlement.

“I do think there’s a lot of hype about Mars that is nonsense,” Borman said. “Musk and Bezos, they’re talking about putting colonies on Mars. That’s nonsense.”

READ MORE: Sending astronauts to Mars would be stupid, astronaut says [BBC]

More on reaching Mars: Four Legal Challenges to Resolve Before Settling on Mars


Elon Musk Tweets Image of SpaceX’s Stainless Steel Starship


Big Picture

Christmas came early for Elon Musk’s Twitter followers.

The SpaceX CEO took to the social media platform on Christmas Eve to share a new image of a prototype version of the Starship spacecraft at the company’s Texas testing facilities.

The massive rocket with the ever-changing name — it was previously known as the “Mars Colonial Transporter,” the “Interplanetary Transport System,” and the “Big Falcon Rocket” — could one day ferry passengers to Mars. And Musk’s new photo reveals that the key to making that possible might be a material you’ve got in your kitchen right now.

Stainless Steel Starship

The new Starship is made out of stainless steel, according to the tweet, a material that handles extreme heat very well — polish it up, and its mirror-like finish will reflect thermal energy far better than the carbon-based materials used for many rockets.

That could help Starship withstand the strain of long-term spaceflight, but stainless steel is heavier than carbon fiber, and keeping weight down is extremely important in space travel.

From an impromptu Twitter Q&A following the reveal of the Starship prototype, we learned that by exposing the stainless steel to extremely cold temperatures — that is, giving it a cryogenic treatment — SpaceX expects to offset the material’s weight disadvantage with superior strength. According to a Musk tweet, “Usable strength/weight of full hard stainless at cryo is slightly better than carbon fiber, room temp is worse, high temp is vastly better.”

Stainless Steel Starship pic.twitter.com/rRoiEKKrYc

— Elon Musk (@elonmusk) December 24, 2018

Countdown to Liftoff

Perhaps the most exciting Starship revelation of the past week, though, is Musk’s assertion that the prototype could be ready for liftoff in just a few months’ time.

On December 22, he tweeted that he would “do a full technical presentation of Starship” after the prototype’s test flight, which could happen in March or April. If all goes well with that test flight, SpaceX could be one step closer to achieving Musk’s vision of making humanity a multiplanetary species.

READ MORE: SpaceX CEO Elon Musk: Starship Prototype to Have 3 Raptors and “Mirror Finish” [Teslarati]

More on Starship: Elon Musk Just Changed the BFR’s Name for a Fourth Time


Cosmonaut: Hole Was Drilled From Inside Space Station

A Russian cosmonaut who helped investigate damage to the ISS says the hole was apparently drilled from the inside. Here's what it means.

Hole Story

Back in August, the crew on the International Space Station (ISS) repaired a small leak that was allowing air to escape the orbiting outpost — damage that Russia speculated could have been the result of intentional sabotage.

Now, the Associated Press reports that a Russian cosmonaut who helped investigate the damage during a spacewalk earlier this month has said that whoever drilled the hole appeared to do so from the interior of the station — though he’s far from convinced the drilling was an act of sabotage.

The controversy surrounding this hole has revealed that international tensions on Earth can reach outer space, which has long been a haven for international scientific cooperation — but the cosmonaut’s comments are a heartening example of how public figures in the research community can push back against the use of space as a proxy battleground for terrestrial conflicts.

Space Talk

On Monday, the AP reported, cosmonaut Sergei Prokopyev — who took part in the spacewalk to investigate the damage — said at a press conference after returning to Earth that the hole appeared to originate from inside the capsule.

But Prokopyev also “scoffed” at the idea that an ISS crew member drilled the hole, according to the AP.

“You shouldn’t think so badly of our crew,” he said. He also cautioned that “it’s up to the investigative organs to judge when that hole was made” — an apparent suggestion that the hole could have originated in a manufacturing facility on Earth.

Fresh Air

When the leak was first discovered, Russia’s space agency said the damage was probably caused by a micrometeorite. But in a startling about-face, the director of the agency suggested days later that the leak was the result of intentional sabotage — even specifically mentioning evidence of “several attempts at drilling” with a “wavering hand.”

Prokopyev, though, is behaving like a scientist: reporting the facts, but withholding judgment until after the collection of all data — a comforting approach during an era characterized by knee-jerk reactions.

READ MORE: Russia: Hole Drilled From Inside Int’l Space Station Capsule [The Associated Press]

More on the ISS hole: Someone Drilled a Hole in the ISS. Was It a Mistake or Sabotage?


6 Reasons Why I Gave Up On Libertarianism Return Of Kings

These days, libertarianism tends to be quite discredited. It is now associated with the goofy candidature of Gary Johnson, having a rather narrow range of issues (legalize weed! less taxes!), cucking one’s way to politics through sweeping all the embarrassing problems under the carpet, then surrendering to liberal virtue-signaling and endorsing anti-white diversity.

Now, everyone on the Alt-Right, manosphere and so on is laughing at those whose adhesion to a bunch of abstract premises leads them to endorse globalist capital, and now that Trump officially heads the State, we’d be better off if some private companies were nationalized than left to shadowy overlords.

To Americans, libertarianism has been a constant background presence. Its main icons, be they Ayn Rand, Murray Rothbard or Friedrich Hayek, were always read and discussed here and there, and never fell into oblivion although they barely had media attention. The academic and political standing of libertarianism may be marginal, but it has always been granted small platforms and resurrected from time to time in the public landscape, one of the most conspicuous examples being the Tea Party demonstrations.

To a frog like yours truly (Kek being now praised by thousands of well-meaning memers, I can embrace the frog moniker gladly), libertarianism does not have the same standing at all. In French universities, libertarian thinkers are barely discussed, even in classes that are supposed to tackle economics: for one hour spent talking about Hayek, Keynes easily enjoys ten, and the same goes when comparing the attention given to, respectively, Adam Smith and Karl Marx.

On a wider perspective, a lot of the contemporary French identity is built on Jacobinism, i.e. on crushing underfoot organic regional sociability in the name of a bureaucratized and Masonic republic. The artificial construction of France is exactly the kind of endeavour libertarianism loathes. No wonder the public choice school, for example, is barely studied here: pompous leftist teachers and mediocre fonctionnaires are too busy gushing about themselves, sometimes hiding the emptiness of their lives behind a ridiculous epic narrative that turns social achievements into heroic feats, to give a fair hearing to pertinent criticism.

When I found out about libertarianism, I was already sick of the dominant fifty shades of leftism political culture. The gloomy mediocrity of small bureaucrats, including most school teachers, combined with their petty political righteousness, always repelled me. Thus, the discovery of laissez-faire advocates felt like stumbling on an entirely new scene of thought, and my initial feeling was vindicated when I found out about the naturalism often associated with it, something refreshing and intuitively more satisfying than the mainstream culture-obsessed, biology-denying view.

Libertarianism looked like it could solve everything. More entrepreneurship, more rights to those who actually create wealth and live through the good values of personal responsibility and work ethic, less parasites, be they bureaucrats or immigrants, no more repressive speech laws. Coincidentally, a new translation of Ayn Rand’s Atlas Shrugged was published at this time: I devoured it, loving the sense of life, the heroism, the epic, the generally great and achieving ethos contained in it. Aren’t John Galt and Hank Rearden more appealing than any corrupt politician or beta bureaucrat that pretends to be altruistic while backstabbing his own colleagues and parasitizing the country?

Now, although I still support small-scale entrepreneurship wholeheartedly, I would never defend naked libertarianism, and here is why.

Part of the Rothschild family, where nepotism and consanguinity keep the money in

Unity makes strength, and trust is much easier to cultivate in a small group where everyone truly belongs than in an anonymous great society. Some ethnic groups, especially whites, tend to be instinctively individualistic, with a lot of people favouring personal liberty over belonging, while others, especially Jews, tend to favor extended family business and nepotism.

On a short-term basis, mobile individuals can do better than those who are bound to many social obligations. In the long run, however, extended families manage to create an environment of trust and concentrate capital. And whereas individuals may start cheating each other or scattering their wealth away, thanks to having no proper economic network, families and tribes will be able to invest heavily in some of their members and keep their wealth inside. This has been true for Jewish families, whether their members work as moneylenders or diamond dealers, for Asians investing in new restaurants or any other business project of their own, and for North Africans taking over pubs and small shops in France.

The latter example is especially telling. White bartenders, butchers, grocers and the like have been chased out of French suburbs by daily North African and black violence. No one helped them, everyone being afraid of getting harassed as well and busy with their own business. (Yep, just like what happened and still happens in Rotherham.) As a result, these isolated, unprotected shop-owners sold their outlets for a cheap price and fled. North Africans always covered each other’s violence and replied in groups to any hurdle, whereas whites lowered their heads and hoped not to be next on the list.

Atlas Shrugged was wrong. Loners get wrecked by groups. Packs of hyenas corner and eat the lone dog.

Libertarianism is not good for individuals in the long run: it turns them into asocial weaklings, soon to be legally enslaved by global companies or beaten by groups, be they made of nepotistic family members or thugs.

How the middle classes end up after jobs have been sent overseas and wages lowered

People often believe, thanks to Leftist media and cuckservative posturing, that libertarians are big bosses. This is mostly, if not entirely, false. Most libertarians are middle class guys who want more opportunities, less taxation, and believe that libertarianism will help them turn into successful entrepreneurs. They may be right in very specific circumstances: during the 2000s, small companies overturned the market of electronics, thus benefiting both their independent founders and society as a whole; but ultimately, they got bought by giants like Apple and Google, who are much better off when backed by a corrupt State than on a truly free market.

Libertarianism is a fake alternative, just as impossible to realize as communism: far from putting everyone in their place, it leaves ample room for mafias, monopolies, and unemployment caused by mechanization and global competition. If one wants the middle classes to survive, one must protect the employment and relative independence of their members, bankers and billionaires be damned.

Spontaneous order helped by a weak government. I hope they at least smoke weed.

A good feature of libertarianism is that it usually goes along with a positive stance on biology and human nature, in contrast with the “everything is cultural and ought to be deconstructed” left. However, this stance often leads to an exaggerated optimism about human nature. In a society of laissez-faire, the libertarians say, people flourish and order appears spontaneously.

Well, this is plainly false. As all of the great religions say, after what Christians call the Fall, man is a sinner. If you let children flourish without moral standards and role models, they become spoiled, entitled, manipulative, emotionally fragile and deprived of self-control. If you let women flourish without suspicion, you give free rein to their propensities for hypergamy, hysteria, self-entitlement and everything we can witness in them today. If you let men do as they please, you let them become greedy, envious, and turn into bullies. As a Muslim proverb says, people must be flogged to enter into paradise, and as Aristotle put forth, virtues are trained dispositions, no matter the magnitude of innate talents and propensities.

Michelle The Man Obama and Lying Crooked at a Democrat meeting

When laissez-faire rules, some will succeed on the market more than others, due to differences in investment, work, and natural abilities. Some will succeed enough to be able to buy someone else’s business: this is the natural consequence of differences in wealth and of greed. When corrupt politicians enter the game, things become worse, as they will usually help some large business owners to shield their position against competitors, at the expense of most people, who then lose their independence and live off a wage.

In the end, what we get is a handful of very wealthy individuals who have managed to concentrate most capital and power levers into their hands, and a big crowd of low-wage employees ready to cut each other’s throats for a small promotion, and females waiting in line to get notched by the one per cent while finding the other ninety-nine per cent boring.

Censorship by massive social pressure, monopoly over the institutions and crybullying is perfectly legal. What could go wrong?

On the surface, libertarianism looks good here, because it protects the individual’s rights against left-hailing Statism and cuts off the welfare programs that have attracted dozens of millions of immigrants. Beneath, however, things are quite dire. Libertarianism enshrines the leftists’ right to the free speech they abuse, allows the pressure tactics used by radicals, and lets freethinking individuals get singled out by SJWs as long as these do not resort to overt stealing or overt physical violence. As for the immigrants, libertarianism tends to oppose the very notion of non-private boundaries, thus leaving local cultures and identities defenseless against both greedy capitalists and subproletarian masses.

Supporting an ideology that allows the leftists to destroy society more or less legally equates to cucking, plain and simple. Desiring an ephemeral cohabitation with rabid ideological warriors is stupid. We should aim at a lasting victory, not at pretending to constrain them through useless means.

Am I the only one to find that Gary Johnson looks like a snail (Spongebob notwithstanding)?

In 2013, one of the rare French libertarian academics, Jean-Louis Caccomo, was forced into a mental ward at the request of his university president. He then spent more than a year being drugged. Mr. Caccomo had no real psychological problem: his confinement was part of a vicious strategy of pathologization and career-destruction that was already used by the Soviets. French libertarians could have widely denounced the abuse. Nonetheless, most of them freaked out, and almost no one dared to actually defend him publicly.

Why should rational egoists team up and risk their careers to defend one of their own, after all? They would rather posture at confidential social events, rail at organic solidarity and protectionism, or troll the shit out of individuals of their own social milieu because “I’ve got the right to mock X, it’s my right to free speech!” The few libertarian people I knew firsthand, the few events I have witnessed in that small milieu, were enough to give me serious doubts about libertarianism: how can a good political ideology breed such an unhealthy mindset?

Political ideologies are tools. They are not ends in themselves. All forms of government aren’t fit for any people or any era. Political actors must know at least the most important ones to get some inspiration, but ultimately, said actors win on the ground, not in philosophical debates.

Individualism, mindless consumerism, careerism, and hedonism are part of the problem. Individual rights granted regardless of one’s abilities, situation, and identity are a disaster. The time has come to overcome modernity, not stall in one of its false alternatives. The merchant caste must be regulated, though neither micromanaged nor hampered by a parasitic bureaucracy, nor denied its members’ right to small-scale independence. Individual rights must be conditional, boundaries must be restored, minority identities based on anti-white male resentment must be crushed so they cannot devour sociability from the inside again, and the pater familias must assert himself anew.

Long live the State and protectionism as long as they defend the backbone of society and healthy relationships between the sexes, and no quarter for those who think they have a right to wage grievance-mongering against us, no matter whether they want to use the State or private companies. In the end, the socialism-libertarianism dichotomy is quite secondary.

Read Next: Sugar Baby Culture In The US Is Creating A Marketplace for Prostitution


John McAfee undeterred by crashing market, says Bitcoin will …


Cryptocurrency prices have been going downhill over the past few months, but John McAfee is undeterred. He, like many other crypto enthusiasts, is confident that the market will break the record levels of 2017.

The market scenario has been bleak enough to make even bullish players feel uncertain, but the cybersecurity expert still holds on to his bet that Bitcoin [BTC] will be valued at $1 million by 2020, according to a Newsweek report. In fact, he said he would eat his own pe**s if it doesn’t happen.

McAfee said, “I absolutely stand by the million-dollar prediction. It is still two and a half years away, in which two things will happen: bitcoin will continue to grow, and the U.S. dollar and other fiat currencies will devalue.”

See also: US SEC clamps down on cryptocurrency airdrops and bounty campaigns

McAfee is not alone in being so radically optimistic about the fintech market. “The overall trend in the value of bitcoin is still heading north,” Danial Daychopan, CEO of a bitcoin platform called Plutus, told Newsweek.

“Bitcoin is a good unit of account, and unlike gold, it’s the perfect unit of exchange. This is a fundamental fact. We’re going to see overall growth of the bitcoin price by the end of the year: I predict the value of bitcoin may reach 30,000, the year after 50,000,” Daychopan added.

See also: Cryptocurrency scams on the rise, warns UK’s financial regulator FCA

When investment into cryptocurrency swelled in 2017, and Bitcoin’s value skyrocketed, many experts called it a bubble. However, bullish players didn’t give a hoot.

“Cryptocurrencies have gone through many bubbles already in their short life cycle,” said Liina Laas-Billson, who has worked in the fintech field for half a decade.

“The previous bitcoin boom resulted in the price crashing 80 percent from an all-time high of over $1,000 per bitcoin. It took three years to climb back up to the previous price rate,” she said.

Will you place your bets on John McAfee’s prediction or be drowned in FUD?

Image via Shutterstock

Join our Telegram group


John McAfee on Bitcoin: You Can’t Stop It, Reiterates $1 Million by 2020 Price Target

Cybersecurity pioneer John McAfee, in a recent interview, reiterated the famous prediction he made on 29 November 2017 on Twitter -- namely that one bitcoin would be worth $1 million by 2020 -- and explained how he arrived at that figure.

The first question was: "Why do you think Bitcoin will go to $1 million by 2020?"

McAfee answered: "There are all kinds of ways to do it. The most common way is to take the number of users of Bitcoin... because as Bitcoin expands its user base, both people who hold the coin and people who accept the coin for payment... It's sky-rocketing... It's not going to disappear. It can't possibly go to zero. It can only grow as the user base grows. So, it's impossible... Listen, it's Pandora's Box... The box has been opened...

Digital currencies will replace fiat currencies. Why? I have a wallet on my phone which is an entire bank... Lending money, sending wires, I don't have to go to the bank. It takes me 10 seconds to do a wire... Do you think that is going to go to zero and we're going to go back to this incredibly dull and bizarre way of using money with banks?"

The next question was why McAfee was not worried about the effect of regulations on Bitcoin.

McAfee answered: "Congress can make all the laws it wants. Obviously, they can outlaw Bitcoin. You are assuming there is a possibility of enforcing those laws... You cannot stop a distributed system that is worldwide. There is no law that you can make... If you had one person per citizen in the country... You still couldn't do it because the enforcer would have to sleep at some point... When exchanges were centralized, you could shut down an exchange. You can't do it anymore. They have now decentralized the exchanges... How will you enforce these magical laws that Congress is going to make? Jamie Dimon has no power in this world..."

The interviewer then asked what if the government tries to make it difficult for people to spend their bitcoin.

McAfee responded: "How are you going to make it difficult? What are you going to do to make it difficult? ... The banks are not involved. The banks have nothing to do with it... The misconception is that legislation can have teeth... You can't stop Bitcoin... Technologically impossible!"

Next, McAfee was asked if he had come up with the $1 million price target based on a mathematical formula or if he had come up with the figure based on some assumptions about its usage.

He replied: "I looked at it in two ways... [First] I looked at the number of users, the user growth... I came up maybe with three and seven million... [Second] I came at it the mining way, which is I believe a lot more valid... The last bitcoin to be mined... It is going to cost four to five billion dollars to mine that last coin and miners are going to do it. The value of that bitcoin has got to be more than the work put in to mine it... So if it cost even one billion dollars, why don't you work backwards from there and find out what the end of 2020 it is going to be? ... $5 million it is probably the thing... So, $1 million is way outside the lower range, so I am very safe... And it is not just Bitcoin... There are 2000 other coins... You cannot stop them. You cannot change them. You cannot control them. Accept them and try to fix your life to get in line with reality... In the 1930s we made it a law that you couldn't drink alcohol. We consumed more alcohol per capita during that period than any period in American history..."

Although quite a few famous Bitcoin bulls have given what seem like quite outlandish figures as their price targets, such as Tim Draper's $250,000 price target by 2022, McAfee's is by far the highest and the period of time -- only two years -- to reach that target is the shortest we have come across this year.


The End of Moores Law Rodney Brooks

I have been working on an upcoming post about megatrends and how they drive tech. I had included the end of Moore’s Law to illustrate how the end of a megatrend might also have a big influence on tech, but that section got away from me, becoming much larger than the sections on each individual current megatrend. So I decided to break it out into a separate post and publish it first. Here it is.

Moore’s Law, concerning what we put on silicon wafers, is over after a solid fifty year run that completely reshaped our world. But that end unleashes lots of new opportunities.

Moore, Gordon E., “Cramming more components onto integrated circuits,” Electronics, Vol. 32, No. 8, April 19, 1965.

Electronics was a trade journal that published monthly, mostly, from 1930 to 1995. Gordon Moore’s four and a half page contribution in 1965 was perhaps its most influential article ever. That article not only articulated the beginnings, and it was the very beginnings, of a trend, but the existence of that articulation became a goal/law that has run the silicon based circuit industry (which is the basis of every digital device in our world) for fifty years. Moore was a Cal Tech PhD, cofounder in 1957 of Fairchild Semiconductor, and head of its research and development laboratory from 1959. Fairchild had been founded to make transistors from silicon at a time when they were usually made from much slower germanium.

One can find many files on the Web that claim to be copies of the original paper, but I have noticed that some of them have the graphs redrawn and that they are sometimes slightly different from the ones that I have always taken to be the originals. Below I reproduce two figures from the original that as far as I can tell have only been copied from an original paper version of the magazine, with no manual/human cleanup.

The first one that I reproduce here is the money shot for the origin of Moore’s Law. There was however an equally important earlier graph in the paper which was predictive of the future yield over time of functional circuits that could be made from silicon. It had less actual data than this one, and as we’ll see, that is really saying something.

This graph is about the number of components on an integrated circuit. An integrated circuit is made through a process that is like printing. Light is projected onto a thin wafer of silicon in a number of different patterns, while different gases fill the chamber in which it is held. The different gases cause different light activated chemical processes to happen on the surface of the wafer, sometimes depositing some types of material, and sometimes etching material away. With precise masks to pattern the light, and precise control over temperature and duration of exposures, a physical two dimensional electronic circuit can be printed. The circuit has transistors, resistors, and other components. Lots of them might be made on a single wafer at once, just as lots of letters are printed on a single page at once. The yield is how many of those circuits are functional: small alignment or timing errors in production can screw up some of the circuits in any given print. Then the silicon wafer is cut up into pieces, each containing one of the circuits, and each is put inside its own plastic package with little legs sticking out as the connectors; if you have looked at a circuit board made in the last forty years you have seen it populated with lots of integrated circuits.
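
Brooks doesn’t give a yield formula here, but a standard first-order way to model it is a Poisson defect model: if defects land randomly on the wafer with density D per unit area, the probability that a circuit of area A is defect-free is exp(-D*A). A short sketch with made-up numbers shows how quickly yield falls as circuits get bigger or the process gets dirtier:

```python
import math

def poisson_yield(die_area_cm2, defect_density_per_cm2):
    """Fraction of printed circuits expected to be defect-free."""
    return math.exp(-defect_density_per_cm2 * die_area_cm2)

# Illustrative numbers only: a 1 cm^2 circuit at two process cleanliness levels.
for defects in (0.5, 2.0):
    print(f"D = {defects}/cm^2 -> yield = {poisson_yield(1.0, defects):.1%}")
```

The same relationship explains why cleaner gases and more accurate alignment matter so much: every improvement in the process lets you print either bigger circuits or more of them at an acceptable yield.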

The number of components in a single integrated circuit is important. Since the circuit is printed it involves no manual labor, unlike earlier electronics where every single component had to be placed and attached by hand. Now a complex circuit which involves multiple integrated circuits only requires hand construction (later this too was largely automated) to connect up a much smaller number of components. And as long as one has a process which gets good yield, it takes constant time to build a single integrated circuit, regardless of how many components are in it. That means fewer total integrated circuits that need to be connected by hand or machine. So, as Moore’s paper’s title references, cramming more components into a single integrated circuit is a really good idea.

The graph plots the logarithm base two of the number of components in an integrated circuit on the vertical axis against calendar years on the horizontal axis. Every notch upwards on the left doubles the number of components. So while 2^3 means 8 components, 2^13 means 8,192 components. That is a thousand fold increase from 1962 to 1972.

There are two important things to note here.

The first is that he is talking about components on an integrated circuit, not just the number of transistors. Generally there are many more components than transistors, though the ratio did drop over time as different fundamental sorts of transistors were used. But in later years Moore’s Law was often turned into purely a count of transistors.

The other thing is that there are only four real data points here in this graph which he published in 1965. In 1959 the number of components is 2^0 = 1, i.e., that is not about an integrated circuit at all, just about single circuit elements; integrated circuits had not yet been invented. So this is a null data point. Then he plots four actual data points, which we assume were taken from what Fairchild could produce, for 1962, 1963, 1964, and 1965, having 8, 16, 32, and 64 components. That is a doubling every year. It is an exponential increase in the true sense of exponential.
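
Those four data points and the doubling rule are enough to reproduce the extrapolation in a few lines: carried forward from the 8 components of 1962, a doubling every year reaches 8,192 components in 1972, the thousand-fold increase noted above.

```python
# Project Moore's 1962-1965 data points (8, 16, 32, 64 components,
# doubling every year) forward to 1972.
base_year, base_components = 1962, 8

for year in range(1962, 1973):
    components = base_components * 2 ** (year - base_year)
    print(year, components)

# 1972 -> 8192 components, i.e. 1024x the 1962 figure: a thousand-fold increase.
```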

What is the mechanism for this, how can this work? It works because it is in the digital domain, the domain of yes or no, the domain of 0 or 1.

In the last half page of the four and a half page article Moore explains the limitations of his prediction, saying that for some things, like energy storage, we will not see his predicted trend. Energy takes up a certain number of atoms and their electrons to store a given amount, so you can not just arbitrarily change the number of atoms and still store the same amount of energy. Likewise if you have a half gallon milk container you can not put a gallon of milk in it.

But the fundamental digital abstraction is yes or no. A circuit element in an integrated circuit just needs to know whether a previous element said yes or no, whether there is a voltage or current there or not. In the design phase one decides above how many volts or amps, or whatever, means yes, and below how many means no. And there needs to be a good separation between those numbers, a significant no man’s land compared to the maximum and minimum possible. But, the magnitudes do not matter.
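
A toy sketch of that design decision: choose a ceiling for “no” and a floor for “yes,” keep a no man’s land between them, and the absolute magnitudes stop mattering. The threshold values here are made up rather than taken from any real logic family:

```python
def read_bit(voltage, v_low_max=0.8, v_high_min=2.0):
    """Interpret an input voltage as a digital no/yes, with a guard band between."""
    if voltage <= v_low_max:
        return 0      # definitely "no"
    if voltage >= v_high_min:
        return 1      # definitely "yes"
    return None       # in the no man's land: an invalid, undefined level

# Halving every level still works, as long as the thresholds shrink with it.
print(read_bit(0.3), read_bit(3.1), read_bit(1.4))
```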

I like to think of it like piles of sand. Is there a pile of sand on the table or not? We might have a convention about how big a typical pile of sand is. But we can make it work if we halve the normal size of a pile of sand. We can still answer whether or not there is a pile of sand there using just half as many grains of sand in a pile.

And then we can halve the number again. And the digital abstraction of yes or no still works. And we can halve it again, and it still works. And again, and again, and again.

This is what drives Moore’s Law, which in its original form said that we could expect to double the number of components on an integrated circuit every year for 10 years, from 1965 to 1975. That held up!

Variations of Moore’s Law followed; they were all about doubling, but sometimes doubling different things, and usually with slightly longer time constants for the doubling. The most popular versions were doubling of the number of transistors, doubling of the switching speed of those transistors (so a computer could run twice as fast), doubling of the amount of memory on a single chip, and doubling of the secondary memory of a computer, originally on mechanically spinning disks, but for the last five years in solid state flash memory. And there were many others.

Let’s get back to Moore’s original law for a moment. The components on an integrated circuit are laid out on a two dimensional wafer of silicon. So to double the number of components for the same amount of silicon you need to double the number of components per unit area. That means that the size of a component, in each linear dimension of the wafer, needs to go down by a factor of the square root of 2. In turn, that means that Moore was seeing the linear dimension of each component go down to about 71 percent of what it was in a year, year over year.

But why was it limited to just a measly factor of two per year? Given the pile of sand analogy from above, why not just go to a quarter of the size of a pile of sand each year, or one sixteenth? It gets back to the yield one gets, the number of working integrated circuits, as you reduce the component size (most commonly called feature size). As the feature size gets smaller, the alignment of the projected patterns of light for each step of the process needs to get more accurate. Since the linear dimension shrinks by a factor of the square root of 2, approximately 1.41, the alignment needs to get better by about 41 percent as you halve the feature size. And because of impurities in the materials that are printed on the circuit, the material from the gasses that are circulating and that are activated by light, the gas needs to get more pure, so that there are fewer bad atoms in each component, now half the area of before. Implicit in Moore’s Law, in its original form, was the idea that we could expect the production equipment to get better by about 41 percent per year, for 10 years.

For various forms of Moore’s Law that came later, the time constant stretched out to 2 years, or even a little longer, for a doubling, but nevertheless the processing equipment has gotten that much better, time period over time period, again and again.

To see the magic of how this works, let’s just look at 25 doublings. The equipment has to operate with things 2^12.5 times smaller, i.e., roughly 5,793 times smaller. But we can fit 2^25 more components in a single circuit, which is 33,554,432 times more. The accuracy of our equipment has improved 5,793 times, but that has gotten a further acceleration of 5,793 on top of the original 5,793 times due to the linear to area impact. That is where the payoff of Moore’s Law has come from.
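
That arithmetic is easy to check: 25 doublings of component count mean 12.5 halvings of each linear dimension, and the linear improvement pays off twice because it applies in both dimensions of the wafer.

```python
doublings = 25

linear_shrink = 2 ** (doublings / 2)   # improvement needed in feature size
area_gain = 2 ** doublings             # components that now fit in the same circuit

print(f"linear dimensions shrink by ~{linear_shrink:,.0f}x")   # ~5,793x
print(f"component count grows by {area_gain:,}x")              # 33,554,432x
print(f"area gain is the linear gain squared: {linear_shrink**2:,.0f}")
```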

In his original paper Moore only dared project out, and only implicitly, that the equipment would get better every year for ten years. In reality, with somewhat slowing time constants, that has continued to happen for 50 years.

Now it is coming to an end. But not because the accuracy of the equipment needed to give good yields has stopped improving. No. Rather it is because those piles of sand we referred to above have gotten so small that they only contain a single metaphorical grain of sand. We can’t split the minimal quantum of a pile into two any more.

Perhaps the most remarkable thing is Moore’s foresight into how this would have an incredible impact upon the world. Here is the first sentence of his second paragraph:

Integrated circuits will lead to such wonders as home computers -- or at least terminals connected to a central computer -- automatic controls for automobiles, and personal portable communications equipment.

This was radical stuff in 1965. So-called minicomputers were still the size of a desk, and to be useful usually had a few peripherals such as tape units, card readers, or printers, which meant they would be hard to fit into a home kitchen of the day, even with the refrigerator, oven, and sink removed. Most people had never seen a computer and even fewer had interacted with one, and those who had, had mostly done it by dropping off a deck of punched cards, and a day later picking up a printout from what the computer had done when humans had fed the cards to the machine.

The electrical systems of cars were unbelievably simple by today's standards, with perhaps half a dozen on/off switches, and simple electromechanical devices to drive the turn indicators, windshield wipers, and the distributor which timed the firing of the spark plugs. Every single function-producing piece of mechanism in auto electronics was big enough to be seen with the naked eye. And personal communications devices were rotary dial phones, one per household, firmly plugged into the wall at all times. Or handwritten letters that needed to be dropped into the mail box.

That sentence quoted above, given when it was made, is to me the bravest and most insightful prediction of the future of technology that we have ever seen.

By the way, the first computer made from integrated circuits was the guidance computer for the Apollo missions, one in the Command Module and one in the Lunar Lander. The integrated circuits were made by Fairchild, Gordon Moore's company. The first version had 4,100 integrated circuits, each implementing a single 3-input NOR gate. The more capable manned flight versions, which first flew in 1968, had only 2,800 integrated circuits, each implementing two 3-input NOR gates. Moore's Law had its impact on getting to the Moon, even in the Law's infancy.

In the original magazine article this cartoon appears:

At a fortieth anniversary celebration of Moore's Law at the Chemical Heritage Foundation in Philadelphia I asked Dr. Moore whether this cartoon had been his idea. He replied that he had nothing to do with it; it was just there in the magazine, in the middle of his article, to his surprise.

Without any evidence at all on this, my guess is that the cartoonist was reacting somewhat skeptically to the sentence quoted above. The cartoon is set in a department store, as back then US department stores often had a Notions department, although this was not something of which I have any personal experience as they are long gone (and I first set foot in the US in 1977). It seems that notions is another word for haberdashery, i.e., pins, cotton, ribbons, and generally things used for sewing. As is still true today, there is also a "Cosmetics" department. And plop in the middle of them is the "Handy Home Computers" department, with the salesman holding a computer in his hand.

I am guessing that the cartoonist was making fun of this idea, trying to point out the ridiculousness of it. It all came to pass in only 25 years, including being sold in department stores. Not too far from the cosmetics department. But the notions departments had all disappeared. The cartoonist was right in the short term, but blew it in the slightly longer term.

There were many variations on Moore's Law, not just his original about the number of components on a single chip.

Amongst the many there was a version of the law about how fast circuits could operate, as the smaller the transistors were the faster they could switch on and off. There were versions of the law for how much RAM memory, main memory for running computer programs, there would be and when. And there were versions of the law for how big and fast disk drives, for file storage, would be.

This tangle of versions of Moore's Law had a big impact on how technology developed. I will discuss three modes of that impact: competition, coordination, and herd mentality in computer design.

Competition

Memory chips are where data and programs are stored as they are run on a computer. Moore's Law applied to the number of bits of memory that a single chip could store, and a natural rhythm developed of that number of bits going up by a multiple of four on a regular but slightly slowing basis. By jumping over a mere doubling, the cost of the silicon foundries could be depreciated over a long enough time to keep things profitable (today a silicon foundry is about a $7B capital cost!), and furthermore it made sense to double the number of memory cells in each dimension to keep the designs balanced, again pointing to a step factor of four.

In the very early days of desktop PCs, memory chips held 16,384 bits. The memory chips were called RAM (Random Access Memory, i.e., any location in memory took equally long to access; there were no slower or faster places), and a chip of this size was called a 16K chip, where K means not exactly 1,000, but instead 1,024 (which is 2^10). Many companies produced 16K RAM chips. But they all knew from Moore's Law when the market would be expecting 64K RAM chips to appear. So they knew what they had to do to not get left behind, and they knew when they had to have samples ready for engineers designing new machines so that just as the machines came out their chips would be ready to be used, having been designed in. And they could judge when it was worth getting just a little ahead of the competition, and at what price. Everyone knew the game (and in fact all came to a consensus agreement on when the Moore's Law clock should slow down just a little), and they all competed on operational efficiency.
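A rough sketch in C of why the steps came in factors of four; the square-array layout is a simplification of how real memory chips are organized, used here only to show the doubling-in-each-dimension rhythm:

    #include <stdio.h>

    int main(void) {
        unsigned rows = 128, cols = 128;            /* 128 x 128 = 16,384 bits: a "16K" chip */
        for (int gen = 0; gen < 5; ++gen) {
            printf("%4u x %4u array = %8u bits (%uK)\n",
                   rows, cols, rows * cols, rows * cols / 1024);
            rows *= 2;                               /* doubling each dimension gives the     */
            cols *= 2;                               /* familiar 16K -> 64K -> 256K... steps  */
        }
        return 0;
    }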

Coordination

Technology Review talks about this in their story on the end of Moore's Law. If you were the designer of a new computer box for a desktop machine, or any other digital machine for that matter, you could look at when you planned to hit the market and know what amount of RAM memory would take up what board space because you knew how many bits per chip would be available at that time. And you knew how much disk space would be available at what price and what physical volume (disks got smaller and smaller diameters just as they increased the total amount of storage). And you knew how fast the latest processor chip would run. And you knew what resolution display screen would be available at what price. So a couple of years ahead you could put all these numbers together and come up with what options and configurations would make sense by the exact time when you were going to bring your new computer to market.

The company that sold the computers might make one or two of the critical chips for their products, but mostly they bought other components from other suppliers. The clockwork certainty of Moore's Law let them design a new product without having horrible surprises disrupt their flow and plans. This really let the digital revolution proceed. Everything was orderly and predictable so there were fewer blind alleys to follow. We had probably the single most sustained, continuous, and predictable improvement in any technology over the history of mankind.

Herd mentality in computer design

But with this good came some things that might be viewed negatively (though I'm sure there are some who would argue that they were all unalloyed good). I'll take up one of these as the third thing to talk about that Moore's Law had a major impact upon.

A particular form of general purpose computer design had arisen by the time that central processors could be put on a single chip (see the Intel 4004 below), and soon those processors on a chip, microprocessors as they came to be known, supported that general architecture. That architecture is known as the von Neumann architecture.

A distinguishing feature of this architecture is that there is a large RAM memory which holds both instructions and data, made from the RAM chips we talked about above under coordination. The memory is organized into consecutive indexable (or addressable) locations, each containing the same number of binary bits, or digits. The microprocessor itself has a few specialized memory cells, known as registers, and an arithmetic unit that can do additions, multiplications, divisions (more recently), etc.

One of those specialized registers is called the program counter (PC), and it holds an address in RAM for the current instruction. The CPU looks at the pattern of bits in that current instruction location and decodes them into what actions it should perform. That might be an action to fetch another location in RAM and put it into one of the specialized registers (this is called a LOAD), or to send the contents in the other direction (STORE), or to take the contents of two of the specialized registers, feed them to the arithmetic unit, and take their sum from the output of that unit and store it in another of the specialized registers. Then the central processing unit increments its PC and looks at the next consecutive addressable instruction. Some specialized instructions can alter the PC and make the machine go to some other part of the program, and this is known as branching. For instance, if one of the specialized registers is being used to count down how many elements of an array of consecutive values stored in RAM have been added together, right after the addition instruction there might be an instruction to decrement that counting register, and then branch back earlier in the program to do another LOAD and add if the counting register is still more than zero.

That's pretty much all there is to most digital computers. The rest is just hacks to make them go faster, while still looking essentially like this model. But note that the RAM is used in two ways by a von Neumann computer: to contain data for a program and to contain the program itself. We'll come back to this point later.
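Here is a deliberately tiny C sketch of such a machine, to make the fetch-decode-execute cycle concrete. The instruction set, encoding, and program are invented for illustration and do not correspond to any real processor:

    #include <stdio.h>

    /* One flat memory array holds both the program and its data, a program
       counter (pc) walks through it, and a handful of made-up instructions
       operate on two registers. */

    enum { HALT, LOAD, ADD, STORE, DEC, BNZ };   /* opcodes (invented encoding) */

    int main(void) {
        /* Each instruction is 3 words: opcode, register number, memory address.
           The program multiplies mem[30] by mem[31] by repeated addition,
           leaving the result in mem[32]. */
        int mem[64] = {
            LOAD,  0, 32,   /*  0: r0 = mem[32]   running total (starts at 0)  */
            LOAD,  1, 31,   /*  3: r1 = mem[31]   loop counter                 */
            ADD,   0, 30,   /*  6: r0 += mem[30]  add the value once           */
            DEC,   1,  0,   /*  9: r1 -= 1                                     */
            BNZ,   1,  6,   /* 12: if r1 != 0 goto address 6 (branching)       */
            STORE, 0, 32,   /* 15: mem[32] = r0                                */
            HALT,  0,  0,   /* 18: stop                                        */
        };
        mem[30] = 7;        /* value to add     */
        mem[31] = 5;        /* how many times   */
        mem[32] = 0;        /* result goes here */

        int r[2] = {0, 0};
        int pc = 0;
        for (;;) {                                  /* fetch-decode-execute loop */
            int op = mem[pc], reg = mem[pc + 1], addr = mem[pc + 2];
            pc += 3;
            if      (op == LOAD)  r[reg] = mem[addr];
            else if (op == ADD)   r[reg] += mem[addr];
            else if (op == STORE) mem[addr] = r[reg];
            else if (op == DEC)   r[reg] -= 1;
            else if (op == BNZ)   { if (r[reg] != 0) pc = addr; }
            else                  break;            /* HALT */
        }
        printf("7 * 5 computed by repeated addition = %d\n", mem[32]);
        return 0;
    }

Note that the program and its data sit side by side in the same mem[] array, which is exactly the dual use of RAM mentioned above.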

With all the versions of Moore's Law firmly operating in support of this basic model it became very hard to break out of it. The human brain certainly doesn't work that way, so it seems that there could be powerful other ways to organize computation. But trying to change the basic organization was a dangerous thing to do, as the inexorable march of Moore's Law for the existing architecture was going to continue anyway. Trying something new would most probably set things back a few years. So brave big-scale experiments like the Lisp Machine or Connection Machine, which both grew out of the MIT Artificial Intelligence Lab (and turned into at least three different companies), and Japan's fifth generation computer project (which played with two unconventional ideas, data flow and logical inference) all failed, as before long the Moore's Law doubling of conventional computers overtook the advanced capabilities of the new machines, and software could better emulate the new ideas.

Most computer architects were locked into the conventional organizations of computers that had been around for decades. They competed on changing the coding of the instructions to make execution of programs slightly more efficient per square millimeter of silicon. They competed on strategies to cache copies of larger and larger amounts of RAM memory right on the main processor chip. They competed on how to put multiple processors on a single chip and how to share the cached information from RAM across multiple processor units running at once on a single piece of silicon. And they competed on how to make the hardware more predictive of what future decisions would be in a running program so that they could precompute the right next computations before it was clear whether they would be needed or not. But they were all locked in to fundamentally the same way of doing computation. Thirty years ago there were dozens of different detailed processor designs, but now they fall into only a small handful of families: the X86, the ARM, and the PowerPC. The X86s are mostly in desktops, laptops, and cloud servers. The ARM is what we find in phones and tablets. And you probably have a PowerPC adjusting all the parameters of your car's engine.

The one glaring exception to the lock-in caused by Moore's Law is that of Graphics Processing Units, or GPUs. These are different from von Neumann machines. Driven by the desire for better performance for video and graphics, and in particular gaming, the main processor getting better and better under Moore's Law was just not enough to make real-time rendering perform well as the underlying simulations got better and better. In this case a new sort of processor was developed. It was not particularly useful for general purpose computations but it was optimized very well to do additions and multiplications on streams of data, which is what is needed to render something graphically on a screen. Here was a case where a new sort of chip got added into the Moore's Law pool much later than conventional microprocessors, RAM, and disk. The new GPUs did not replace existing processors, but instead got added as partners where graphics rendering was needed. I mention GPUs here because it turns out that they are useful for another type of computation that has become very popular over the last three years, and that is being used as an argument that Moore's Law is not over. I still think it is, and will return to GPUs in the next section.

As I pointed out earlier, we cannot halve a pile of sand once we are down to piles that are only a single grain of sand. That is where we are now: we have gotten down to just about one-grain piles of sand. Gordon Moore's Law in its classical sense is over. See The Economist from March of last year for a typically thorough, accessible, and thoughtful report.

I earlier talked about the feature size of an integrated circuit and how with every doubling that size is divided by √2. By 1971 Gordon Moore was at Intel, and they released their first microprocessor on a single chip, the 4004, with 2,300 transistors on 12 square millimeters of silicon and a feature size of 10 micrometers, written 10µm. That means that the smallest distinguishable aspect of any component on the chip was 1/100th of a millimeter.

Since then the feature size has regularly been reduced by a factor of √2, or reduced to about 71% of its previous size, doubling the number of components in a given area, on a clockwork schedule. The schedule clock has however slowed down. Back in the era of Moore's original publication the clock period was a year. Now it is a little over 2 years. In the first quarter of 2017 we are expecting to see the first commercial chips in mass market products with a feature size of 10 nanometers, written 10nm. That is 1,000 times smaller than the feature size of 1971, or 20 applications of the √2 rule over 46 years. Sometimes the jump has been a little better than √2, and so we have actually seen 17 jumps from 10µm down to 10nm. You can see them listed in Wikipedia. In 2012 the feature size was 22nm, in 2014 it was 14nm, now in the first quarter of 2017 we are about to see 10nm shipped to end users, and it is expected that we will see 7nm in 2019 or so. There are still active areas of research working on problems that are yet to be solved to make 7nm a reality, but industry is confident that it will happen. There are predictions of 5nm by 2021, but a year ago there was still much uncertainty over whether the engineering problems necessary to do this could be solved and whether they would be economically viable in any case.
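The "20 applications of the rule" arithmetic can be checked in a few lines of C, using only the numbers quoted above:

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        double start_nm = 10000.0;             /* 10 micrometers in 1971          */
        double end_nm   = 10.0;                /* 10 nanometers in early 2017     */
        double shrink   = start_nm / end_nm;   /* 1,000 times smaller, linearly   */
        /* each application of the rule divides the feature size by sqrt(2)      */
        double steps    = log(shrink) / log(sqrt(2.0));
        printf("%.0f times smaller = about %.1f applications of the rule\n",
               shrink, steps);                 /* prints roughly 19.9             */
        return 0;
    }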

Once you get down to 5nm features they are only about 20 silicon atoms wide. If you go much below this the material starts to be dominated by quantum effects and classical physical properties really start to break down. That is what I mean by only one grain of sand left in the pile.

Today's microprocessors have a few hundred square millimeters of silicon, and 5 to 10 billion transistors. They have a lot of extra circuitry these days to cache RAM, predict branches, etc., all to improve performance. But getting bigger comes with many costs as they get faster too. There is heat to be dissipated from all the energy used in switching so many signals in such a small amount of time, and the time for a signal to travel from one side of the chip to the other, ultimately limited by the speed of light (in reality, in copper, it is noticeably less), starts to be significant. The speed of light is approximately 300,000 kilometers per second, or 300,000,000,000 millimeters per second. So light, or a signal, can travel 30 millimeters (just over an inch, about the size of a very large chip today) in no less than one over 10,000,000,000 seconds, i.e., no less than one ten billionth of a second.

Today's fastest processors have a clock speed of 8.760 GigaHertz, which means that by the time the signal is getting to the other side of the chip, the place it came from has moved on to the next thing to do. This makes synchronization across a single microprocessor something of a nightmare, and at best a designer can know ahead of time how late different signals from different parts of the processor will be, and try to design accordingly. So rather than push clock speed further (which is also hard), and rather than make a single microprocessor bigger with more transistors to do more stuff at every clock cycle, for the last few years we have seen large chips go to multicore, with two, four, or eight independent microprocessors on a single piece of silicon.
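A back-of-the-envelope check in C, using the 30 millimeter distance and the clock rate quoted above, shows why that is:

    #include <stdio.h>

    int main(void) {
        double c_mm_per_s = 3.0e11;     /* speed of light, in millimeters per second  */
        double chip_mm    = 30.0;       /* distance across a very large chip          */
        double clock_hz   = 8.76e9;     /* the clock rate quoted above                */

        double travel_s = chip_mm / c_mm_per_s;        /* best case: 1.0e-10 seconds  */
        printf("edge-to-edge signal time: %.1e seconds\n", travel_s);
        printf("that is %.2f clock periods, before the copper penalty\n",
               travel_s * clock_hz);                   /* about 0.88 of a period      */
        return 0;
    }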

Multicore has preserved the "number of operations done per second" version of Moore's Law, but at the cost of a simple program not being sped up by that amount; one cannot simply smear a single program across multiple processing units. For a laptop or a smart phone that is trying to do many things at once that doesn't really matter, as there are usually enough different tasks that need to be done at once that farming them out to different cores on the same chip leads to pretty full utilization. But that will not hold, except for specialized computations, when the number of cores doubles a few more times. The speed-up starts to disappear as silicon is left idle because there just aren't enough different things to do.

Despite the arguments that I presented a few paragraphs ago about why Moore's Law is coming to a silicon end, many people argue that it is not, because we are finding ways around those constraints of small numbers of atoms by going to multicore and GPUs. But I think that is changing the definitions too much.

Here is a recent chart that Steve Jurvetson, cofounder of the VC firm DFJ (Draper Fisher Jurvetson), posted on his Facebook page. He said it is an update of an earlier chart compiled by Ray Kurzweil.

In this case the left axis is a logarithmically scaled count of the number of calculations per second per constant dollar. So this expresses how much cheaper computation has gotten over time. In the 1940s there are specialized computers, such as the electromechanical computers built to break codes at Bletchley Park. By the 1950s they become general purpose, von Neumann-style computers and stay that way until the last few points.

The last two points are both GPUs, the GTX 450 and the NVIDIA Titan X. Steve doesn't label the few points before that, but in every earlier version of a diagram that I can find on the Web (and there are plenty of them), the points beyond 2010 are all multicore. First dual cores, and then quad cores, such as Intel's quad-core i7 (and I am typing these words on a 2.9 GHz version of that chip, powering my laptop).

The GPUs are there, and people are excited about them, because besides graphics they happen to be very good at another very fashionable computation. Deep learning, a form of something known originally as back propagation neural networks, has had a big technological impact recently. It is what has made speech recognition so fantastically better in the last three years that Apple's Siri, Amazon's Echo, and Google Home are useful and practical programs and devices. It has also made image labeling so much better than what we had five years ago, and there is much experimentation with using networks trained on lots of road scenes as part of situational awareness for self-driving cars. For deep learning there is a training phase, usually done in the cloud, on millions of examples. That produces a few million numbers which represent the network that is learned. Then when it is time to recognize a word or label an image, that input is fed into a program simulating the network by doing millions of multiplications and additions. Coincidentally GPUs just happen to be perfect for the way these networks are structured, and so we can expect more and more of them to be built into our automobiles. Lucky break for GPU manufacturers! While GPUs can do lots of computations they don't work well on just any problem. But they are great for deep learning networks and those are quickly becoming the flavor of the decade.
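As a hedged sketch of why GPUs fit so well, here is the core of running (not training) one layer of such a network in C. The sizes and numbers are invented and tiny; a real network has millions of weights, but it is the same regular pattern of multiply-and-add that graphics hardware was already built to stream through:

    #include <stdio.h>

    #define IN  4
    #define OUT 3

    int main(void) {
        float input[IN]        = {0.5f, -1.0f, 0.25f, 2.0f};
        float weights[OUT][IN] = {{0.1f, 0.2f, 0.3f, 0.4f},
                                  {0.5f, 0.6f, 0.7f, 0.8f},
                                  {0.9f, 1.0f, 1.1f, 1.2f}};
        float bias[OUT]        = {0.01f, 0.02f, 0.03f};
        float output[OUT];

        for (int o = 0; o < OUT; ++o) {            /* one output value at a time      */
            float sum = bias[o];
            for (int i = 0; i < IN; ++i)
                sum += weights[o][i] * input[i];   /* the multiply-add that dominates */
            output[o] = sum > 0.0f ? sum : 0.0f;   /* simple ReLU nonlinearity        */
        }
        for (int o = 0; o < OUT; ++o)
            printf("output[%d] = %f\n", o, output[o]);
        return 0;
    }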

So while it is right to claim that we continue to see exponential growth, as in the chart above, exactly what is being measured has changed. That is a bit of a sleight of hand.

And I think that change will have big implications.

I think the end of Moore's Law, as I have defined the end, will bring about a golden new era of computer architecture. No longer will architects need to cower at the relentless improvements that they know others will get due to Moore's Law. They will be able to take the time to try new ideas out in silicon, now safe in the knowledge that a conventional computer architecture will not be able to do the same thing in just two or four years in software. And the new things they do may not be about speed. They might be about making computation better in other ways.

Machine learning runtime

We are seeing this with GPUs as runtime engines for deep learning networks. But we are also seeing some more specific architectures. For instance, for about a year Google has had their own chips, called Tensor Processing Units (or TPUs), that save power for deep learning networks by effectively reducing the number of significant digits that are kept around, as neural networks work quite well at low precision. Google has placed many of these chips in the computers in their server farms, or cloud, and is able to use learned networks in various search queries, at higher speed and lower electrical power consumption.
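One way "reducing the number of significant digits" can work is sketched below in C. This is generic 8-bit quantization with assumed parameters, not a description of Google's actual hardware: each weight is stored as a small integer plus a shared scale factor, trading a little precision for much less memory traffic and power.

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        float weights[4] = {0.731f, -0.204f, 0.055f, -0.962f};
        float scale = 1.0f / 127.0f;                 /* map the range [-1, 1] onto int8 */

        for (int i = 0; i < 4; ++i) {
            signed char q = (signed char)lrintf(weights[i] / scale); /* quantize        */
            float back    = q * scale;                               /* dequantize      */
            printf("%+.3f -> %4d -> %+.3f (error %.4f)\n",
                   weights[i], q, back, fabsf(back - weights[i]));
        }
        return 0;
    }

The rounding error per weight is tiny, and networks trained with this in mind tolerate it well, which is what makes the power savings essentially free.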

Special purpose silicon

Typical mobile phone chips now have four ARM processor cores on a single piece of silicon, plus some highly optimized special purpose processors on that same piece of silicon. These processors manage data flowing from the cameras and optimize speech quality, and on some chips there is even a special highly optimized processor for detecting human faces. That is used in the camera application: you've probably noticed little rectangular boxes around people's faces as you are about to take a photograph, used to decide what regions in an image should be most in focus and have the best exposure timing, namely the faces!

New general purpose approaches

We are already seeing the rise of special purpose architectures for very specific computations. But perhaps we will also see more general purpose architectures, with a different style of computation, making a comeback.

Conceivably the dataflow and logic models of the Japanese fifth generation computer project might now be worth exploring again. But as we digitalize the world the cost of bad computer security will threaten our very existence. So perhaps if things work out, the unleashed computer architects can slowly start to dig us out of our current deplorable situation.

Secure computing

We all hear about cyber hackers breaking into computers, often half a world away, or sometimes now in a computer controlling the engine, and soon everything else, of a car as it drives by. How can this happen?

Cyber hackers are creative, but many of the ways they get into systems are fundamentally through common programming errors in programs built on top of the von Neumann architectures we talked about before.

A common case is exploiting something known as buffer overrun. A fixed size piece of memory is reserved to hold, say, the web address that one can type into a browser, or the Google query box. If all programmers wrote very careful code, then when someone typed in way too many characters, those past the limit would not get stored in RAM at all. But all too often a programmer has used a coding trick that is simple, and quick to produce, that does not check for overrun, and the typed characters get put into memory way past the end of the buffer, perhaps overwriting some code that the program might jump to later. This relies on the feature of von Neumann architectures that data and programs are stored in the same memory. So, if the hacker chooses some characters whose binary codes correspond to instructions that do something malicious to the computer, say setting up an account for them with a particular password, then later, as if by magic, the hacker will have a remotely accessible account on the computer, just as many legitimate human and program services do. Programmers shouldn't oughta make this mistake, but history shows that it happens again and again.
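Here is a deliberately unsafe C sketch of that mistake. The struct layout and field names are invented for illustration; the point is only that a copy with no length check writes past the end of a fixed-size buffer and over whatever happens to sit next to it in memory:

    #include <stdio.h>
    #include <string.h>

    struct request {
        char url[16];          /* fixed-size buffer for the typed address            */
        int  is_admin;         /* typically sits right after it in the struct memory */
    };

    int main(void) {
        struct request r = { "", 0 };

        const char *typed = "http://example.com/much-longer-than-sixteen-chars";
        strcpy(r.url, typed);  /* UNSAFE: no bounds check, overruns r.url            */

        /* This is undefined behavior, which is exactly the problem: depending on
           layout and compiler the program may print a corrupted value, or crash.   */
        printf("is_admin is now %d (should still be 0)\n", r.is_admin);

        /* The careful version: copy at most sizeof(r.url) - 1 bytes and terminate. */
        /* strncpy(r.url, typed, sizeof(r.url) - 1); r.url[sizeof(r.url) - 1] = 0;  */
        return 0;
    }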

Another common way in is that in modern web services sometimes the browser on a laptop, tablet, or smart phone, and the computers in the cloud, need to pass really complex things between them. Rather than the programmer having to know in advance all those complex possible things and handle messages for them, it is set up so that one or both sides can pass little bits of source code of programs back and forth and execute them on the other computer. In this way capabilities that were never originally conceived of can start working later on in an existing system without having to update the applications. It is impossible to be sure that a piece of code won't do certain things, so if the programmer decided to give a fully general capability through this mechanism there is no way for the receiving machine to know ahead of time that the code is safe and won't do something malicious (this is a generalization of the halting problem; I could go on and on, but I won't here). So sometimes a cyber hacker can exploit this weakness and send a little bit of malicious code directly to some service that accepts code.

Beyond that, cyber hackers are always coming up with new inventive ways in; these have just been two examples to illustrate a couple of ways in which it is currently done.

It is possible to write code that protects against many of these problems, but code writing is still a very human activity, and there are just too many human-created holes that can leak, from too many code writers. One way to combat this is to have extra silicon that hides some of the low level possibilities of a von Neumann architecture from programmers, by only giving the instructions in memory a more limited set of possible actions.

This is not a new idea. Most microprocessors have some version of protection rings which let more and more untrusted code only have access to more and more limited areas of memory, even if they try to access it with normal instructions. This idea has been around a long time but it has suffered from not having a standard way to use or implement it, so most software, in an attempt to be able to run on most machines, usually only specifies two or at most three rings of protection. That is a very coarse tool and lets too much through. Perhaps now the idea will be thought about more seriously in an attempt to get better security when just making things faster is no longer practical.

Another idea, one that has mostly been implemented only in software, with perhaps one or two exceptions, is called capability-based security, achieved through capability-based addressing. Programs are not given direct access to regions of memory they need to use, but instead are given unforgeable, cryptographically sound reference handles, along with a defined subset of things they are allowed to do with the memory. Hardware architects might now have the time to push through on making this approach completely enforceable, getting it right once in hardware so that mere human programmers, pushed to get new software out on a promised release date, cannot screw things up.

From one point of view the Lisp Machines that I talked about earlier were built on a very specific and limited version of a capability-based architecture. Underneath it all, those machines were von Neumann machines, but the instructions they could execute were deliberately limited. Through the use of something called typed pointers, at the hardware level, every reference to every piece of memory came with restrictions on what instructions could do with that memory, based on the type encoded in the pointer. And memory could only be referenced by a pointer to the start of a chunk of memory of a fixed size, set at the time the memory was reserved. So in the buffer overrun case, a buffer for a string of characters would not allow data to be written to or read from beyond the end of it. And instructions could only be referenced from another type of pointer, a code pointer. The hardware kept the general purpose memory partitioned at a very fine grain by the type of pointers granted to it when reserved. And to a first approximation the type of a pointer could never be changed, nor could the actual address in RAM be seen by any instructions that had access to a pointer.
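The flavor of that scheme can be sketched in ordinary C, with the important caveat that on a Lisp Machine or a capability architecture the equivalent checks were enforced by the hardware on every instruction, not by library code that a programmer could bypass:

    #include <stdio.h>

    typedef enum { STRING_DATA, CODE } tag_t;

    typedef struct {
        tag_t  tag;      /* what kind of memory this handle refers to */
        char  *base;     /* start of the reserved chunk               */
        size_t length;   /* fixed size chosen when it was reserved    */
    } handle_t;

    /* Checked write: refuses out-of-bounds offsets and writes into code. */
    int checked_write(handle_t h, size_t offset, char value) {
        if (h.tag != STRING_DATA) return -1;   /* wrong pointer type      */
        if (offset >= h.length)   return -1;   /* would overrun the chunk */
        h.base[offset] = value;
        return 0;
    }

    int main(void) {
        char buffer[16] = {0};
        handle_t h = { STRING_DATA, buffer, sizeof buffer };

        printf("write inside bounds: %d\n", checked_write(h, 5, 'x'));   /* succeeds:  0 */
        printf("write past the end:  %d\n", checked_write(h, 500, 'x')); /* refused:  -1 */
        return 0;
    }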

There have been ideas out there for a long time on how to improve security through this use of hardware restrictions on the general purpose von Neumann architecture. I have talked about a few of them here. Now I think we can expect this to become a much more compelling place for hardware architects to spend their time, as the security of our computational systems becomes a major Achilles heel for the smooth running of our businesses, our lives, and our society.

Quantum computers

Quantum computers are, at this time, a largely experimental and very expensive technology. Between the need to cool them to physics-experiment levels of ultra cold, and the expense that entails, and the confusion over how much speed-up they might give over conventional silicon-based computers and for what class of problem, they are a large-investment, high-risk research topic. I won't go into all the arguments (I haven't read them all, and frankly I do not have the expertise that would make me confident in any opinion I might form) but Scott Aaronson's blog on computational complexity and quantum computation is probably the best source for those interested. Claims on speedups either achieved or hoped to be achieved on practical problems range from a factor of 1 to thousands (and I might have that upper bound wrong). In the old days just waiting 10 or 20 years would let Moore's Law get you there. Instead we have seen well over a decade of sustained investment in a technology that people are still arguing over whether it can ever work. To me this is yet more evidence that the end of Moore's Law is encouraging new investment and new explorations.

Unimaginable stuff

Even with these various innovations around, triggered by the end of Moore's Law, the best things we might see may not yet be in the common consciousness. I think the freedom to innovate, without the overhang of Moore's Law, the freedom to take time to investigate curious corners, may well lead to a new garden of Eden in computational models. Five to ten years from now we may see a completely new form of computer arrangement, in traditional silicon (not quantum), that is doing things and doing them faster than we can today imagine. And with a further thirty years of development those chips might be doing things that would today be indistinguishable from magic, just as today's smart phone would have seemed like utter magic to the me of 50 years ago.

Many times the popular press, or people who should know better, refer to something that is increasing a lot as exponential. Something is only truly exponential if there is a constant ratio in size between any two points in time separated by the same amount. Here the ratio is 2, a doubling, for any two points a year apart. The misuse of the term exponential growth is widespread and makes me cranky.

Why the Chemical Heritage Foundation for this celebration? Both of Gordon Moore's degrees (BS and PhD) were in physical chemistry!

For those who read my first blog, once again see Roy Amara's Law.

I had been a post-doc at the MIT AI Lab and loved using Lisp Machines there, but when I left and joined the faculty at Stanford in 1983 I realized that the more conventional SUN workstations, being developed there and at spin-off company Sun Microsystems, would win out in performance very quickly. So I built a software-based Lisp system (which I called TAIL (Toy AI Language) in a nod to the naming conventions of most software at the Stanford Artificial Intelligence Lab, e.g., BAIL, FAIL, SAIL, MAIL) that ran on the early Sun workstations, which themselves used completely generic microprocessors. By mid 1984 Richard Gabriel, I, and others had started a company called Lucid in Palo Alto to compete on conventional machines with the Lisp Machine companies. We used my Lisp compiler as a stopgap, but as is often the case with software, that was still the compiler used by Lucid eight years later when it ran on 19 different makes of machines. I had moved back to MIT to join the faculty in late 1984, and eventually became the director of the Artificial Intelligence Lab there (and then CSAIL). But for eight years, while teaching computer science and developing robots by day, I also at night developed and maintained my original compiler as the workhorse of Lucid Lisp. Just as the Lisp Machine companies got swept away, so too eventually did Lucid. Whereas the Lisp Machine companies got swept away by Moore's Law, Lucid got swept away as the fashion in computer languages shifted to a winner-take-all world, for many years, of C.

Full disclosure. DFJ is one of the VCs who have invested in my company Rethink Robotics.

The rest is here:

The End of Moore's Law – Rodney Brooks

Nihilism – Simple English Wikipedia, the free encyclopedia

Nihilism comes from the Latin nihil, or nothing. It is the belief that values are falsely invented. The term 'nihilism' can also be used to describe the idea that life, or the world, has no distinct meaning or purpose. Nihilists believe that there are no true morals. Many people think of the German philosopher Friedrich Nietzsche when they think about nihilism, because he said that morals were invented. But in his books, Nietzsche said that people needed to create their own morals to get over nihilism.

Mikhail Bakunin's (1814–1876) Reaction in Germany (1842) included this passage: "Let us therefore trust the eternal Spirit which destroys and annihilates only because it is the unfathomable and eternal source of all life. The passion for destruction is a creative passion, too!"[1] The term was made popular by Ivan Turgenev's novel Fathers and Sons (1862). Bazarov, the hero in it, was a nihilist.

Nihilism was the basis of much revolutionary terrorism. It was taken up by Sergei Nechaev, a Russian who wrote a pamphlet that influenced Lenin. Dostoyevsky was a member of a nihilist group in his 20s. He served ten years in exile as a consequence. His novel Devils (or The Possessed) deals with Nechaev. His famous novel Crime and Punishment is also on that theme.

The assassination of Tsar Alexander II (13 March 1881) by a series of bombs had long been planned by nihilists. It resulted in the crushing of the nihilist movement.[2]

Read more:

Nihilism - Simple English Wikipedia, the free encyclopedia

Bitcoin Cash is 41.7% Up, Will the Upside Momentum Sustain?

The original Bitcoin Cash (BCH ABC) is having a gala day at the crypto market as it rises 40% and beyond.

The BCH/USD price on Thursday has extended its prevailing bullish bias to register new highs at 182 on Coinbase. On a weekly basis, the pair has noted a 182% jump from its so-called bottom near 74, coupled with relatively higher trading volume across all the leading crypto exchanges.

The BCH/USD downside, meanwhile, is capped by its 100-period moving average curve, which coincides with 104.27 at this moment. We could see the price testing that level as part of a minor correction before it resumes the uptrend. The best rally target we could see from here is at 272, a level that was instrumental in the downtrend of Nov 21.

On the mining front, the Bitcoin Cash hash rate has also dropped significantly in the past month. It used to average around 5 Exahash per second (EH/s) but has now fallen to between 1 EH/s and 850 Petahash per second (PH/s) through this December. That has made mining BCH more profitable than mining Bitcoin, one reason why the BCH market is leading the overall crypto surge as of now.

For a cryptocurrency that had generated a lot of negative buzz due to its chain upgrade, ultimately crashing harder than any other top crypto asset, Bitcoin Cash undergoing a strong rebound is a welcome change for the market. However, despite the strong bullish sentiment in the near term, the coin has a lot of hurdles to clear before it can establish a sustained upside bias.

To begin with, the entire market is still locked inside a long-term downtrend, which also includes Bitcoin Cash. Therefore, a meager jump after a heavy crash could also be a nominal correction, a flag or pennant formation, or a signal for short traders to exit their positions. It does not guarantee a full-scale recovery, at least not unless we see some key levels broken in the medium term.

The RSI momentum indicator on the daily chart, for instance, has clearly recovered from its oversold conditions to enter overbought territory, but that can take Bitcoin Cash only as far as, say, the 50-period moving average level. Bulls would need to do more than that to reinstate investors' faith in Bitcoin Cash, as well as in the rest of the market.

The latest uptrend is a small step towards an unlikely 100% recovery to an all-time high in the short term, and much of the responsibility lies with the leading digital currency, Bitcoin. For good or worse, Bitcoin is expected to pave the way for the rest of the crypto market to be accepted in the mainstream for its characteristics. While the Bitcoin Cash camp believes the coin is technologically better than Bitcoin, it would also need to see its mainstream adoption grow in order to attract parallel investments from both retail and institutional investors.

View post:

Bitcoin Cash is 41.7% Up, Will the Upside Momentum Sustain?

Bitcoin Cash vs. Bitcoin | The Pros and Cons – CoinCentral

The history between Bitcoin and Bitcoin Cash is a contentious one, but we're here to look at the advantages and disadvantages of each coin moving forward. We'll examine the value proposition of each and their vastly different approaches to scaling. We'll also dissect branding and levels of decentralization. Finally, we'll line up the competition and see where the coins are headed in the near future. Will one coin win out? Or can BTC and BCH exist in harmony in the competitive world of cryptocurrency?

Satoshi's message embedded into the first Bitcoin block provides a clear motivation for the creation of a decentralized currency: "The Times 03/Jan/2009 Chancellor on brink of second bailout for banks." After the 2008 banking collapse and subsequent bailout with taxpayer money, clearly, Satoshi was fed up with government and banking control of currency.

Satoshi embeds The Times headline into the genesis Bitcoin block.

It's deeply embedded into the cryptocurrency ethos that the mistakes of the greedy are not bailed out. There's a reason Ethereum Classic exists.

Bitcoin's main value proposition in its current form is its decentralization, the ability to take monetary supply out of the hands of governments and banks. With small, 1-megabyte block sizes and massive amounts of hashing power dedicated to securing the Bitcoin network, BTC's level of decentralization and attack resistance is number one amongst all cryptocurrencies. It's no coincidence that Bitcoin also consistently maintains the largest market cap. Obtaining the hardware required for a Bitcoin attack, assuming there was enough supply, would run you at least 7 billion dollars.

However, Satoshi did refer to Bitcoin as electronic cash. During the 2017 spike in Bitcoin popularity, it was clear that Bitcoin in its current form cannot function as cash. Transactions were slow and expensive, often costing over 20 dollars just to transfer money. Simply put, with the current codebase, the main Bitcoin blockchain does not scale. However, Bitcoin believers will gladly wait until there is a feasible second-layer solution before ever sacrificing any amount of decentralization on the main blockchain.

Bitcoin Cash hard forked from Bitcoin to increase the block size from 1MB to 8MB, allowing for more transactions in each block. They believe their approach is more closely aligned with the true vision of Satoshi. It is worth noting that after Satoshi secretly implemented the 1MB cap on block sizes, he said: "We can phase in a change later if we get closer to needing it." Satoshi predicted that with an increase in internet speed and a decrease in storage cost, the block size could eventually be increased without sacrificing decentralization.

With the combination of larger block sizes and lower demand on the Bitcoin Cash network, people can certainly send transactions more quickly and with significantly lower fees. However, if demand for Bitcoin Cash were to increase, it would eventually run into the exact same problems that Bitcoin had in 2017. To stay ahead of this problem, Bitcoin Cash has already increased block sizes from 8MB to 32MB. If Bitcoin Cash blocks consistently become full, the plan is to increase block sizes once again.

Bitcoin developers do not turn a blind eye to their scaling issue. SegWit was implemented to effectively double the block size. As of July 2018, about 40% of payments use SegWit and the number continues to climb. The Bitcoin blockchain can now process at most about 7 transactions per second, not nearly enough for a global economy. By comparison, Visa can process about 24,000 transactions per second. But Bitcoin users value decentralization above all.
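The often-quoted figure of roughly 7 transactions per second follows from simple arithmetic, sketched here in C; the 250-byte average transaction size is an assumption used only for illustration:

    #include <stdio.h>

    int main(void) {
        double block_bytes   = 1000000.0;   /* roughly 1 MB per block                  */
        double block_seconds = 600.0;       /* one block roughly every 10 minutes      */
        double tx_bytes      = 250.0;       /* assumed average transaction size        */
        double tps = block_bytes / tx_bytes / block_seconds;
        printf("throughput: about %.1f transactions per second\n", tps);  /* ~6.7      */
        return 0;
    }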

Supporters of Bitcoin want as many people as possible to be able to download the full blockchain to help verify payments. With 1-megabyte blocks added roughly every 10 minutes, the blockchain is already over 200 gigabytes. If the blocks were made bigger, it would quickly become more difficult for people to store the full blockchain on their computers, leading to a loss of decentralization. The larger the blocks, the more bandwidth is also required to send out and verify them.

The scaling solution currently being pursued involves building a second layer on top of the main blockchain, known as the Lightning Network. The Lightning Network, if properly scaled and implemented, would allow for nearly instant and free Bitcoin transactions. The transactions per second would also completely dwarf that of Visa. The Lightning Network is a work in progress, but growing in size every day. Another advantage of pursuing Lightning is that if it does not work, the main Bitcoin blockchain remains unaffected. And if the Lightning Network fails, there will still be people that hold their Bitcoin and simply wait for the next attempt at scaling.

Bitcoin Cash chose not to implement SegWit, a prerequisite for using the Lightning Network. Instead, they are all in on increasing block sizes to meet demand. When Bitcoin Cash moved to 8MB blocks, it could support 40-90 transactions per second. With Bitcoin Cash blocks now at 32MB, it can support even more.

With current levels of demand, Bitcoin Cash can settle most payments in about 10 minutes with a median fee close to a tenth of a cent. The median fee for Bitcoin is currently around 15 cents. Even with 32MB block sizes, the transaction throughput falls far below the requirement for global demand. However, Bitcoin Unlimited developers are already testing the idea of 1GB blocks and there is research into the potential of 1TB blocks.

It's possible that one day technology will reach the point where these block sizes are not an issue. However, today is not that day. By adding 1GB to the blockchain every 10 minutes, its size would quickly grow beyond the storage capacity of most personal desktop computers. Only a small, dedicated group with a massive amount of resources would be able to participate in the validation of the blockchain. With current levels of technology, increasing block sizes decreases decentralization in exchange for faster transactions. But with the main value of cryptocurrencies being decentralization, is this a tradeoff worth making? Clearly, the markets do not think so at this time, with Bitcoin's value fairly consistently 10 times that of Bitcoin Cash.
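A quick C calculation, under the simplifying assumption that every block is full, shows how fast the chain grows at different block sizes:

    #include <stdio.h>

    int main(void) {
        double blocks_per_year = 6.0 * 24.0 * 365.0;       /* one block every 10 minutes */
        double mb_per_year_1mb = blocks_per_year * 1.0;    /* with 1 MB blocks           */
        double gb_per_year_1gb = blocks_per_year * 1.0;    /* with 1 GB blocks           */
        printf("1 MB blocks add about %.0f MB (~%.0f GB) per year\n",
               mb_per_year_1mb, mb_per_year_1mb / 1000.0); /* roughly 53 GB per year     */
        printf("1 GB blocks add about %.0f GB (~%.0f TB) per year\n",
               gb_per_year_1gb, gb_per_year_1gb / 1000.0); /* roughly 53 TB per year     */
        return 0;
    }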

Of course, Bitcoin is far and away the most well-known name in cryptocurrency. Brand-name recognition is invaluable. You don't see popular cryptocurrencies with a unique name all of a sudden change their brand to a common adjective used on a variety of products (sorry, Nano fans). Numerous forks using the Bitcoin name are able to achieve large market caps by simply having Bitcoin in the name. This includes Bitcoin Gold, Bitcoin Diamond, Bitcoin Private, and BitcoinDark.

One of the most important aspects of the Bitcoin brand is Satoshi Nakamoto, the mysterious inventor of the cryptocurrency. The disappearance of Satoshi is one of the greatest gifts to Bitcoin. When creating a decentralized currency, a figurehead can potentially have too much influence. Charlie Lee, the inventor of Litecoin, wanting to avoid these potential conflicts of interest, sold his Litecoin. Despite intending the move to be positive for the future of Litecoin, he is continuously lambasted by members of the community. So with Bitcoin truly having no figurehead, it only enhances the feeling of decentralization, which is, as mentioned before, the main value of Bitcoin.

In juxtaposition to the mythical and mysterious Satoshi, we have the colorful and controversial leader of Bitcoin Cash, the Bitcoin Jesus, Roger Ver. His past shenanigans have been covered thoroughly so instead we will focus on his relentless and aggressive marketing for Bitcoin Cash.

Bitcoin Jesus Roger Ver

It's extremely likely that Roger's branding of Bitcoin Cash creates confusion for those new to cryptocurrency. For example, the Bitcoin Cash website, owned by Roger, is http://www.bitcoin.com. The Bitcoin Cash Twitter handle, also owned by Roger, is @Bitcoin. The Bitcoin Cash logo is the same as Bitcoin's but tilted in a slightly different direction.

Here is Roger's definition of Bitcoin Cash on bitcoin.com:

In addition to defining Bitcoin Cash as the updated version of Bitcoin, Roger refers to the original Bitcoin as "Bitcoin Core," a name rarely used by any other website.

At one point Bitcoin.com even referred to BCH as Bitcoin and not Bitcoin Cash. Roger eventually changed the website due to the threat of a lawsuit.

So the question is, do people accidentally buy Bitcoin Cash when they meant to buy Bitcoin? It's not clear, and anecdotes are not sufficient to answer the question, but it certainly seems possible if not probable. However, Roger's goals are clear. Roger wants Bitcoin to function as a usable currency with low fees. He also wants the world to see Bitcoin Cash as the true Bitcoin.

If both plans for scaling fail, you might continue to see the situation we have today. People trade for coins on less congested networks. In this scenario, it's possible that BTC and BCH could continue to exist in harmony. When moving funds between exchanges, it is common for people to trade for Litecoin or Bitcoin Cash. Both coins have sufficient liquidity and can move funds quickly with low fees.

In the search for a decentralized, usable cryptocurrency, it certainly makes sense for both Bitcoin and Bitcoin Cash to exist. There's no harm in exploring different pathways to an effective scaling of the networks. The race is on to see if the Lightning Network or larger blocks will scale. However, in the event that one coin scales before the other, it's possible you'd see the winner take all and the death of either BTC or BCH.


Follow this link:

Bitcoin Cash vs. Bitcoin | The Pros and Cons - CoinCentral

Eugenics – Wikipedia, the free encyclopedia


Eugenics is a social philosophy which advocates the improvement of human hereditary traits through various forms of intervention.[1] The purported goals have variously been to create healthier, more intelligent people, save society's resources, and lessen human suffering. Earlier proposed means of achieving these goals focused on selective breeding, while modern ones focus on prenatal testing and screening, genetic counseling, birth control, in vitro fertilization, and genetic engineering. Opponents argue that eugenics is immoral and is based on, or is itself, pseudoscience. Historically, eugenics has been used as a justification for coercive state-sponsored discrimination and human rights violations, such as forced sterilization of persons with genetic defects, the killing of the institutionalized and, in some cases, genocide of races perceived as inferior.

Selective breeding of human beings was suggested at least as far back as Plato, but the modern field, and term, was first formulated by Sir Francis Galton in 1865, drawing on the recent work of his cousin Charles Darwin. From its inception eugenics was supported by prominent thinkers, including Alexander Graham Bell, George Bernard Shaw, and Winston Churchill. Eugenics was an academic discipline at many colleges and universities. Its scientific reputation started to tumble in the 1930s, a time when Ernst Rüdin began incorporating eugenic rhetoric into the racial policies of Nazi Germany. After the postwar period, both the public and the scientific community generally associated eugenics with Nazi abuses, which included enforced racial hygiene, human experimentation, and the extermination of undesired population groups.

Developments in genetic, genomic, and reproductive technologies at the end of the 20th century, however, have raised many new ethical questions and concerns about what exactly constitutes the meaning of eugenics and what its ethical and moral status is.

The word eugenics etymologically derives from the Greek words eu (good) and gen (birth), and was coined by Francis Galton in 1883.

Eugenics has, from the very beginning, meant many different things to many different people. Historically, the term has been used to cover everything from prenatal care for mothers to forced sterilization and euthanasia. Much debate took place in the past, and takes place today, as to what exactly counts as eugenics.[2] Some types of eugenics, such as race-based eugenics and class-based eugenics, are sometimes called 'pseudo-eugenics' by proponents of strict eugenics that deals only with beneficial and detrimental intrinsic traits.

The term eugenics is often used to refer to movements and social policies that were influential during the early 20th century. In a historical and broader sense, eugenics can also be a study of "improving human genetic qualities". It is sometimes broadly applied to describe any human action whose goal is to improve the gene pool. Some forms of infanticide in ancient societies, present-day reprogenetics, preemptive abortions and designer babies have been (sometimes controversially) referred to as eugenic.

Because of its normative goals and historical association with scientific racism, as well as the development of the science of genetics, the western scientific community has mostly disassociated itself from the term "eugenics", although one can find advocates of what is now known as liberal eugenics. Ideological social determinists, some of whom have obtained college degrees in fields relevant to eugenics, often describe eugenics as a pseudoscience. Modern inquiries into the potential use of genetic engineering have led to an increased invocation of the history of eugenics in discussions of bioethics, most often as a cautionary tale. Some ethicists suggest that even non-coercive eugenics programs would be inherently unethical, though this view has been challenged by such thinkers as Nicholas Agar.[3]

Eugenicists advocate specific policies that (if successful) would lead to a perceived improvement of the human gene pool. Since defining what improvements are desired or beneficial is by many perceived as a cultural choice rather than a matter that can be determined objectively (e.g., by empirical, scientific inquiry), eugenics has often been deemed a pseudoscience. The most disputed aspect of eugenics has been the definition of "improvement" of the human gene pool, such as what is a beneficial characteristic and what is a defect. This aspect of eugenics has historically been tainted with scientific racism.

Early eugenicists were mostly concerned with perceived intelligence factors that often correlated strongly with social class. Many eugenicists took inspiration from the selective breeding of animals (where purebreds are often strived for) as their analogy for improving human society. The mixing of races (or miscegenation) was usually considered as something to be avoided in the name of racial purity. At the time this concept appeared to have some scientific support, and it remained a contentious issue until the advanced development of genetics led to a scientific consensus that the division of the human species into unequal races is unjustifiable. Some see this as an ideological consensus, since equality, just like inequality, is a cultural choice rather than a matter that can be determined objectively.

Eugenics has also been concerned with the elimination of hereditary diseases such as haemophilia and Huntington's disease. However, there are several problems with labeling certain factors as "genetic defects":

Similar concerns have been raised when a prenatal diagnosis of a congenital disorder leads to abortion (see also preimplantation genetic diagnosis).

Eugenic policies have been conceptually divided into two categories: positive eugenics, which encourage a designated "most fit" to reproduce more often; and negative eugenics, which discourage or prevent a designated "less fit" from reproducing. Negative eugenics need not be coercive. A state might offer financial rewards to certain people who submit to sterilization, although some critics might reply that this incentive along with social pressure could be perceived as coercion. Positive eugenics can also be coercive. Abortion by "fit" women was illegal in Nazi Germany.

During the 20th century, many countries enacted various eugenics policies and programs, including:

Most of these policies were later regarded as coercive, restrictive, or genocidal, and now few jurisdictions implement policies that are explicitly labeled as eugenic or unequivocally eugenic in substance (however labeled). However, some private organizations assist people in genetic counseling, and reprogenetics may be considered as a form of non-state-enforced "liberal" eugenics.

There are 3 main ways by which the methods of eugenics can be applied. They are:

There are also different goals of eugenics [4]. They are:

Selective breeding was suggested at least as far back as Plato, who believed human reproduction should be controlled by government. He recorded these ideals in The Republic: "The best men must have intercourse with the best women as frequently as possible, and the opposite is true of the very inferior." Plato proposed that the process be concealed from the public via a form of lottery. Other ancient examples include the polis of Sparta's purported practice of leaving weak babies outside of city borders to die. However, they would leave all babies outside for a length of time, and the survivors were considered stronger, while many "weaker" babies perished.

During the 1860s and 1870s, Sir Francis Galton systematized these ideas and practices according to new knowledge about the evolution of man and animals provided by the theory of his cousin Charles Darwin. After reading Darwin's Origin of Species, Galton noticed an interpretation of Darwin's work whereby the mechanisms of natural selection were potentially thwarted by human civilization. He reasoned that, since many human societies sought to protect the underprivileged and weak, those societies were at odds with the natural selection responsible for extinction of the weakest. Only by changing these social policies, Galton thought, could society be saved from a "reversion towards mediocrity", a phrase that he first coined in statistics and which later changed to the now common "regression towards the mean".[5]

Galton first sketched out his theory in the 1865 article "Hereditary Talent and Character", then elaborated it further in his 1869 book Hereditary Genius.[6] He began by studying the way in which human intellectual, moral, and personality traits tended to run in families. Galton's basic argument was that "genius" and "talent" were hereditary traits in humans (although neither he nor Darwin yet had a working model of this type of heredity). He concluded that, since one could use artificial selection to exaggerate traits in other animals, one could expect similar results when applying such models to humans. As he wrote in the introduction to Hereditary Genius:

According to Galton, society already encouraged dysgenic conditions, claiming that the less intelligent were out-reproducing the more intelligent. Galton did not propose any selection methods; rather, he hoped that a solution would be found if social mores changed in a way that encouraged people to see the importance of breeding.

Galton first used the word eugenic in his 1883 Inquiries into Human Faculty and Its Development, a book in which he meant "to touch on various topics more or less connected with that of the cultivation of race, or, as we might call it, with 'eugenic' questions." He appended a footnote to the word "eugenic" explaining that it derives from the Greek for "well born" or "good breeding".

In 1904 he clarified his definition of eugenics as "the science which deals with all influences that improve the inborn qualities of a race; also with those that develop them to the utmost advantage."[9]

Galton's formulation of eugenics was based on a strong statistical approach, influenced heavily by Adolphe Quetelet's "social physics". Unlike Quetelet, however, Galton did not exalt the "average man" but decried him as mediocre. Galton and his statistical heir Karl Pearson developed what was called the biometrical approach to eugenics, which developed new and complex statistical models (later exported to wholly different fields) to describe the heredity of traits. However, with the rediscovery of Gregor Mendel's hereditary laws, two separate camps of eugenics advocates emerged. One was made up of statisticians, the other of biologists. Statisticians thought the biologists had exceptionally crude mathematical models, while biologists thought the statisticians knew little about biology.[10]

Eugenics eventually referred to human selective reproduction with an intent to create children with desirable traits, generally through the approach of influencing differential birth rates. These policies were mostly divided into two categories: positive eugenics, the increased reproduction of those seen to have advantageous hereditary traits; and negative eugenics, the discouragement of reproduction by those with hereditary traits perceived as poor. Negative eugenic policies in the past have ranged from attempts at segregation to sterilization and even genocide. Positive eugenic policies have typically taken the form of awards or bonuses for "fit" parents who have another child. Relatively innocuous practices like marriage counseling had early links with eugenic ideology.

Eugenics differed from what would later be known as Social Darwinism. While both claimed intelligence was hereditary, eugenics asserted that new policies were needed to actively change the status quo towards a more "eugenic" state, while the Social Darwinists argued society itself would naturally "check" the problem of "dysgenics" if no welfare policies were in place (for example, the poor might reproduce more but would have higher mortality rates).

State policies in some Latin American countries advocated the whitening of society by increased European immigration and the eradication of indigenous populations.[citation needed] This can be seen particularly in Argentina and Brazil; in these countries this process is known as blanqueamiento and branqueamento, respectively.

One of the earliest modern advocates of eugenic ideas (before they were labeled as such) was Alexander Graham Bell. In 1881 Bell investigated the rate of deafness on Martha's Vineyard, Massachusetts. From this he concluded that deafness was hereditary in nature and recommended a marriage prohibition against the deaf ("Memoir upon the formation of a deaf variety of the human Race") even though he was married to a deaf woman. Like many other early eugenicists, he proposed controlling immigration for the purpose of eugenics and warned that boarding schools for the deaf could possibly be considered as breeding places of a deaf human race.

Though eugenics is today often associated with racism, it was not always so; both W.E.B. DuBois and Marcus Garvey supported eugenics or ideas resembling eugenics as a way to reduce African American suffering and improve the stature of African Americans.[citation needed] Legal methods of eugenics also included state antimiscegenation laws prohibiting interracial marriage, which the U.S. Supreme Court declared unconstitutional in 1967.

Nazi Germany under Adolf Hitler was infamous for eugenics programs which attempted to maintain a "pure" German race through a series of programs that ran under the banner of "racial hygiene". Among other activities, the Nazis performed extensive experimentation on live human beings to test their genetic theories, ranging from simple measurement of physical characteristics to the horrific experiments carried out by Josef Mengele for Otmar von Verschuer on twins in the concentration camps. During the 1930s and 1940s, the Nazi regime forcibly sterilized hundreds of thousands of people whom they viewed as mentally and physically "unfit", an estimated 400,000 between 1934 and 1937. The scale of the Nazi program prompted American eugenics advocates to seek an expansion of their program, with one complaining that "the Germans are beating us at our own game".[11] The Nazis went further, however, killing tens of thousands of the institutionalized disabled through compulsory "euthanasia" programs.[12]

They also implemented a number of "positive" eugenics policies, giving awards to "Aryan" women who had large numbers of children and encouraged a service in which "racially pure" single women were impregnated by SS officers (Lebensborn). Many of their concerns for eugenics and racial hygiene were also explicitly present in their systematic killing of millions of "undesirable" people including Jews, gypsies, Jehovah's Witnesses and homosexuals during the Holocaust (much of the killing equipment and methods employed in the death camps were first developed in the euthanasia program). The scope and coercion involved in the German eugenics programs along with a strong use of the rhetoric of eugenics and so-called "racial science" throughout the regime created an indelible cultural association between eugenics and the Third Reich in the postwar years.[13]

The second largest eugenics movement was in the United States. Beginning with Connecticut in 1896, many states enacted marriage laws with eugenic criteria, prohibiting anyone who was "epileptic, imbecile or feeble-minded" from marrying. In 1898 Charles B. Davenport, a prominent American biologist, began as director of a biological research station based in Cold Spring Harbor where he experimented with evolution in plants and animals. In 1904 Davenport received funds from the Carnegie Institution to found the Station for Experimental Evolution. The Eugenics Record Office opened in 1910 while Davenport and Harry H. Laughlin began to promote eugenics.[14]

During the 20th century, researchers became interested in the idea that mental illness could run in families and conducted a number of studies to document the heritability of such illnesses as schizophrenia, bipolar disorder, and depression. Their findings were used by the eugenics movement as proof for its cause. State laws were written in the late 1800s and early 1900s to prohibit marriage and force sterilization of the mentally ill in order to prevent the "passing on" of mental illness to the next generation. These laws were upheld by the U.S. Supreme Court in 1927 and were not abolished until the mid-20th century. By 1945 over 45,000 mentally ill individuals in the United States had been forcibly sterilized.

In years to come, the ERO collected a mass of family pedigrees and concluded that those who were unfit came from economically and socially poor backgrounds. Eugenicists such as Davenport, the psychologist Henry H. Goddard and the conservationist Madison Grant (all well respected in their time) began to lobby for various solutions to the problem of the "unfit". (Davenport favored immigration restriction and sterilization as primary methods; Goddard favored segregation in his The Kallikak Family; Grant favored all of the above and more, even entertaining the idea of extermination.)[15] Though their methodology and research are now understood as highly flawed, at the time this was seen as legitimate scientific research.[citation needed] It did, however, have scientific detractors (notably, Thomas Hunt Morgan, one of the few Mendelians to explicitly criticize eugenics), though most of these focused more on what they considered the crude methodology of eugenicists, and the characterization of almost every human characteristic as being hereditary, rather than the idea of eugenics itself.[16]

The idea of "genius" and "talent" is also considered by William Graham Sumner, a founder of the American Sociological Society (now called the American Sociological Association). He maintained that if the government did not meddle with the social policy of laissez-faire, a class of genius would rise to the top of the system of social stratification, followed by a class of talent. Most of the rest of society would fit into the class of mediocrity. Those who were considered to be defective (mentally retarded, handicapped, etc.) had a negative effect on social progress by draining off necessary resources. They should be left on their own to sink or swim. But those in the class of delinquent (criminals, deviants, etc.) should be eliminated from society ("Folkways", 1907).

With the passage of the Immigration Act of 1924, eugenicists for the first time played a central role in the Congressional debate as expert advisers on the threat of "inferior stock" from eastern and southern Europe. The act reduced immigration from abroad to 15 percent of previous levels, with the aim of controlling the number of "unfit" individuals entering the country, and it strengthened existing laws prohibiting race mixing in an attempt to maintain the gene pool.[17] Eugenic considerations also lay behind the adoption of incest laws in much of the U.S. and were used to justify many antimiscegenation laws.[18]

Some states sterilized "imbeciles" for much of the 20th century. The U.S. Supreme Court ruled in the 1927 Buck v. Bell case that the state of Virginia could sterilize those it thought unfit. The most significant era of eugenic sterilization was between 1907 and 1963, when over 64,000 individuals were forcibly sterilized under eugenic legislation in the United States.[19] A favorable report on the results of sterilization in California, by far the state with the most sterilizations, was published in book form by the biologist Paul Popenoe and was widely cited by the Nazi government as evidence that wide-reaching sterilization programs were feasible and humane. When Nazi administrators went on trial for war crimes in Nuremberg after World War II, they justified the mass sterilizations (over 450,000 in less than a decade) by citing the United States as their inspiration.[20]

Almost all non-Catholic Western nations adopted some form of eugenic legislation. In July 1933 Germany passed a law allowing for the involuntary sterilization of "hereditary and incurable drunkards, sexual criminals, lunatics, and those suffering from an incurable disease which would be passed on to their offspring."[21] Canada carried out thousands of forced sterilizations, which lasted into the 1970s; many First Nations people (native Canadians) were targeted, as well as immigrants from Eastern Europe, as the program identified racial and ethnic minorities as genetically inferior. Sweden forcibly sterilized 62,000 people as part of a eugenics program over a 40-year period, primarily the mentally ill in the later decades but also ethnic or racial minorities early on. As in other programs, ethnicity and race were believed to be connected to mental and physical health. While many Swedes disliked the program, politicians generally supported it; the ruling left backed it mainly as a means of promoting social health, while on the right it was more about racial protectionism. (The Swedish government has subsequently paid damages to those involved.) Besides the large-scale program in the United States, Australia, the UK, Norway, France, Finland, Denmark, Estonia, Iceland, and Switzerland also ran programs to sterilize people the government declared to be mentally deficient. Singapore practiced a limited form of eugenics that encouraged marriage among university graduates, with separate matchmaking services for graduates and non-graduates, in the hope that the former would produce better children.[22]

Various authors, notably Stephen Jay Gould, have repeatedly asserted that restrictions on immigration passed in the United States during the 1920s (and overhauled in 1965) were motivated by the goals of eugenics, in particular, a desire to exclude races considered to be inferior from the national gene pool. During the early 20th century, the United States and Canada began to receive far higher numbers of Southern and Eastern European immigrants. Influential eugenicists like Lothrop Stoddard and Harry Laughlin (who was appointed as an expert witness for the House Committee on Immigration and Naturalization in 1920) presented arguments that these were inferior races that would pollute the national gene pool if their numbers went unrestricted. It has been argued that this stirred both Canada and the United States into passing laws creating a hierarchy of nationalities, rating them from the most desirable Anglo-Saxon and Nordic peoples to the Chinese and Japanese immigrants, who were almost completely banned from entering the country.[23] However, several people, in particular Franz Samelson, Mark Snyderman and Richard Herrnstein, have argued that, based on their examination of the records of the congressional debates over immigration policy, Congress gave virtually no consideration to these factors. According to these authors, the restrictions were motivated primarily by a desire to maintain the country's cultural integrity against a heavy influx of foreigners.[24] This interpretation is not, however, accepted by most historians of eugenics.

Some who disagree with the idea of eugenics in general contend that eugenics legislation still had benefits. Margaret Sanger (founder of Planned Parenthood of America) found it a useful tool to urge the legalization of contraception. In its time eugenics was seen by many as scientific and progressive, the natural application of knowledge about breeding to the arena of human life. Before the death camps of World War II, the idea that eugenics could lead to genocide was not taken seriously.

After the experience of Nazi Germany, many ideas about "racial hygiene" and "unfit" members of society were publicly renounced by politicians and members of the scientific community. The Nuremberg Trials against former Nazi leaders revealed to the world many of the regime's genocidal practices and resulted in formalized policies of medical ethics and the 1950 UNESCO statement on race. Many scientific societies released their own similar "race statements" over the years, and the Universal Declaration of Human Rights, developed in response to abuses during the Second World War, was adopted by the United Nations in 1948 and affirmed, "Men and women of full age, without any limitation due to race, nationality or religion, have the right to marry and to found a family."[25] Subsequently, the 1978 UNESCO declaration on race and racial prejudice states that the fundamental equality of all human beings is the ideal toward which ethics and science should converge.[26]

In reaction to Nazi abuses, eugenics became almost universally reviled in many of the nations where it had once been popular (however, some eugenics programs, including sterilization, continued quietly for decades). Many prewar eugenicists engaged in what they later labeled "crypto-eugenics", purposefully taking their eugenic beliefs "underground" and becoming respected anthropologists, biologists and geneticists in the postwar world (including Robert Yerkes in the U.S. and Otmar von Verschuer in Germany). Californian eugenicist Paul Popenoe founded marriage counseling during the 1950s, a career change which grew from his eugenic interests in promoting "healthy marriages" between "fit" couples.[27]

High school and college textbooks from the 1920s through the '40s often had chapters touting the scientific progress to be had from applying eugenic principles to the population. Many early scientific journals devoted to heredity in general were run by eugenicists and featured eugenics articles alongside studies of heredity in nonhuman organisms. After eugenics fell out of scientific favor, most references to eugenics were removed from textbooks and subsequent editions of relevant journals. Even the names of some journals changed to reflect new attitudes. For example, Eugenics Quarterly became Social Biology in 1969 (the journal still exists today, though it looks little like its predecessor). Notable members of the American Eugenics Society (1922-94) during the second half of the 20th century included Joseph Fletcher, originator of Situational ethics; Dr. Clarence Gamble of the Procter & Gamble fortune; and Garrett Hardin, a population control advocate and author of The Tragedy of the Commons.

Despite the changed postwar attitude towards eugenics in the U.S. and some European countries, a few nations, notably Canada and Sweden, maintained large-scale eugenics programs, including forced sterilization of mentally handicapped individuals, as well as other practices, until the 1970s. In the United States, sterilizations tapered off in the 1960s, though the eugenics movement had largely lost popular and political support by the end of the 1930s.[28]

Beginning in the 1980s, the history and concept of eugenics were widely discussed as knowledge about genetics advanced significantly. Endeavors such as the Human Genome Project made the effective modification of the human species seem possible again (as did Darwin's initial theory of evolution in the 1860s, along with the rediscovery of Mendel's laws in the early 20th century). The difference at the beginning of the 21st century was the guarded attitude towards eugenics, which had become a watchword to be feared rather than embraced.

A few scientific researchers such as psychologist Richard Lynn, psychologist Raymond Cattell, and doctor Gregory Stock have openly called for eugenic policies using modern technology, but they represent a minority opinion in current scientific and cultural circles.[29]

Because of its association with compulsory sterilization and the racial ideals of the Nazi Party, the word eugenics is rarely used by the advocates of such programs.

Only a few governments in the world have anything resembling eugenic programs today, the most notable being China. In 1993, the Chinese government announced a law, "On Eugenics and Health Protection," designed to "avoid new births of inferior quality and heighten the standards of the whole population."[2] In 1994 it passed the "Maternal and Infant Health Care Law", which included mandatory premarital screenings for "genetic diseases of a serious nature" and "relevant mental disease". Those who were diagnosed with such diseases were required either not to marry, to agree to "long-term contraceptive measures", or to submit to sterilization. (See also: One-child policy)

A similar screening policy (including prenatal screening and abortion) intended to reduce the incidence of thalassemia exists on both sides of the island of Cyprus. Since the program's implementation in the 1970s, it has reduced the ratio of children born with the hereditary blood disease from 1 out of every 158 births to almost zero.

Dor Yeshorim, a program that seeks to reduce the incidence of Tay-Sachs disease, cystic fibrosis, Canavan disease, Fanconi anemia, familial dysautonomia, glycogen storage disease, Bloom's syndrome, Gaucher disease, Niemann-Pick disease, and mucolipidosis IV among certain Jewish communities, is another screening program that has drawn comparisons with liberal eugenics.[3] In Israel, the general public is advised, at the expense of the state, to take genetic tests to diagnose these diseases before the birth of a baby. If an unborn baby is diagnosed with one of these diseases, of which Tay-Sachs is the most commonly known, the pregnancy may be terminated, subject to consent. Most other Ashkenazi Jewish communities also run screening programs because of the higher incidence of these genetic diseases. In some Jewish communities the ancient custom of matchmaking (shidduch) is still practiced, and to prevent the infant deaths that invariably result from homozygosity for Tay-Sachs, organizations such as the strongly observant Dor Yeshorim (founded by a rabbi who lost four children to the disease in order to spare others the same tragedy) test young couples to check whether they risk passing on fatal conditions. If both the young man and woman are Tay-Sachs carriers, it is common for the match to be broken off. Judaism, like numerous other religions, discourages abortion unless there is a risk to the mother, in which case her needs take precedence. The effort is aimed not at eradicating the hereditary traits but at preventing homozygosity; the actual impact of this program on gene frequencies is unknown.

In modern bioethics literature, the history of eugenics presents many moral and ethical questions. Commentators have suggested that a new "eugenics" will come from reproductive technologies that allow parents to create so-called "designer babies" (what the biologist Lee M. Silver prominently called "reprogenetics"). It has been argued that this "non-coercive" form of biological "improvement" will be motivated predominantly by individual competitiveness and the desire to create "the best opportunities" for children, rather than by the urge to improve the species as a whole that characterized the early 20th-century forms of eugenics. Because of this non-coercive nature, the lack of state involvement and the difference in goals, some commentators have questioned whether such activities are eugenics or something else altogether. But critics note[citation needed] that Francis Galton did not advocate coercion when he defined the principles of eugenics; in other words, eugenics does not inherently mean coercion. It is, according to Galton, who originated the term, the proper label for the bioengineering of "better" human beings.

Daniel Kevles argues that eugenics and the conservation of natural resources are similar propositions. Both can be practiced foolishly so as to abuse individual rights, but both can be practiced wisely.

Some disability activists argue that, although their impairments may cause them pain or discomfort, what really disables them as members of society is a sociocultural system that does not recognize their right to genuinely equal treatment. They express skepticism that any form of eugenics could be to the benefit of the disabled considering their treatment by historical eugenic campaigns.

James D. Watson, the first director of the Human Genome Project, initiated the Ethical, Legal and Social Implications Program (ELSI), which has funded a number of studies into the implications of human genetic engineering, along with a prominent website on the history of eugenics.

Distinguished geneticists including Nobel Prize-winners John Sulston ("I don't think one ought to bring a clearly disabled child into the world")[31] and Watson ("Once you have a way in which you can improve our children, no one can stop it")[32] support genetic screening. Which ideas should be described as "eugenic" is still controversial in both public and scholarly spheres. Some observers, such as Philip Kitcher, have described the use of genetic screening by parents as making possible a form of "voluntary" eugenics.[33]

Some modern subcultures advocate different forms of eugenics assisted by human cloning and human genetic engineering, sometimes even as part of a new cult (see Raëlism, Cosmotheism, or Prometheism). These groups also talk of "neo-eugenics", "conscious evolution", or "genetic freedom".

Behavioral traits often identified as potential targets for modification through human genetic engineering include intelligence, depression, schizophrenia, alcoholism, sexual behavior (and orientation) and criminality.

Most recently in the United Kingdom, a court case, the Crown v. James Edward Whittaker-Williams, arguably set a precedent of banning sexual contact between people with "learning difficulties". The accused, a man with learning disabilities, was jailed for kissing and hugging a woman with learning disabilities. This was done under the Sexual Offences Act 2003, which redefines kissing and cuddling as sexual and states that those with learning difficulties are unable to give consent regardless of whether or not the act involved coercion. Opponents of the act have attacked it as bringing in eugenics through the back door under the guise of a requirement of "consent".[34]

One current line of thought links the legalization of abortion in the USA to the subsequent drop in the crime rate, treating it as an unintended eugenics experiment on a grand scale; the book Freakonomics expounds this theory. The chain of reasoning is that people obtaining abortions in the USA typically come from lower economic classes and have unwanted pregnancies resulting from poor choices and risky behavior; if such attributes are passed on through genetics, abortion would select strongly against the associated genes, so fewer criminals are produced. See [4] and [5] for typical articles. A similar selection might also operate against genes for high intelligence, since women carrying them may postpone reproduction because of career conflicts.

While the science of genetics increasingly provides means by which certain characteristics and conditions can be identified and understood, the complexity of human genetics, culture, and psychology means there is at this point no agreed objective way of determining which traits are ultimately desirable or undesirable. Eugenic manipulations that reduce the propensity for criminality and violence, for example, might leave the population unable to defend itself against, and thus enslaved by, an outside aggressor. On the other hand, genetic diseases like hemochromatosis can increase susceptibility to illness, cause physical deformities, and produce other dysfunctions. Eugenic measures against many of these diseases are already being undertaken in societies around the world, while measures against more subtle, poorly understood traits, such as criminality, remain in the realm of speculation and science fiction. The effects of diseases are essentially wholly negative, and societies everywhere seek to reduce their impact by various means, some of which are eugenic in all but name. The other traits discussed have positive as well as negative effects and are not generally targeted at present anywhere.[citation needed]

A common criticism of eugenics is that it inevitably leads to measures that are unethical (Lynn 2001). In a hypothetical scenario where it were scientifically established that a racial minority making up 5% of the population was on average less intelligent than the majority, it is more likely that the minority group as a whole would be subjected to a eugenics program than the least intelligent 5% of the population at large. For example, Nazi Germany's eugenic program within the German population resulted in protests and unrest, while the persecution of the Jews was met with silence.

H. L. Kaye wrote of "the obvious truth that eugenics has been discredited by Hitler's crimes" (Kaye 1989). R. L. Hayman argued "the eugenics movement is an anachronism, its political implications exposed by the Holocaust" (Hayman 1990).

Steven Pinker has stated that it is "a conventional wisdom among left-leaning academics that genes imply genocide." He has responded to this "conventional wisdom" by comparing the history of Marxism, which took the opposite position on genes, with that of Nazism:

But the 20th century suffered "two" ideologies that led to genocides. The other one, Marxism, had no use for race, didn't believe in genes and denied that human nature was a meaningful concept. Clearly, it's not an emphasis on genes or evolution that is dangerous. It's the desire to remake humanity by coercive means (eugenics or social engineering) and the belief that humanity advances through a struggle in which superior groups (race or classes) triumph over inferior ones.[35]

Richard Lynn argues that any social philosophy is capable of ethical misuse. Though Christian principles have aided in the abolition of slavery and the establishment of welfare programs, he notes that the Christian church has also burned many dissidents at the stake and waged wars against nonbelievers in which Christian crusaders slaughtered large numbers of women and children. Lynn argues the appropriate response is to condemn these killings, but believing that Christianity "inevitably leads to the extermination of those who do not accept its doctrines" is unwarranted (Lynn 2001).

Eugenic policies could also lead to loss of genetic diversity, in which case a culturally accepted improvement of the gene pool may, but would not necessarily, result in biological disaster due to increased vulnerability to disease, reduced ability to adapt to environmental change and other factors both known and unknown. This kind of argument from the precautionary principle is itself widely criticized. A long-term eugenics plan is likely to lead to a scenario similar to this because the elimination of traits deemed undesirable would reduce genetic diversity by definition.

Related to a decrease in diversity is the danger of non-recognition: if everyone were beautiful and attractive, it would be harder to tell individuals apart, because many of the distinctive, even unattractive, features and combinations of features that we use to recognize one another would be lost.

To the contrary, some studies have shown that dysgenic trends lead to a decrease in genetic diversity, a development that in theory could be countered by a eugenic program.[citation needed]

The possible elimination of the autism genotype is a significant political issue in the autism rights movement, which claims autism is a form of neurodiversity. Many advocates of Down Syndrome rights also consider Down Syndrome (Trisomy-21) a form of neurodiversity, though males with Down Syndrome are generally infertile.

In some instances, efforts to eradicate certain single-gene mutations would be nearly impossible. If the condition in question is a recessive trait, eliminating the individuals who visibly express it leaves most of the responsible alleles in the gene pool, hidden in unaffected heterozygous carriers. The Hardy-Weinberg principle describes this: with allele frequencies p and q, genotypes occur at equilibrium in the proportions p² + 2pq + q² = 1, so a rare recessive allele resides mostly in the 2pq carrier class. Genetic testing could in principle detect all heterozygous carriers, but only at great cost with current technology. Under normal circumstances it is only practical to eliminate a dominant allele from the gene pool; recessive traits can be severely reduced, but never eliminated unless the complete genetic makeup of every member of the pool were known. Since very few undesirable traits, such as Huntington's disease, are dominant, the practical value of "eliminating" traits is quite low.
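
To make that consequence concrete, here is a minimal sketch (my own illustration, not from the source): assuming random mating and complete selection against affected homozygotes, the recessive allele frequency q falls only to q/(1+q) each generation, so a rare allele needs on the order of 1/q generations just to halve. The function name below is mine.

```python
# Minimal sketch: decline of a recessive allele under complete selection
# against affected homozygotes, assuming random mating (Hardy-Weinberg).
# Standard recurrence: q_next = q / (1 + q); the allele persists in 2pq carriers.

def generations_to_halve(q0: float) -> int:
    """Generations of complete selection needed for q to fall from q0 to q0/2."""
    q, gens = q0, 0
    while q > q0 / 2:
        q = q / (1 + q)   # only qq homozygotes are removed from breeding
        gens += 1
    return gens

if __name__ == "__main__":
    for q0 in (0.1, 0.01, 0.001):
        print(f"q0 = {q0}: about {generations_to_halve(q0)} generations to halve")
```

Running this gives roughly 10, 100 and 1,000 generations for starting frequencies of 0.1, 0.01 and 0.001, which is the slowness the paragraph above describes.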

One website on logic has used the statement "Eugenics must be wrong because it was associated with the Nazis" as a typical example of the association fallacy known as reductio ad Hitlerum.[36] On the other hand, the stigmatization of eugenics because of this association has not at all slowed the application of medical technologies that decrease the incidence of birth defects, or the search for their causes.

Supporters of eugenics are often concerned about a dysgenic decline in intelligence, which they believe will lead to the collapse of the current civilization and has caused the collapse of past civilizations. In their view this would make eugenics a necessary evil, because the human suffering it might cause would pale in comparison to such a collapse.

Small differences in average IQ at the group level might theoretically have large effects on social outcomes. Herrnstein and Murray raised the mean IQ of the U.S. National Longitudinal Survey of Youth sample from 100 to 103 by randomly deleting individuals with IQs below 103 until the sample mean reached 103; the procedure was run twice and the results averaged to reduce error from the random selection. In this exercise, the group with an average IQ of 103 had a poverty rate 25% lower than the group with an average IQ of 100, and similarly substantial differences appeared in high school drop-out rates, crime rates, and other outcomes. The mechanics of the resampling are sketched below.
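
The deletion step can be illustrated with a toy sketch (my own, not from the source): it draws a synthetic sample from a normal distribution with mean 100 and standard deviation 15 instead of the actual NLSY data, the function name is hypothetical, and it reproduces only the resampling, not the poverty or crime comparisons.

```python
# Toy illustration: randomly delete below-threshold scores from a sample
# until its mean reaches a target, mimicking the resampling step described
# above. Synthetic N(100, 15) scores stand in for the real survey data.
import random

def raise_mean_by_deletion(scores, threshold=103.0, target_mean=103.0, seed=0):
    """Return a copy of `scores` with random below-threshold entries removed
    until the mean of the remaining scores reaches `target_mean`."""
    rng = random.Random(seed)
    scores = list(scores)
    below = [i for i, s in enumerate(scores) if s < threshold]
    rng.shuffle(below)                  # delete candidates in random order
    total, n, dropped = sum(scores), len(scores), set()
    for i in below:
        if total / n >= target_mean:    # stop once the target mean is reached
            break
        total -= scores[i]
        n -= 1
        dropped.add(i)
    return [s for i, s in enumerate(scores) if i not in dropped]

if __name__ == "__main__":
    rng = random.Random(42)
    sample = [rng.gauss(100, 15) for _ in range(10_000)]
    trimmed = raise_mean_by_deletion(sample)
    print(f"kept {len(trimmed)} of {len(sample)}; "
          f"new mean = {sum(trimmed) / len(trimmed):.1f}")
```

Herrnstein and Murray then compared poverty and other outcome rates between the original and trimmed samples; those comparisons require the survey's outcome data and are not reproduced here.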

Whether a global increase in intelligence would actually increase a nation's wealth remains disputed, since IQ is partially correlated with socioeconomic status, which might not change at all.

Eugenics is a recurrent theme in science fiction, often with both dystopian and utopian elements. The novel Brave New World (1932) by Aldous Huxley is usually taken as an expression of the fear that state control of human biology might result in permanent social stratification, a theme which also plays a role in the 1997 film Gattaca, whose plot turns on reprogenetics, genetic testing, and the social consequences of liberal eugenics. Boris Vian (under the pseudonym Vernon Sullivan) takes a more light-hearted approach in his novel Et on tuera tous les affreux ("And we'll kill all the ugly ones").

Novels touching upon the subject include The Gate to Women's Country by Sheri S. Tepper and That Hideous Strength by C.S. Lewis. The Eugenics Wars are a significant part of the background story of the Star Trek universe (the episodes "Space Seed", "Borderland", "Cold Station 12" and "The Augments", and the film The Wrath of Khan). Eugenics also plays a significant role in the Neanderthal Parallax trilogy, in which eugenics-practicing Neanderthals from a near-utopian parallel world create a gateway to Earth. The novel Cowl by Neal Asher describes the collapse of Western civilization due to dysgenics.

In Frank Herbert's Dune series of novels, selective breeding programs form a significant theme. Early in the series, the Bene Gesserit religious order manipulates breeding patterns over many generations in order to create the Kwisatz Haderach. In God Emperor of Dune, the emperor Leto II again manipulates human breeding in order to achieve his own ends. The Bene Tleilaxu also employed genetic engineering to create human beings with specific genetic attributes.

There tends to be a eugenic undercurrent in the science fiction concept of the supersoldier. Depictions of supersoldiers usually have them bred for combat or genetically selected for attributes that are beneficial to modern or future combat.

In the novels Methuselah's Children and Time Enough for Love by Robert A. Heinlein, a large trust fund is created to give financial encouragement to marriage among people (the Howard Families) whose parents and grandparents were long-lived. The result is a subset of Earth's population with significantly above-average life spans. Members of this group appear in many of the author's other works.

In Eoin Colfer's book The Supernaturalist, Ditto is a "Bartoli Baby", the name given to the failed experiments of the famed Dr. Bartoli, who tried to create a superior race of humans; the subjects ended up in arrested development, but with mutations including extrasensory perception and healing hands.

In Gene Roddenberry's science-fiction television series Andromeda, the entire Nietzschean race is founded on the principles of selective breeding.

In Larry Niven's Ringworld series, the character Teela Brown is a result of several generations of winners of the "Birthright Lottery", a system which attempts to encourage lucky people to breed.

In season 2 of Dark Angel, the main 'bad guy' Ames White is a member of a cult known as the Conclave which has infiltrated various levels of society to breed super-humans. They are trying to exterminate all the Transgenics, including the main character Max Guevara, whom they view as being genetically unclean for having some animal DNA spliced with human.

In the video game Grand Theft Auto: Vice City, a fictional character called Pastor Richards, a caricature of an extreme and insane televangelist, is featured as a guest on a radio discussion show about morality. On the show he describes shooting people who disagree with him and are not "morally correct"; the show's host describes this as "amateur eugenics".

See also Genetic engineering in fiction.


"Eugenics is the self-direction of human evolution": Logo from the Second International Congress of Eugenics, 1921, depicting it as a tree which unites a variety of different fields.

Eugenics is a social philosophy which advocates the improvement of human hereditary traits through various forms of intervention. The purported goals have variously been to create healthier, more intelligent people, save society's resources, and lessen human suffering. Earlier proposed means of achieving these goals focused on selective breeding while modern ones focus on prenatal testing and fetal screening genetic counseling, birth control, in vitro fertilization, and genetic engineering. Critics argue that eugenics was and still is a pseudoscience. Historically, eugenics has been used as a justification for coercive state-sponsored discrimination and severe human rights violations, such as forced sterilization (e.g., of those perceived to have mental or social defects) and even genocide.

Selective breeding of human beings was suggested at least as far back as Plato, but the modern field was first formulated by Sir Francis Galton in 1865, drawing on the recent work of his cousin, Charles Darwin. From its inception, eugenics (derived from the Greek "well born" or "good breeding") was supported by prominent thinkers, including Alexander Graham Bell, George Bernard Shaw, and Winston Churchill]], and was an academic discipline at many colleges and universities. Its scientific reputation tumbled in the 1930s, a time when Ernst Rdin began incorporating eugenic rhetoric into the racial policy of Nazi Germany During the postwar period both the public and the scientific community largely associated eugenics with Nazi abuses, which included enforced "racial hygiene" and extermination, although a variety of regional and national governments maintained eugenic programs until the 1970s.

Definitions of the term vary. The term eugenics is often used to refer to a movement and social policy that was influential during the first half of the twentieth century. In an historical and broader sense, eugenics can also be a study of "improving human genetic qualities". It is sometimes more broadly applied to describe any human action whose goal is to improve the gene pool. Some forms of infanticide in ancient societies, present-day reprogenetics, pre-emptive abortions and designer babies have been (sometimes controversially) referred to as eugenics.

Because of its normative goals and historical association with scientific racism, as well as the development of the science of genetics, the international scientific community has mostly disassociated itself from the term "eugenics", sometimes referring to it as a pseudo-science, although one can find advocates of what is now known as liberal eugenics. Modern inquiries into the potential use of genetic engineering have led to an increased invocation of the history of eugenics in discussions of bioethics, most often as a cautionary tale. Some ethicists suggest that even non-coercive eugenics programs would be inherently unethical, though this view has been challenged by such thinkers as Nicholas Agar.[1]

Eugenicists advocate specific policies that (if successful) would lead to a perceived improvement of the human gene pool. Since defining what improvements are desired or beneficial is arguably a cultural choice rather than a matter that can be determined objectively (e.g. by empirical, scientific inquiry), eugenics has been deemed pseudo-science by many. The most disputed aspect of eugenics has been the definition of "improvement" of the human gene pool, such as what is a beneficial characteristic and what is a defect. This aspect of eugenics has historically been tainted with scientific racism.

Early eugenicists were mostly concerned with perceived intelligence factors that often correlated strongly with social class. Many eugenicists took inspiration from the selective breeding of animals (where purebreeds are often strived for) as their analogy for improving human society. The mixing of races (or miscegenation) was usually considered as something to be avoided in the name of racial purity. At the time, this concept appeared to have some scientific support, and it remained a contentious issue until the advanced development of genetics led to a scientific consensus that the division of the human species into unequal races is unjustifiable.

Eugenics has also been concerned with the elimination of hereditary diseases such as haemophilia and Huntington's disease. However, there are several problems with labeling certain factors as "genetic defects":

Similar concerns have been raised when a prenatal diagnosis of a congenital disorder leads to abortion (see also preimplantation genetic diagnosis).

Eugenic policies have been historically divided into two categories: positive eugenics, which encourage a designated "most fit" to reproduce more often, and negative eugenics, which discourage or prevent a designated "less fit" from reproducing. Negative eugenics need not always be coercive. A state might offer financial rewards to certain people who submit to sterilization, although some critics might reply that this incentive along with social pressure could be perceived as coercion. Positive eugenics can also be coercive. Abortion by "fit" women was illegal in Nazi Germany.

During the twentieth century, many countries enacted various eugenics policies and programs, including:

Most of these policies were later regarded as coercive, restrictive, or genocidal, and now few jurisdictions implement policies that are explicitly labeled as eugenic, or unequivically eugenenic in substance (however labeled). However, some private organizations assist people in genetic counseling, and reprogenetics may be considered as a form of non state-enforced, "liberal" eugenics.

Selective breeding was suggested at least as far back as Plato, who believed human reproduction should be controlled by government. He recorded these views in The Republic. "The best men must have intercourse with the best women as frequently as possible, and the opposite is true of the very inferior." Plato proposed that selection be performed by a fake lottery so people's feelings would not be hurt by any awareness of selection principles. Other ancient examples include the city of Sparta's purported practice of leaving weak babies outside of city borders to die.

Sir Francis Galton initially developed the ideas of eugenics.

During the 1860s and 1870s Sir Francis Galton systematized these ideas and practices according to new knowledge about the evolution of man and animals provided by the theory of his cousin Charles Darwin. After reading Darwin's Origin of Species, Galton noticed an interpretation of Darwin's work whereby the mechanisms of natural selection were potentially thwarted by human civilization. He reasoned that, since many human societies sought to protect the underprivileged and weak, those societies were at odds with the natural selection responsible for extinction of the weakest. Only by changing these social policies, Galton thought, could society be saved from a "reversion towards mediocrity," a phrase that he first coined in statistics, and which later changed to the now common, "regression towards the mean."[2]

Galton first sketched out his theory in the 1865 article "Hereditary Talent and Character," then elaborated it further in his 1869 book Hereditary Genius.[3] He began by studying the way in which human intellectual, moral, and personality traits tended to run in families. Galton's basic argument was that "genius" and "talent" were hereditary traits in humans (although neither he nor Darwin yet had a working model of this type of heredity). He concluded that, since one could use artificial selection to exaggerate traits in other animals, one could expect similar results when applying such models to humans. As he wrote in the introduction to Hereditary Genius:

According to Galton, society already encouraged dysgenic conditions, claiming that the less intelligent were out-reproducing the more intelligent. Galton did not propose any selection methods: rather, he hoped that a solution would be found if social mores changed in a way that encouraged people to see the importance of breeding.

Galton first used the word eugenic in his 1883 Inquiries into Human Faculty and Its Development, a book in which he meant "to touch on various topics more or less connected with that of the cultivation of race, or, as we might call it, with 'eugenic' questions." He included a footnote to the word "eugenic" which read:

In 1904 he clarified his definition of eugenics as:

Galton's formulation of eugenics was based on a strong statistical approach, influenced heavily by Adolphe Quetelet's "social physics." Unlike Quetelet however, Galton did not exhalt the "average man," but decried him as mediocre. Galton and his statistical heir Karl Pearson developed what was called the biometrical approach to eugenics, which developed new and complex statistical models (later exported to wholly different fields) to describe the heredity of traits. However, with the re-discovery of Gregor Mendel's hereditary laws, two separate camps of eugenics advocates emerged. One was made up of statisticians, the other of biologists. Statisticians thought the biologists had exceptionally crude mathematical models while biologists thought the statisticians knew little about biology.[7]

Eugenics eventually referred to human selective reproduction with an intent to create children with desirable traits, generally through the approach of influencing differential birth rates. These policies were mostly divided into two categories: Positive eugenics, the increased reproduction of those seen to have advantageous hereditary traits and negative eugenics, the discouragment of reproduction by those with hereditary traits perceived as poor. Negative eugenic policies in the past have ranged from attempts at segregation to sterilization and even genocide. Positive eugenic policies have typically taken the form of awards or bonuses for "fit" parents who have another child. Relatively innocuous practices like marriage counseling had early links with eugenic ideology.

Eugenics differed from what would later be known as Social Darwinism. While both claimed intelligence was hereditary, eugenics asserted that new policies were needed to actively change the status quo towards a more "eugenic" state, while the Social Darwinists argued society itself would naturally "check" the problem of "dysgenics" if no welfare policies were in place (for example, the poor might reproduce more but would have higher mortality rates).

State policies in some Latin American countries advocated the whitening of society by increased European immigration and the eradication of indigenous populations. This can be seen particularly in Brazil and Argentina; in these countries, this process is known as branqueamento and blanqueamiento, respectively.

One of the earliest modern advocates of eugenic ideas (before they were labeled as such) was Alexander Graham Bell. In 1881 Bell investigated the rate of deafness on Martha's Vineyard, Mass. From this he concluded that deafness was hereditary in nature and recommended a marriage prohibition against the deaf ("Memoir upon the formation of a deaf variety of the human Race"). Like many other early eugenicists he proposed controlling immigration for the purpose of eugenics and warned that boarding schools for the deaf could possibly be considered as breeding places of a deaf human race.

Though eugenics is today often associated with racism, it was not always so; both W.E.B. DuBois and Marcus Garvey supported eugenics or ideas resembling eugenics as a way to reduce African American suffering and improve the stature of African Americans.

"We do not stand alone": Nazi poster from 1936 with flags of other countries with compulsory sterilization legislation.

Nazi Germany under Adolf Hitler was infamous for eugenics programs which attempted to maintain a "pure" German race through a series of programs which ran under the banner of "racial hygiene." Among other activities, the Nazis performed extensive experimentation on live human beings to test their genetic theories, ranging from simple measurement of physical characteristics to the more ghastly experiments carried out by Josef Mengele for Otmar von Verschuer on twins in the concentration camps. During the 1930s and 1940s the Nazi regime forcibly sterilized hundreds of thousands of people whom they viewed as mentally and physically "unfit", an estimated 400,000 between 1934 and 1937. The scale of the Nazi program prompted American eugenics advocates to seek an expansion of their program, with one complaining that 'the Germans are beating us at our own game."[8] The Nazis went further however, killing tens of thousands of the institutionalized disabled through compulsory "euthanasia" programs.[9]

Nazi propaganda for their compulsory "euthanasia" program: "This person suffering from hereditary defects costs the community 60,000 Reichsmark during his lifetime. Fellow German, that is your money, too."

They also implemented a number of "positive" eugenics policies, giving awards to "Aryan" women who had large numbers of children and encouraged a service in which "racially pure" single women were impregnated by SS officers (Lebensborn). Many of their concerns for eugenics and racial hygiene were also explicitly present in their systematic killing of millions of "undesirable" people including Jews, gypsies, Jehovah's Witnesses and homosexuals during the Holocaust (and much of the killing equipment and methods employed in the death camps were first developed in their euthanasia program). The scope and coercion involved in the German eugenics programs along with a strong use of the rhetoric of eugenics and so-called "racial science" throughout the regime created an indelible cultural association between eugenics and the Third Reich in the postwar years.[10]

The second largest eugenics movement was in the United States. Beginning with Connecticut in 1896 many states enacted marriage laws with eugenic criteria, prohibiting anyone who was "epileptic, imbecile or feeble-minded" from marrying. In 1898 Charles B. Davenport, a prominent American biologist began as director of a biological research station based in Cold Spring Harbor where he experimented with evolution in plants and animals. In 1904 Davenport received funds from the Carnegie Institution to found the Station for Experimental Evolution. The Eugenics Record Office opened in 1910 while Davenport and Harry H. Laughlin began to promote eugenics.[11]

A pedigree chart from The Kallikak Family meant to show how one "illicit tryst" could lead to an entire generation of "imbeciles".

In years to come the ERO collected a mass of family pedigrees and concluded that those who were unfit came from economically and socially poor backgrounds. Eugenicists such as Davenport, the psychologist Henry H. Goddard and the conservationist Madison Grant (all well respected in their time) began to lobby for various solutions to the problem of the "unfit" (Davenport favored immigration restriction and sterilization as primary methods, Goddard favored segregation in his The Kallikak Family, Grant favored all of the above and more, even entertaining the idea of extermination).[12] Though their methodology and research methods are now understood as highly flawed, at the time this was seen as legitimate scientific research. It did, however, have scientific detractors (notably Thomas Hunt Morgan, one of the few Mendelians to explicitly criticize eugenics), though most of these focused more on what they considered the crude methodology of eugenicists, and the characterization of almost every human characteristic as being hereditary, rather than the idea of eugenics itself.[13]

The idea of "genius" and "talent" is also considered by William Graham Sumner, a founder of the American Sociological Society (now called the American Sociological Association). He maintained that if the government did not meddle with the social policy of laissez faire, a class of genius would rise to the top of the system of social stratification, followed by a class of talent. Most of the rest of society would fit into the class of mediocrity. Those who were considered to be defective (mentally retarded, handicapped, etc.) had a negative effect on social progress by draining off necessary resources. They should be left on their own to sink or swim. But, those in the class of delinquent (criminals, deviants, etc.) should be eliminated from society. "Folkways," 1907.

In 1924, the Immigration Act of 1924 was passed, with eugenicists for the first time playing a central role in the Congressional debate as expert advisers on the threat of "inferior stock" from Eastern and Southern Europe. This reduced the number of immigrants from abroad to fifteen percent from previous years, to control the number of "unfit" individuals entering the country. The new Act strengthened existing laws prohibiting race mixing in an attempt to maintain the gene pool.[14] Eugenic considerations also lay behind the adoption of incest laws in much of the USA and were used to justify many anti-miscegenation laws.[15]

Some states sterilized "imbeciles" for much of the 20th century. The US Supreme Court ruled in the 1927 Buck v. Bell case that the state of Virginia could sterilize those they thought unfit. The most significant era of eugenic sterilization was between 1907 and 1963 when over 64,000 individuals were forcibly sterilized under eugenic legislation in the United States.[16] A favorable report on the results of sterilization in California, by far the state with the most sterilizations, was published in book form by the biologist Paul Popenoe and was widely cited by the Nazi government as evidence that wide-reaching sterilization programs were feasible and humane. When Nazi administrators went on trial for war crimes in Nuremberg after World War II they justified the mass-sterilizations (over 450,000 in less than a decade) by citing the United States as their inspiration.[17]

Almost all non-Catholic western nations adopted some eugenics legislation. In July 1933 Germany passed a law allowing for the involuntary sterilization of "hereditary and incurable drunkards, sexual criminals, lunatics, and those suffering from an incurable disease which would be passed on to their offspring..."[18] Sweden forcibly sterilized 62,000 "unfits" as part of a eugenics program over a forty-year period. Similar incidents occurred in Canada, United States, Australia, Norway, Finland, Denmark, Estonia, Switzerland and Iceland for people the government declared to be mentally deficient. Singapore practiced a limited form of "positive" eugenics that involved encouraging marriage between college graduates in the hope they would produce better children.[19]

Various authors, notably Stephen Jay Gould, have repeatedly asserted that restrictions on immigration passed in the United States during the 1920s (and overhauled in 1965) were motivated by the goals of eugenics, in particular a desire to exclude "inferior" races from the national gene pool. During the early twentieth century the United States and Canada began to receive far higher numbers of southern and eastern European immigrants. Influential eugenicists like Lothrop Stoddard and Harry Laughlin (who was appointed as an expert witness for the House Committee on Immigration and Naturalization in 1920) presented arguments that these were inferior races that would pollute the national gene pool if their numbers went unrestricted. It has been argued that this stirred both Canada and the United States into passing laws creating a hierarchy of nationalities, rating them from the most desirable Anglo-Saxon and Nordic peoples to the Chinese and Japanese immigrants who were almost completely banned from entering the country.[20] However several people, in particular Franz Samelson, Mark Snyderman and Richard Herrnstein, have argued that, based on their examination of the records of the Congressional debates over immigration policy, Congress gave virtually no consideration to these factors. According to these authors, the restrictions were motivated primarily by a desire to maintain the country's cultural integrity against a heavy influx of foreigners.[21] This interpretation is not, however, accepted by most historians of eugenics.

Some who disagree with the idea of eugenics in general contend that eugenics legislation still had benefits. Margaret Sanger (founder of Planned Parenthood of America) found it a useful tool to urge the legalization of contraception. In its time, eugenics was seen by many as scientific and progressive, the natural application of knowledge about breeding to the arena of human life. Before the death camps of World War II, the idea that eugenics could lead to genocide was not taken seriously.

After the experience of Nazi Germany, many ideas about "racial hygiene" and "unfit" members of society were publicly renounced by politicians and members of the scientific community. The Nuremberg Trials against former Nazi leaders revealed to the world many of the regime's genocidal practices and resulted in formalized policies of medical ethics and the 1950 UNESCO statement on race. Many scientific societies released their own similar "race statements" over the years, and the Universal Declaration of Human Rights, developed in response to abuses during the Second World War, was adopted by the United Nations in 1948 and affirmed that "Men and women of full age, without any limitation due to race, nationality or religion, have the right to marry and to found a family." [2] Subsequently, the 1978 UNESCO declaration on race and racial prejudice stated that the fundamental equality of all human beings is the ideal toward which ethics and science should converge. [3]

In reaction to Nazi abuses, eugenics became almost universally reviled in many of the nations where it had once been popular (however some eugenics programs, including sterilization, continued quietly for decades). Many pre-war eugenicists engaged in what they later labeled "crypto-eugenics," purposefully taking their eugenic beliefs "underground" and becoming respected anthropologists, biologists and geneticists in the post-war world (including Robert Yerkes in the USA and Otmar von Verschuer in Germany). Californian eugenicist Paul Popenoe founded marriage counseling during the 1950s, a career change which grew from his eugenic interests in promoting "healthy marriages" between "fit" couples.[22]

High school and college textbooks from the 1920s through the 40s often had chapters touting the scientific progress to be had from applying eugenic principles to the population. Many early scientific journals devoted to heredity in general were run by eugenicists and featured eugenics articles alongside studies of heredity in non-human organisms. After eugenics fell out of scientific favor, most references to eugenics were removed from textbooks and subsequent editions of relevant journals. Even the names of some journals changed to reflect new attitudes. For example, "Eugenics Quarterly" became "Social Biology" in 1969 (the journal still existed in 2005 though it looked little like its predecessor). Notable members of the American Eugenics Society (1922-1994) during the second half of the 20th Century included Joseph Fletcher (originator of Situational ethics), Dr. Clarence Gamble of the Procter & Gamble fortune and Garrett Hardin, a population control advocate and author of The Tragedy of the Commons.

Despite the changed post-war attitude towards eugenics in the US and some European countries, a few nations, notably Canada and Sweden, maintained large-scale eugenics programs, including forced sterilization of mentally handicapped individuals, as well as other practices, until the 1970s. In the United States, sterilizations tapered off in the 1960s, though the eugenics movement had largely lost popular and political support by the end of the 1930s.[23]

Beginning in the 1980s the history and concept of eugenics were widely discussed as knowledge about genetics advanced significantly. Endeavors such as the Human Genome Project made the effective modification of the human species seem possible again (as did Darwin's initial theory of evolution in the 1860s, along with the rediscovery of Mendel's laws in the early 20th century). The difference at the beginning of the 21st century was the guarded attitude towards eugenics, which had become a watchword to be feared rather than embraced.

Only a few scientific researchers (such as the controversial psychologist Richard Lynn) have openly called for eugenic policies using modern technology, but they represent a minority opinion in current scientific and cultural circles.[24] One attempted implementation of a form of eugenics was a "genius sperm bank" (1980-1999) created by Robert Klark Graham, from which nearly 230 children were conceived (the best-known donor was Nobel Prize winner William Shockley). In the USA and Europe, though, these attempts have frequently been criticized as being in the same spirit as the classist and racist forms of eugenics of the 1930s. Results, in any case, have been spotty at best.

Because of its association with compulsory sterilization and the racial ideals of the Nazi Party, the word eugenics is rarely used by the advocates of such programs.

Only a few governments in the world today have anything resembling eugenic programs. In 1994 China passed the "Maternal and Infant Health Care Law" which included mandatory pre-marital screenings for "genetic diseases of a serious nature" and "relevant mental disease." Those who were diagnosed with such diseases were required either to not marry, agree to "long term contraceptive measures" or to submit to sterilization. This law was repealed in 2004.

A similar screening policy (including pre-natal screening and abortion) intended to reduce the incidence of thalassemia exists on both sides of the island of Cyprus. Since the program's implementation in the 1970s, it has reduced the ratio of children born with the hereditary blood disease from 1 out of every 158 births to almost zero.

Dor Yeshorim, a program which seeks to reduce the incidence of Tay-Sachs disease among certain Jewish communities, is another screening program which has drawn comparisons with eugenics. In Israel, at the expense of the state, the general public is advised to undergo genetic testing to diagnose the disease before a baby is born. If an unborn baby is diagnosed with Tay-Sachs, the pregnancy may be terminated, subject to consent. Most other Ashkenazi Jewish communities also run screening programmes because of the higher incidence of the disease. In some Jewish communities the ancient custom of matchmaking (shidduch) is still practised, and associations such as the strongly observant Dor Yeshorim (founded by a rabbi who lost four children to the condition) test young couples to check whether they risk passing on this disease or certain other fatal conditions. If both the young man and the young woman are Tay-Sachs carriers, it is common for the match to be broken off. Judaism, like numerous other religions, discourages abortion unless there is a risk to the mother, in which case her needs take precedence. Since all those born with the condition die in early childhood, these programs aim to prevent such deaths rather than to eradicate the gene directly, which is an incidental by-product.

In modern bioethics literature, the history of eugenics presents many moral and ethical questions. Commentators have suggested that a new "eugenics" will come from reproductive technologies that allow parents to create so-called "designer babies" (what the biologist Lee M. Silver prominently called "reprogenetics"). It has been argued that this "non-coercive" form of biological "improvement" will be predominantly motivated by individual competitiveness and the desire to create "the best opportunities" for children, rather than by an urge to improve the species as a whole, which characterized the early twentieth-century forms of eugenics. Because of this non-coercive nature, the lack of involvement by the state and the difference in goals, some commentators have questioned whether such activities are eugenics or something else altogether.

Some disability activists argue that, although their impairments may cause them pain or discomfort, what really disables them as members of society is a socio-cultural system that does not recognise their right to genuinely equal treatment. They express skepticism that any form of eugenics could be to the benefit of the disabled considering their treatment by historical eugenic campaigns.

James D. Watson, the first director of the Human Genome Project, initiated the Ethical, Legal and Social Implications Program (ELSI), which has funded a number of studies into the implications of human genetic engineering, along with a prominent website on the history of eugenics.

Distinguished geneticists including Nobel Prize winners John Sulston ("I don't think one ought to bring a clearly disabled child into the world")[26] and Watson ("Once you have a way in which you can improve our children, no one can stop it.")[27] support genetic screening. Which ideas should be described as "eugenic" remains controversial in both public and scholarly spheres. Some observers, such as Philip Kitcher, have described the use of genetic screening by parents as making possible a form of "voluntary" eugenics.[28]

Some modern subcultures advocate different forms of eugenics assisted by human cloning and human genetic engineering, sometimes even as part of a new cult (see Raëlism, Cosmotheism, or Prometheism). These groups also talk of "neo-eugenics", "conscious evolution", or "genetic freedom".

Behavioral traits often identified as potential targets for modification through human genetic engineering include intelligence, depression, schizophrenia, alcoholism, sexual behavior (and orientation) and criminality.

Most recently in the United Kingdom a court case, the Crown v. James Edward Whittaker-Williams, arguably set a precedent of banning sexual contact between people with learning disabilities. The accused, a man with learning disabilities, was jailed for kissing and hugging a woman with learning disabilities. This was done under the 2003 Sexual Offences Act, which redefines kissing and cuddling as sexual and states that those with learning difficulties are unable to give consent regardless of whether or not the act involved coercion. Opponents of the act have attacked it as bringing in eugenics through the back door under the guise of a requirement of "consent".

While the science of genetics has increasingly provided means by which certain characteristics and conditions can be identified and understood, given the complexity of human genetics and culture there is at this point no agreed objective means of determining which traits are ultimately desirable or undesirable. Would eugenic manipulations that reduce a population's propensity for risk-taking and violence, for example, lead to its extinction? On the other hand, there is universal agreement that many genetic diseases, such as Tay-Sachs, spina bifida, hemochromatosis, Down syndrome, Rh disease, etc., are quite harmful to the affected individuals and their families, and therefore to the societies to which they belong. Eugenic measures against many of these diseases are already being undertaken in societies around the world, while measures against more subtle, poorly understood traits such as risk-taking remain in the realm of speculation and science fiction. The effects of diseases are essentially wholly negative, and societies everywhere seek to reduce their impact by various means, some of which are eugenic in all but name. The other traits discussed have positive as well as negative effects and are not generally targeted at present anywhere.

A commonly advanced criticism of eugenics is that, as evidenced by its history, it inevitably leads to measures that are unethical (Lynn 2001). H. L. Kaye wrote of "the obvious truth that eugenics has been discredited by Hitler's crimes" (Kaye 1989). R. L. Hayman argued "the eugenics movement is an anachronism, its political implications exposed by the Holocaust" (Hayman 1990).

Steven Pinker has stated that it is "a conventional wisdom among left-leaning academics that genes imply genocide." He has responded to this "conventional wisdom" by comparing the history of Marxism, which took the opposite position on genes to that of Nazism.

Richard Lynn argues that any social philosophy is capable of ethical misuse. Though Christian principles have aided in the abolition of slavery and the establishment of welfare programs, he notes that the Christian church has also burned many dissidents at the stake and waged wars against nonbelievers in which Christian crusaders slaughtered large numbers of women and children. Lynn argues the appropriate response is to condemn these killings, but believing that Christianity "inevitably leads to the extermination of those who do not accept its doctrines" is unwarranted (Lynn 2001).

Eugenic policies could also lead to a loss of genetic diversity, in which case a culturally accepted improvement of the gene pool may, but would not necessarily, result in biological disaster due to increased vulnerability to disease, reduced ability to adapt to environmental change and other factors both known and unknown. This kind of argument from the precautionary principle is itself widely criticized. A long-term eugenics plan is likely to lead to such a scenario because the elimination of traits deemed undesirable would reduce genetic diversity by definition.

The possible elimination of the autism genotype is a significant political issue in the autism rights movement which claims autism is a form of neurodiversity.

One website on logic has used the statement "Eugenics must be wrong because it was associated with the Nazis" as a typical example of the association fallacy. [5] The stigmatization of eugenics because of this association, on the other hand, has not at all slowed the application of medical technologies that decrease the incidence of birth defects, nor the search for their causes.

In some instances, efforts to eradicate certain single-gene mutations would be nearly impossible. If the condition in question is a recessive trait, eliminating the individuals who visibly express it leaves most copies of the gene hidden in heterozygous carriers, who show no trait at all. This follows from the Hardy-Weinberg principle, which states that at equilibrium the genotype frequencies in a population are given by p² + 2pq + q² = 1. With genetic testing it may be possible to detect all of the heterozygous carriers, but only at great cost.
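To see how slowly a rare recessive allele responds to such selection, here is a minimal sketch (not from the source article) that iterates the standard population-genetics result: if every affected homozygote is prevented from reproducing, the allele frequency q falls to q / (1 + q) each generation. The starting frequency and generation counts are illustrative assumptions.

```python
# Illustrative sketch: selection against a recessive trait removes alleles
# very slowly, because most copies of a rare recessive allele sit in
# heterozygous carriers (Hardy-Weinberg: p^2 + 2pq + q^2 = 1).

def frequency_after_selection(q: float, generations: int) -> float:
    """Recessive allele frequency after full selection against homozygotes."""
    for _ in range(generations):
        q = q / (1 + q)   # standard result: q_next = q / (1 + q)
    return q

if __name__ == "__main__":
    q0 = 0.01  # assumed starting frequency: 1% of gene copies
    for gens in (1, 10, 100):
        q = frequency_after_selection(q0, gens)
        print(f"after {gens:>3} generations: q = {q:.5f}")
    # Even after 100 generations of total selection, q only drops from
    # 0.01000 to 0.00500 - the trait is never "eradicated" this way.
```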

View post:

Eugenics | Psychology Wiki | FANDOM powered by Wikia

What Is The Blockchain? – Pixel Privacy

The Plain English Version

The word blockchain is on everyone's lips right now. Although you might not really understand what blockchain means, chances are you've heard people talk about it (a lot).

Some people believe it will change our world for the better or even replace the banks that we currently know - or make similarly extravagant claims.

Although the blockchain came to the general public's attention when the crypto market absolutely exploded, the majority of people still don't understand how it works, what it's used for or what all the fuss is about.

In this guide, I'll answer all of your questions regarding the blockchain, nodes, the ledger(s) and the security of the blockchain. The goal is to explain the blockchain in plain English, so anyone can understand it.

Note: In this guide, I'll be explaining how the concept of the blockchain works - I won't be explaining how the blockchain technology is implemented in detail, as that's beyond the scope of this article.

Let's get started.

What Is the Concept of Blockchain?

First, it's important to understand two basic terms.

1. Bitcoin (a digital currency)

It's important to understand that Bitcoin is not a blockchain - this is a point that confuses many people. Bitcoin is merely a digital currency based on blockchain technology.

2. Blockchain

Blockchain is the technology that enables the movement of digital assets - Bitcoin, for example - from one individual to another.

So, what is the concept of the blockchain, exactly?

To get a better understanding, let's look at an example of an existing problem that the blockchain attempts to solve. I'm talking about transferring money.

Imagine I (Bill) want to send money to my friend Jenny. Traditionally, this is done through a trusted third party - a bank or credit card company - that sits between us.

For the sake of the example, let's say that I live in New York, while Jenny lives in London.

When I want to send some money to Jenny, I ask the third party to send it to her. The third party will then identify Jenny and her bank account.

When that's completed, the third party will transfer the money to Jenny's personal account, while also taking a small transaction fee.

This process typically takes 3-5 days. Some banks process such requests faster than others, but it takes some time.

Now, the entire concept behind the blockchain technology focuses on the elimination of that trusted third party, the middleman.

In addition, the blockchain aims to complete this process much faster than the current system - almost instantly!

Finally, the blockchain attempts to do all of this at a much lower cost (very low transaction fees).

Let's take a look at how the blockchain technology addresses this money transfer problem. There are various principles and concepts involved.

On Investopedia, a ledger is defined as:

A company's set of numbered accounts for its accounting records. The ledger provides a complete record of financial transactions over the life of the company.

The ledger holds account information that is needed to prepare financial statements and includes accounts for assets, liabilities, owners' equity, revenues and expenses.

Simply put, the ledger is where a chain of transactions is linked together, creating a record of financial transactions.

The term "open ledger" means that anyone can join the network of the ledger, and all transactions are stored in one ledger. The network saves all the transaction data in one centralized ledger (storage).

A "distributed ledger" is essentially the same as an open ledger in that both are accessible to anyone. The main difference here is that a distributed ledger is decentralized, meaning that everyone in the network owns a copy of the ledger on a node.

A node is another word for a device that every participant in the network possesses, which contains a copy of the open ledger.

You can use the term "node" as shorthand for a participant in the chain of the distributed ledger. So, the examples below contain 4 nodes.

Open Ledger

In order to give you a crystal-clear understanding of this concept, I'll first illustrate it with an example.

For instance, imagine that there's a network of 4 individuals: Bill, Jenny, Mark and Justin. Every individual in this network wants to send and receive money to/from one another.

At the original formation of this network, I (Bill) have $20. The concept of an open ledger, and how it's implemented in this situation, works as follows.

Imagine that there are a couple of transactions made between these 4 individuals.

Every transaction is registered and linked to the already-existing transactions in the open ledger. That means that, in this example, there are 4 transaction links in the chain of the open ledger, which shows which transactions were made and to whom.

The chain of transactions in an open ledger is open to the public, for anyone to access and see. That means that everyone in the network can identify where the money is and how much money every individual has.

In addition, every individual in the network of the open ledger can decide whether a transaction in the chain is valid or invalid.

Let's go back to the example above, where I owned $20 at the genesis of the network (the start of the chain) and I transferred $10 to Jenny (so, I have $10 left). But now, I attempt to send another $15 to Mark.

As a result, everyone on the network is able to identify this transaction as an invalid transaction, since I'm short $5. Therefore, this transaction won't be added to the chain of transactions in the open ledger.
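As a rough sketch of that validation rule, the snippet below replays a public list of transactions to compute balances and rejects any transfer the sender cannot cover. The names and dollar amounts come from the example above; the data structure itself is an illustrative assumption, not how any real blockchain stores transactions.

```python
# Illustrative open-ledger check: anyone can replay the public transaction
# history and reject a transfer that the sender cannot cover.

ledger = [
    {"from": None,   "to": "Bill",  "amount": 20},  # genesis: Bill starts with $20
    {"from": "Bill", "to": "Jenny", "amount": 10},
]

def balance(person: str) -> int:
    """Replay the whole ledger to work out someone's current balance."""
    total = 0
    for tx in ledger:
        if tx["to"] == person:
            total += tx["amount"]
        if tx["from"] == person:
            total -= tx["amount"]
    return total

def try_add(sender: str, receiver: str, amount: int) -> bool:
    """Append a transaction only if the sender has enough funds."""
    if balance(sender) < amount:
        return False  # everyone on the network would flag this as invalid
    ledger.append({"from": sender, "to": receiver, "amount": amount})
    return True

print(try_add("Bill", "Mark", 15))  # False: Bill only has $10 left
print(try_add("Bill", "Mark", 5))   # True: within Bill's remaining balance
```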

Distributed Ledger

One of the most important goals of the blockchain technology is to create a decentralized solution. The concept of a decentralized chain of connections is called the distributed ledger.

In other words, every individual (node) in the network will receive a copy of the ledger. That means that I will hold a copy of the ledger in my node, as will Jenny, Justin and Mark.

At this point, the goal of eliminating the previously mentioned centralized, trusted third party has been accomplished.

However, doing so creates a new problem: we now have 4 different copies of the ledger in the network of transactions. It's crucial that every individual (participant) in the network owns a properly synchronized version, so that every node contains the same copy of the ledger.

To solve this issue, let's continue by looking at another principle of blockchain technology.

So, you now know that the distributed ledger is an open network which is accessible to everyone. The copy of the ledger is distributed across all the nodes (participants) within the network of the ledger.

But how are new transactions validated within a network?

For the sake of the example, let's pretend that I want to move $10 to Jenny. When I make this transaction, I will automatically share and publish my intended transaction to the network.

As a result, every participant in the network will be notified of this intended transaction and will see that I want to move $10 to Jenny. Because my transaction has not been validated yet, it's still an unvalidated transaction, which means it's not recorded in the ledger yet.
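A minimal sketch of that announcement step is shown below: the intended transaction sits in a pending pool that every participant can see until a miner validates it. The pending-pool structure and the print statements are illustrative assumptions, not Bitcoin's actual networking protocol.

```python
# Illustrative sketch: an intended (unvalidated) transaction is broadcast to
# the whole network and waits in a pending pool until a miner validates it.

nodes = ["Bill", "Jenny", "Mark", "Justin"]
pending_pool = []   # announced transactions that are not yet in the ledger

def broadcast(tx: dict) -> None:
    """Announce an intended transaction to every participant."""
    pending_pool.append(tx)
    for node in nodes:
        print(f"{node} sees pending transaction: {tx}")

broadcast({"from": "Bill", "to": "Jenny", "amount": 10})
# The transaction is visible to everyone, but it is not part of any copy of
# the ledger until a miner validates it.
```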

In order to get a transaction validated so that it's recorded in the ledger, another principle is needed: mining. Mining essentially means solving computational puzzles.

Miners - in Bitcoin, for example - are special nodes. Every miner's node is able to hold the ledger (because it's public).

For instance, imagine that both Mark and Justin are miners in this particular ledger. Miners execute a very important task. All the miners in a ledger compete with each other, so Mark and Justin will be competing miners here.

Mark and Justin (the miners) will compete to become the first to take my unvalidated transaction to Jenny, validate the transaction and then add it into the chain of the ledger.

The miner that does this the fastest (first) receives a financial reward, like a prize. In this case, because the example involves Bitcoin miners, the financial reward will be bitcoin.

The concept of how bitcoins are generated is extremely complex, and beyond the scope of this article. All that you need to know is that, very simply, the financial reward of bitcoins is generated through the computational process of validating the transaction, not through Jenny or Bill paying the miners bitcoins.

For more information about the generation of bitcoins, visit AnythingCrypto's website.

So, what are the rules for Mark and Justin to beat the competition and get a financial reward?

The miner has to do 2 things in order to take the intended transaction and put it into the ledger.

Step #1 - Validate the new transaction

Validating the new transaction is relatively simple because the information in the ledger is openly accessible, so the miner can instantly calculate whether I have sufficient funds in order to complete the transfer to Jenny.

Step #2 - Find a special key

In order to lock the new transaction in the chain of the ledger, the miner has to find a special key that will enable this process. The key itself is random.

Therefore, Mark and Justin need to spend computational power searching for this random key, which takes time. Each miner's computer repeatedly guesses new keys until it finds the first one that solves the puzzle.

Again, if Mark is able to do that first, he will get the financial reward.
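The "special key" described above corresponds to what Bitcoin calls a nonce. Below is a minimal sketch of the search, assuming SHA-256 and a simple leading-zeros difficulty rule; real mining follows the same idea but with far stricter difficulty and a richer block format.

```python
# Illustrative proof-of-work search: keep guessing a nonce ("key") until the
# hash of (transaction + nonce) satisfies the difficulty rule.
import hashlib

DIFFICULTY = 4  # assumed rule for the sketch: hash must start with 4 zeros

def mine(transaction: str) -> tuple[int, str]:
    """Try nonces one by one until the hash meets the difficulty rule."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{transaction}|{nonce}".encode()).hexdigest()
        if digest.startswith("0" * DIFFICULTY):
            return nonce, digest
        nonce += 1

nonce, digest = mine("Bill pays Jenny $10")
print(f"winning key (nonce): {nonce}")
print(f"resulting hash:      {digest}")
# Whoever finds a valid nonce first wins the reward; everyone else can verify
# the answer with a single hash instead of repeating the whole search.
```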

When this process is completed, Mark has to synchronize the distributed ledger across the entire network.

How Are Ledgers Synchronized Across the Network?

So, let's continue with the example that we created in the previous section.

The question you might have now is:

How could each node be synchronized to have the same record of transactions?

This is an essential principle of the blockchain technology, because it solves the problem of keeping the same copy of the ledger on every node in the network.

Mark was able to solve the puzzle first and was therefore able to add the transaction to the chain in his ledger. Now, Mark has to publish the solution that he found to the entire network.

That means that he's telling Justin, Jenny and me that he solved the puzzle and validated the transaction (the transaction that I wanted to send to Jenny).

When Mark updates us about that, he will also provide the lock (a key) that will enable the rest of us (participants on this network) to take the transaction and add it to our own ledgers.

So, Justin (the other miner) will add this transaction to his ledger, because there's no point in continuing to work on it now that Mark has already solved it and collected the financial reward. Justin will search for another transaction to work on (and solve) in order to earn a financial reward for that one instead.

As a result, the transaction will also be added to the chain of transactions in Jenny's ledger and in mine. Jenny will receive the $10 that I initially proposed to transfer to her, because everyone in the network has agreed that this transaction is valid.

At this point, all of the distributed ledgers in the network are updated and contain the exact same chain of transactions.
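The sketch below illustrates this synchronization step under the same assumptions as the mining example: Mark publishes the solved transaction together with his winning key, every node re-checks it with a single hash, and each appends the same entry to its own copy of the ledger.

```python
# Illustrative synchronization: the published solution is cheap to verify,
# so every node can check it and append the same transaction to its copy.
import hashlib

DIFFICULTY = 4

def valid_proof(tx: str, nonce: int) -> bool:
    """Anyone can verify the miner's work with one hash computation."""
    digest = hashlib.sha256(f"{tx}|{nonce}".encode()).hexdigest()
    return digest.startswith("0" * DIFFICULTY)

def mine(tx: str) -> int:
    """Mark's search for the winning key (same idea as the earlier sketch)."""
    nonce = 0
    while not valid_proof(tx, nonce):
        nonce += 1
    return nonce

# Every participant keeps their own copy of the ledger.
ledgers = {name: [] for name in ("Bill", "Jenny", "Mark", "Justin")}

def publish(tx: str, nonce: int) -> None:
    """Mark broadcasts his solution; each node verifies it, then appends it."""
    if not valid_proof(tx, nonce):
        return  # an invalid solution is simply ignored by the network
    for copy in ledgers.values():
        copy.append(tx)

tx = "Bill pays Jenny $10"
publish(tx, mine(tx))
print(all(copy == [tx] for copy in ledgers.values()))  # True: every copy matches
```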

The Blocks in the Chain

Since you now have a grasp of the basic concept of the blockchain, it's time to dive a little deeper and look at the blocks in the chain.

The blockchain is - you guessed it - a chain of blocks.

Each block in the chain contains some specific data.

The type of data stored in a block depends on the type of blockchain. For example, a block in the Bitcoin blockchain stores information such as the number of bitcoins in the block, who sent the bitcoins and who received them.

If the blockchain belongs to another cryptocurrency like Ethereum, the block contains information about Ethereum instead of Bitcoin.

Besides its data, every block also contains a hash, which you can think of as a unique fingerprint of the block's contents. A hash could look like this:

82e35a613ceba37e9652366234c5dd412ea586147f1e4a41ccde16149238187e3dbf9

A hash is always unique - it looks like a random string of letters and numbers, but it is calculated from the contents of the block, so it effectively identifies what is stored in the block.

When a block is created, the unique hash that belongs to the block is then calculated. When something in the block changes - for example, if the number of bitcoins goes down by 2 - the hash will also change.

That means that when the hash changes, the block is effectively no longer the same block - so, in practice, a new block is created.
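Here is a minimal sketch of that property. SHA-256 is used as a stand-in for whatever hash function a particular blockchain uses, and the block is just a small dictionary invented for the example.

```python
# Illustrative sketch: a block's hash is derived from its contents, so any
# change inside the block produces a completely different hash.
import hashlib

def block_hash(block: dict) -> str:
    """Hash the block's contents in a fixed, repeatable order."""
    serialized = repr(sorted(block.items()))
    return hashlib.sha256(serialized.encode()).hexdigest()

block = {"from": "Bill", "to": "Jenny", "bitcoins": 10}
print(block_hash(block))

block["bitcoins"] -= 2     # change something inside the block...
print(block_hash(block))   # ...and the hash no longer matches the old one
```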

Hash of the Previous Block

Every newly-created block also contains the unique hash string of the previous block. That way, all the blocks are connected to each other.

Every block is connected to the previous block by storing the hash of that block.

The first block doesn't contain a previous hash, simply because there wasn't a block before it. The first block in the chain is called the genesis block.

If someone tries to interfere with an existing block or change something in it, the hash will change. That means that all of the following blocks become invalid, because they still contain the old hash, which no longer matches the newly generated one.

To make the chain valid again after such a change, the hashes of all the following blocks would have to be recalculated.
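To make that concrete, here is a small sketch of a chain in which every block stores the previous block's hash. Tampering with one block breaks every later link, which any node can detect by walking the chain. The field names and the 64-zero placeholder for the genesis block are assumptions made for the example; real blockchains also include things like timestamps, nonces and Merkle roots.

```python
# Illustrative sketch: each block stores the previous block's hash, so
# changing any old block invalidates every block after it.
import hashlib

GENESIS_PREV = "0" * 64  # placeholder "previous hash" for the genesis block

def link_hash(data: str, prev_hash: str) -> str:
    return hashlib.sha256(f"{prev_hash}|{data}".encode()).hexdigest()

def build_chain(entries: list[str]) -> list[dict]:
    chain, prev = [], GENESIS_PREV
    for data in entries:
        h = link_hash(data, prev)
        chain.append({"data": data, "prev_hash": prev, "hash": h})
        prev = h
    return chain

def chain_is_valid(chain: list[dict]) -> bool:
    prev = GENESIS_PREV
    for block in chain:
        if block["prev_hash"] != prev or block["hash"] != link_hash(block["data"], prev):
            return False
        prev = block["hash"]
    return True

chain = build_chain(["Bill pays Jenny $10", "Jenny pays Mark $5", "Mark pays Justin $2"])
print(chain_is_valid(chain))   # True

chain[1]["data"] = "Jenny pays Mark $500"   # tamper with an old block...
print(chain_is_valid(chain))                # False: the links after it break
```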

To counter that vulnerability, there's a piece of data involved called proof of work, which basically slows down the creation of new blocks. The difficulty of creating new blocks is controlled so that solving the calculation and adding a new block takes roughly 10 minutes.

That means that if you were to interfere with or change one block in the chain, you would need to recalculate all the following blocks in the chain, 1 per 10 minutes - which is way too time-consuming and expensive.

Another layer of security is a Peer-to-peer (P2P) network. The existence of a P2P network ensures that the blockchain is distributed across a large network, instead of being stored in one single entity.

As you learned before, this is an open and public network that anyone can join. When you enter a network, you'll receive a copy of the blockchain.

Visit link:

What Is The Blockchain? - Pixel Privacy