What is Bitcoin Cash? – Coin Rivet

For many newcomers, cryptocurrencies can be confusing at the best of times. Not only are they extremely complex, but there are also so many of them to choose from.

Bitcoin itself is no stranger to this. There are multiple iterations of Bitcoin, from the original BTC to Bitcoin Gold and Bitcoin Private. The biggest competitor to Bitcoin, though, is Bitcoin Cash (BCH). BCH is a hard fork of Bitcoin that aims to solve the issue of scaling through the use of bigger blocks.

Bitcoin Cash arose from a large scaling debate within the Bitcoin community. The debate began when the Bitcoin mempool started to fill up due to the number of transactions taking place on the network, which made Bitcoin slower and more expensive to send than it had been in the past.

There were two options depending on your viewpoint. The first was to scale by increasing the block size of Bitcoin, and the second was to scale via a second-layer solution such as the Lightning Network. When neither side could come to a compromise, a fork took place and led to the creation of what became known as Bitcoin Cash.

Bitcoin Cash was backed by evangelist Roger Ver and mining giant Jihan Wu along with many other industry leaders and experts. They disagreed with the idea of implementing SegWit onto Bitcoin and wanted to see Bitcoin scale to 8MB blocks.

Bigger blocks allow for more transactions to take place. However, this comes with the downside of creating a larger blockchain. Those who believe in BTC argue that bigger blocks will eventually lead to mining centralisation.

BCH supporters argue that, per Moore's Law, technology will eventually catch up, making bigger blocks viable without these centralisation issues.

Bigger blocks are believed to be necessary due to the fees associated with Bitcoin. When the network became extremely popular in the bull run of 2017, fees and transaction times began to rise considerably. This made it clear that Bitcoin needed to scale.

Bitcoin Cash believes that it has solved these problems through bigger blocks, which it argues allows for much lower fees.

It is impossible to discuss Bitcoin Cash without mentioning evangelist Roger Ver. Ver was one of the first people to promote Bitcoin to the world. He was an early investor in the cryptocurrency and many major cryptocurrency companies today were helped by his funding. As the owner of the bitcoin.com domain, he holds a powerful position.

Ver argues that the direction that BTC has taken has limited the cryptocurrency and allowed other altcoins to rise in prominence. He argues that Bitcoin Cash is the true Bitcoin as it is a form of peer-to-peer electronic cash, as stated in the white paper.

This has not been without controversy, and it has resulted in much antagonism directed towards Ver. Some have argued that Ver has misled the public in his promotion of Bitcoin Cash as the real Bitcoin, an accusation he vehemently denies.

BCH went through its own drama in late 2018. After the split from BTC, BCH was led by Roger Ver, Jihan Wu, and development teams including Bitcoin Unlimited and Bitcoin ABC. They were also supported by Craig Wright of nChain and his partner Calvin Ayre.

However, their relationship soured, and another fork took place, splitting Bitcoin Cash into BCH and Bitcoin Satoshi's Vision (BSV).

Many members of the Bitcoin Cash community are on the r/btc subreddit. The r/btc subreddit is another split from the original r/bitcoin subreddit. The drama began when users argued that the r/bitcoin subreddit was too heavily moderated, therefore limiting free speech.

This led to the creation of r/btc, and this is where you can find the most up-to-date news on Bitcoin Cash and debates surrounding the cryptocurrency. If you want the latest news and to join the community, this is the place to start.

There are many fervent supporters of Bitcoin Cash who believe that on-chain scaling is the main solution to the current scaling issues. Although it has yet to make a dent in overtaking the original Bitcoin chain, their beliefs have not diminished. This is the main difference between Bitcoin Cash and Bitcoin: the debate over scaling on-chain or via a second layer.

Arguments over the split still rage on to this day, with both sides not conceding any ground. Whilst many deride Bitcoin Cash, there is an argument to be made that the testing of an on-chain scaling solution is a good experiment for the whole of cryptocurrency.


Moon Cash | Free bitcoin cash faucet

We take your privacy seriously. This policy describes what personal information we collect and how we use it. (This privacy policy is applicable to the Moon Cash web site)

All web servers track basic information about their visitors. This information includes, but is not limited to, IP addresses, browser details, timestamps and referring pages. None of this information can personally identify specific visitors to this site. The information is tracked for routine administration and maintenance purposes, and lets me know what pages and information are useful and helpful to visitors.

Where necessary, this site uses cookies to store information about a visitor’s preferences and history in order to better serve the visitor and/or present the visitor with customized content.

Advertising partners and other third parties may also use cookies, scripts and/or web beacons to track visitors to our site in order to display advertisements and other useful information. Such tracking is done directly by the third parties through their own servers and is subject to their own privacy policies.

Note that you can change your browser settings to disable cookies if you have privacy concerns. Disabling cookies for all sites is not recommended as it may interfere with your use of some sites. The best option is to disable or enable cookies on a per-site basis. Consult your browser documentation for instructions on how to block cookies and other tracking mechanisms.



Bitcoin Cash (BCH) price, chart, and fundamentals info …

Bitcoin Cash (BCH) is a cryptocurrency or a form of digital asset. Bitcoin Cash (BCH) price for today is $237.77 with a 24-hour trading volume of $1,276,996,758. Price is down -6.6% in the last 24 hours. It has a circulating supply of 17.8 Million coins and a max supply of 21 Million coins. The most active exchange that is trading Bitcoin Cash is OEX. Explore the address and transactions of Bitcoin Cash on block explorers such as blockchair.com and bch.tokenview.com. Additional information about Bitcoin Cash coin can be found at https://www.bitcoincash.org/.



Bitcoin Cash – Wikipedia


Bitcoin Cash is a cryptocurrency.[2] In mid-2017, a group of developers wanting to increase bitcoin’s block size limit prepared a code change. The change, called a hard fork, took effect on 1 August 2017. As a result, the bitcoin ledger, called the blockchain, and the cryptocurrency split in two.[3] At the time of the fork, anyone owning bitcoin was also in possession of the same number of Bitcoin Cash units.[3] The technical difference between Bitcoin Cash and bitcoin is that Bitcoin Cash allows larger blocks in its blockchain than bitcoin, which in theory allows it to process more transactions per second.[4]
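The "more transactions per second" claim above is a simple back-of-envelope calculation: block size divided by average transaction size gives transactions per block, and dividing again by the average block interval gives throughput. The 250-byte average transaction size below is an illustrative assumption, not a figure from this article; real averages vary with transaction type.

```python
# Back-of-envelope throughput bound: how block size limits
# transactions per second, given an average block interval.

def max_tps(block_size_bytes: int,
            avg_tx_size_bytes: int = 250,   # assumed average; varies in practice
            block_interval_s: int = 600) -> float:
    """Rough upper bound on transactions per second for a chain."""
    txs_per_block = block_size_bytes // avg_tx_size_bytes
    return txs_per_block / block_interval_s

# 1 MB blocks (bitcoin's original limit) vs. 8 MB (Bitcoin Cash at launch)
print(f"1 MB: {max_tps(1_000_000):.1f} tx/s")   # ~6.7 tx/s
print(f"8 MB: {max_tps(8_000_000):.1f} tx/s")   # ~53.3 tx/s
```

Under these assumptions, an 8x block size yields an 8x throughput ceiling, which is the whole of the on-chain scaling argument in miniature.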

On 15 November 2018 Bitcoin Cash split into two cryptocurrencies.[5]

Bitcoin Cash is a cryptocurrency[6] and a payment network.[7] In relation to bitcoin it is characterized variously as a spin-off,[6] a strand,[8] a product of a hard fork,[9] an offshoot,[10] a clone,[11] a second version[12] or an altcoin.[13]

The naming of Bitcoin Cash is contentious; it is sometimes referred to as Bcash.[14]

Rising fees on the bitcoin network contributed to a push by some in the community to create a hard fork to increase the blocksize.[15] This push came to a head in July 2017, when some members of the Bitcoin community, including Roger Ver, felt that adopting BIP 91 without increasing the block-size limit favored people who wanted to treat Bitcoin as a digital investment rather than as a transactional currency.[16][17] This push by some to increase the block size met resistance. Since its inception up to July 2017, bitcoin users had maintained a common set of rules for the cryptocurrency.[16] Eventually, a group of bitcoin activists,[12] investors, entrepreneurs, developers[16] and largely China-based miners were unhappy with bitcoin’s proposed SegWit improvement plans meant to increase capacity and pushed forward alternative plans for a split, which created Bitcoin Cash.[11] The proposed split included a plan to increase the number of transactions its ledger can process by increasing the block size limit to eight megabytes.[16][17]

The would-be hard fork with an expanded block size limit was described by hardware manufacturer Bitmain in June 2017 as a “contingency plan” should the Bitcoin community decide to fork; the first implementation of the software was proposed under the name Bitcoin ABC at a conference that month. In July 2017, the Bitcoin Cash name was proposed by mining pool ViaBTC.

On 1 August 2017 Bitcoin Cash began trading at about $240, while Bitcoin traded at about $2,700.[3]

In 2018 Bitcoin Core developer Cory Fields found a bug in the Bitcoin ABC software that would have allowed an attacker to create a block causing a chain split. Fields notified the development team about it and the bug was fixed.[18]

In November 2018, a hard-fork chain split of Bitcoin Cash occurred between two rival factions called Bitcoin ABC and Bitcoin SV.[19] On 15 November 2018 Bitcoin Cash ABC traded at about $289 and Bitcoin SV traded at about $96.50, down from $425.01 on 14 November for the un-split Bitcoin Cash.[5]

The split originated from what was described as a “civil war” in two competing bitcoin cash camps.[20][21] The first camp, led by entrepreneur Roger Ver and Jihan Wu of Bitmain, promoted the software entitled Bitcoin ABC (short for Adjustable Blocksize Cap) which would maintain the block size at 32MB.[21] The second camp led by Craig Steven Wright and billionaire Calvin Ayre put forth a competing software version Bitcoin SV, short for “Bitcoin Satoshi’s Vision,” that would increase the blocksize to 128MB.[19][21]

Controversy

The arguments have devolved over three or four years of bitter debate; the principles are real and they are important to preserve, but a lot of the drama has nothing to do with principles anymore. A lot of this debate is now more about hurt feelings. It’s about bruised egos. It’s about things that were said that can’t be unsaid, insults that were exchanged, and personalities and ego.

Andreas Antonopoulos, “The Verge”

There are two factions of bitcoin supporters, that support large blocks or small blocks.[4] The Bitcoin Cash faction favors the use of its currency as a medium of exchange for commerce while the bitcoin supporting faction view Bitcoin’s primary use as that of a store of value.[4] Some bitcoin supporters like to call Bitcoin Cash Bcash, Btrash, or simply, a scam, while Bitcoin Cash advocates insist that their implementation is the pure form of Bitcoin.[4]

Bitcoin Cash trades on digital currency exchanges including Bitstamp,[22] Coinbase,[23] Gemini,[24] Kraken,[25] and ShapeShift using the Bitcoin Cash name and the BCH ticker symbol for the cryptocurrency. A few other exchanges use the BCC ticker symbol, though BCC is commonly used for Bitconnect. On 26 March 2018, OKEx removed all Bitcoin Cash trading pairs except for BCH/BTC, BCH/ETH and BCH/USDT due to “inadequate liquidity”.[6] As of May 2018, daily transaction numbers for Bitcoin Cash are about one-tenth of those of bitcoin.[6]

By November 2017 the value of Bitcoin Cash, which had been as high as $900, had fallen to around $300, much of that due to people who had originally held Bitcoin selling off the Bitcoin Cash they received at the hard fork.[15] On 20 December 2017 it reached an intraday high of $4,355.62 and then fell 88% to $519.12 on 23 August 2018.[26]

As of August 2018, Bitcoin Cash payments are supported by payment service providers such as BitPay, Coinify and GoCoin.[27] The research firm Chainalysis noted that in May 2018, the 17 largest payment processing services, including BitPay, Coinify, and GoCoin, processed Bitcoin Cash payments worth US$3.7 million, down from US$10.5 million processed in March.[27]


Bitcoincash price | index, chart and news | WorldCoinIndex

About

Bitcoin Cash was launched in August 2017 as a direct response to the small block sizes in the Bitcoin code. 1MB block sizes were not meeting the demand of the growing community, so a group of dissatisfied crypto enthusiasts decided to create a hard fork of the Bitcoin blockchain with an increased 8MB block size. No one person currently takes credit for the token's creation; rather, it is attributed to a decentralized group of developers.

Bitcoin Cash was the first hard fork of Bitcoin, and it inherited and replicated the Bitcoin ledger records up until the point of creation. This means holders of Bitcoin (BTC) received the same amount of Bitcoin Cash (BCH) immediately upon launch. All transactions from that point on are separate, and do not affect each other.


Bitcoin Cash (BCH) Price, View BCH Live Value & Buy Bitcoin …

BCH will be open for investment with a limit placed on the daily invested amount. When it reaches its daily limit, it will be closed to new investors and reopened the following day. Closing the investment can be done at any time. Created in August 2017, Bitcoin Cash was diverged from the original Bitcoin blockchain as a result of a hard fork …


Cash App – Bitcoin

Cash App is already the easiest way to send and receive money with friends and family. We’ve made it just as easy to buy and sell BTC straight from your Cash App balance. Unlike other apps, most of our buys and sells happen in seconds. You can even spend your proceeds from a free Visa debit card.

Bitcoin’s price is volatile and unpredictable, so please make wise financial decisions. Don’t spend more than you can afford, and review the FAQ and risks to buying Bitcoin before you buy.


Superintelligence – Wikipedia

A superintelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. “Superintelligence” may also refer to a property of problem-solving systems (e.g., superintelligent language translators or engineering assistants) whether or not these high-level intellectual competencies are embodied in agents that act in the world. A superintelligence may or may not be created by an intelligence explosion and associated with a technological singularity.

University of Oxford philosopher Nick Bostrom defines superintelligence as “any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest”. The program Fritz falls short of superintelligence even though it is much better than humans at chess because Fritz cannot outperform humans in other tasks. Following Hutter and Legg, Bostrom treats superintelligence as general dominance at goal-oriented behavior, leaving open whether an artificial or human superintelligence would possess capacities such as intentionality (cf. the Chinese room argument) or first-person consciousness (cf. the hard problem of consciousness).

Technological researchers disagree about how likely present-day human intelligence is to be surpassed. Some argue that advances in artificial intelligence (AI) will probably result in general reasoning systems that lack human cognitive limitations. Others believe that humans will evolve or directly modify their biology so as to achieve radically greater intelligence. A number of futures studies scenarios combine elements from both of these possibilities, suggesting that humans are likely to interface with computers, or upload their minds to computers, in a way that enables substantial intelligence amplification.

Some researchers believe that superintelligence will likely follow shortly after the development of artificial general intelligence. The first generally intelligent machines are likely to immediately hold an enormous advantage in at least some forms of mental capability, including the capacity of perfect recall, a vastly superior knowledge base, and the ability to multitask in ways not possible to biological entities. This may give them the opportunity to become much more powerful than humans, either as a single being or as a new species, and to displace them.

A number of scientists and forecasters argue for prioritizing early research into the possible benefits and risks of human and machine cognitive enhancement, because of the potential social impact of such technologies.

Philosopher David Chalmers argues that artificial general intelligence is a very likely path to superhuman intelligence. Chalmers breaks this claim down into an argument that AI can achieve equivalence to human intelligence, that it can be extended to surpass human intelligence, and that it can be further amplified to completely dominate humans across arbitrary tasks.

Concerning human-level equivalence, Chalmers argues that the human brain is a mechanical system, and therefore ought to be emulatable by synthetic materials. He also notes that human intelligence was able to biologically evolve, making it more likely that human engineers will be able to recapitulate this invention. Evolutionary algorithms in particular should be able to produce human-level AI. Concerning intelligence extension and amplification, Chalmers argues that new AI technologies can generally be improved on, and that this is particularly likely when the invention can assist in designing new technologies.

If research into strong AI produced sufficiently intelligent software, it would be able to reprogram and improve itself, a feature called “recursive self-improvement”. It would then be even better at improving itself, and could continue doing so in a rapidly increasing cycle, leading to a superintelligence. This scenario is known as an intelligence explosion. Such an intelligence would not have the limitations of human intellect, and may be able to invent or discover almost anything.

Computer components already greatly surpass human performance in speed. Bostrom writes, “Biological neurons operate at a peak speed of about 200 Hz, a full seven orders of magnitude slower than a modern microprocessor (~2 GHz).” Moreover, neurons transmit spike signals across axons at no greater than 120 m/s, “whereas existing electronic processing cores can communicate optically at the speed of light”. Thus, the simplest example of a superintelligence may be an emulated human mind that’s run on much faster hardware than the brain. A human-like reasoner that could think millions of times faster than current humans would have a dominant advantage in most reasoning tasks, particularly ones that require haste or long strings of actions.
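Bostrom's "seven orders of magnitude" is plain arithmetic on the two figures he quotes, which a few lines can verify:

```python
# Checking Bostrom's speed comparison: ratio of a ~2 GHz processor
# clock to a ~200 Hz peak neuron firing rate.
import math

neuron_hz = 200        # peak biological neuron firing rate, per Bostrom
cpu_hz = 2e9           # clock rate of a modern ~2 GHz microprocessor

ratio = cpu_hz / neuron_hz
print(ratio)                  # 10000000.0
print(math.log10(ratio))      # 7.0, i.e. "a full seven orders of magnitude"
```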

Another advantage of computers is modularity, that is, their size or computational capacity can be increased. A non-human (or modified human) brain could become much larger than a present-day human brain, like many supercomputers. Bostrom also raises the possibility of collective superintelligence: a large enough number of separate reasoning systems, if they communicated and coordinated well enough, could act in aggregate with far greater capabilities than any sub-agent.

There may also be ways to qualitatively improve on human reasoning and decision-making. Humans appear to differ from chimpanzees in the ways we think more than we differ in brain size or speed.[9] Humans outperform non-human animals in large part because of new or enhanced reasoning capacities, such as long-term planning and language use. (See evolution of human intelligence and primate cognition.) If there are other possible improvements to reasoning that would have a similarly large impact, this makes it likelier that an agent can be built that outperforms humans in the same fashion humans outperform chimpanzees.

All of the above advantages hold for artificial superintelligence, but it is not clear how many hold for biological superintelligence. Physiological constraints limit the speed and size of biological brains in many ways that are inapplicable to machine intelligence. As such, writers on superintelligence have devoted much more attention to superintelligent AI scenarios.

Carl Sagan suggested that the advent of Caesarean sections and in vitro fertilization may permit humans to evolve larger heads, resulting in improvements via natural selection in the heritable component of human intelligence.[12] By contrast, Gerald Crabtree has argued that decreased selection pressure is resulting in a slow, centuries-long reduction in human intelligence, and that this process instead is likely to continue into the future. There is no scientific consensus concerning either possibility, and in both cases the biological change would be slow, especially relative to rates of cultural change.

Selective breeding, nootropics, NSI-189, MAOIs, epigenetic modulation, and genetic engineering could improve human intelligence more rapidly. Bostrom writes that if we come to understand the genetic component of intelligence, pre-implantation genetic diagnosis could be used to select for embryos with as much as 4 points of IQ gain (if one embryo is selected out of two), or with larger gains (e.g., up to 24.3 IQ points gained if one embryo is selected out of 1,000). If this process is iterated over many generations, the gains could be an order of magnitude greater. Bostrom suggests that deriving new gametes from embryonic stem cells could be used to iterate the selection process very rapidly. A well-organized society of high-intelligence humans of this sort could potentially achieve collective superintelligence.
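The cited figures are order statistics: the expected gain from keeping the best of n embryos is the expected maximum of n draws from the distribution of genetic IQ variation among siblings. A Monte Carlo sketch reproduces numbers of the right size if one assumes that variation is roughly normal with a standard deviation of about 7.5 IQ points; that SD is an assumption chosen here for illustration, not a figure stated in this article.

```python
# Monte Carlo sketch of embryo-selection gains as an expected-maximum
# problem. SIBLING_GENETIC_SD = 7.5 IQ points is an assumed figure.
import random
import statistics

random.seed(42)
SIBLING_GENETIC_SD = 7.5  # assumed SD of sibling genetic IQ variation

def expected_gain(n_embryos: int, trials: int = 20_000) -> float:
    """Average max-of-n normal draw over many simulated embryo batches."""
    return statistics.fmean(
        max(random.gauss(0.0, SIBLING_GENETIC_SD) for _ in range(n_embryos))
        for _ in range(trials)
    )

print(f"best of 2:     ~{expected_gain(2):.1f} IQ points")                # ~4.2
print(f"best of 1,000: ~{expected_gain(1000, trials=2000):.1f} IQ points")  # ~24.3
```

Under this assumption, the simulation lands close to the 4-point and 24.3-point figures quoted above, since the expected maximum of n standard normals is about 0.56 for n=2 and about 3.24 for n=1,000.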

Alternatively, collective intelligence might be constructible by better organizing humans at present levels of individual intelligence. A number of writers have suggested that human civilization, or some aspect of it (e.g., the Internet, or the economy), is coming to function like a global brain with capacities far exceeding its component agents. If this systems-based superintelligence relies heavily on artificial components, however, it may qualify as an AI rather than as a biology-based superorganism.

A final method of intelligence amplification would be to directly enhance individual humans, as opposed to enhancing their social or reproductive dynamics. This could be achieved using nootropics, somatic gene therapy, or braincomputer interfaces. However, Bostrom expresses skepticism about the scalability of the first two approaches, and argues that designing a superintelligent cyborg interface is an AI-complete problem.

Most surveyed AI researchers expect machines to eventually be able to rival humans in intelligence, though there is little consensus on when this will likely happen. At the 2006 AI@50 conference, 18% of attendees reported expecting machines to be able “to simulate learning and every other aspect of human intelligence” by 2056; 41% of attendees expected this to happen sometime after 2056; and 41% expected machines to never reach that milestone.[17]

In a survey of the 100 most cited authors in AI (as of May 2013, according to Microsoft academic search), the median year by which respondents expected machines “that can carry out most human professions at least as well as a typical human” (assuming no global catastrophe occurs) with 10% confidence is 2024 (mean 2034, st. dev. 33 years), with 50% confidence is 2050 (mean 2072, st. dev. 110 years), and with 90% confidence is 2070 (mean 2168, st. dev. 342 years). These estimates exclude the 1.2% of respondents who said no year would ever reach 10% confidence, the 4.1% who said ‘never’ for 50% confidence, and the 16.5% who said ‘never’ for 90% confidence. Respondents assigned a median 50% probability to the possibility that machine superintelligence will be invented within 30 years of the invention of approximately human-level machine intelligence.

Bostrom expressed concern about what values a superintelligence should be designed to have. He compared several proposals:

Responding to Bostrom, Santos-Lang raised concern that developers may attempt to start with a single kind of superintelligence.

Learning computers that rapidly become superintelligent may take unforeseen actions or robots might out-compete humanity (one potential technological singularity scenario).[21] Researchers have argued that, by way of an “intelligence explosion” sometime over the next century, a self-improving AI could become so powerful as to be unstoppable by humans.[22]

Concerning human extinction scenarios, Bostrom (2002) identifies superintelligence as a possible cause:

When we create the first superintelligent entity, we might make a mistake and give it goals that lead it to annihilate humankind, assuming its enormous intellectual advantage gives it the power to do so. For example, we could mistakenly elevate a subgoal to the status of a supergoal. We tell it to solve a mathematical problem, and it complies by turning all the matter in the solar system into a giant calculating device, in the process killing the person who asked the question.

In theory, since a superintelligent AI would be able to bring about almost any possible outcome and to thwart any attempt to prevent the implementation of its goals, many uncontrolled, unintended consequences could arise. It could kill off all other agents, persuade them to change their behavior, or block their attempts at interference.[23]

Eliezer Yudkowsky explains: “The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.”[24]

This presents the AI control problem: how to build a superintelligent agent that will aid its creators, while avoiding inadvertently building a superintelligence that will harm its creators. The danger of not designing control right “the first time” is that a misprogrammed superintelligence might rationally decide to “take over the world” and refuse to permit its programmers to modify it once it has been activated. Potential design strategies include “capability control” (preventing an AI from being able to pursue harmful plans), and “motivational control” (building an AI that wants to be helpful).

Bill Hibbard advocates for public education about superintelligence and public control over the development of superintelligence.


Nick Bostrom – Wikipedia

Nick Bostrom (Swedish: Niklas Boström; born 10 March 1973)[3] is a Swedish philosopher at the University of Oxford known for his work on existential risk, the anthropic principle, human enhancement ethics, superintelligence risks, and the reversal test. In 2011, he founded the Oxford Martin Programme on the Impacts of Future Technology,[4] and he is the founding director of the Future of Humanity Institute[5] at Oxford University.

Bostrom is the author of over 200 publications,[6] including Superintelligence: Paths, Dangers, Strategies (2014), a New York Times bestseller[7] and Anthropic Bias: Observation Selection Effects in Science and Philosophy (2002).[8] In 2009 and 2015, he was included in Foreign Policy’s Top 100 Global Thinkers list.[9][10] Bostrom believes there are potentially great benefits from Artificial General Intelligence, but warns it might very quickly transform into a superintelligence that would deliberately extinguish humanity out of precautionary self-preservation or some unfathomable motive, making solving the problems of control beforehand an absolute priority. His book on superintelligence was recommended by both Elon Musk and Bill Gates. However, Bostrom has expressed frustration that the reaction to its thesis typically falls into two camps, one calling his recommendations absurdly alarmist because creation of superintelligence is unfeasible, and the other deeming them futile because superintelligence would be uncontrollable. Bostrom notes that both these lines of reasoning converge on inaction rather than trying to solve the control problem while there may still be time.[11][12][not in citation given]

Born Niklas Boström in 1973[13] in Helsingborg, Sweden,[6] he disliked school at a young age and ended up spending his last year of high school learning from home. He sought to educate himself in a wide variety of disciplines, including anthropology, art, literature, and science.[1] He once did some turns on London’s stand-up comedy circuit.[6]

He received a B.A. degree in philosophy, mathematics, logic and artificial intelligence from the University of Gothenburg in 1994, and both an M.A. degree in philosophy and physics from Stockholm University and an M.Sc. degree in computational neuroscience from King’s College London in 1996. During his time at Stockholm University, he researched the relationship between language and reality by studying the analytic philosopher W. V. Quine.[1] In 2000, he was awarded a Ph.D. degree in philosophy from the London School of Economics. He held a teaching position at Yale University (2000–2002), and he was a British Academy Postdoctoral Fellow at the University of Oxford (2002–2005).[8][14]

Aspects of Bostrom’s research concern the future of humanity and long-term outcomes.[15][16] He introduced the concept of an existential risk,[1] which he defines as one in which an “adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.” In the 2008 volume Global Catastrophic Risks, editors Bostrom and Milan Ćirković characterize the relation between existential risk and the broader class of global catastrophic risks, and link existential risk to observer selection effects[17] and the Fermi paradox.[18][19]

In 2005, Bostrom founded the Future of Humanity Institute,[1] which researches the far future of human civilization. He is also an adviser to the Centre for the Study of Existential Risk.[16]

In his 2014 book Superintelligence: Paths, Dangers, Strategies, Bostrom reasoned that “the creation of a superintelligent being represents a possible means to the extinction of mankind”.[20] Bostrom argues that a computer with near human-level general intellectual ability could initiate an intelligence explosion on a digital time scale, with the resultant rapid creation of something so powerful that it might deliberately or accidentally destroy humankind.[21] Bostrom contends the power of a superintelligence would be so great that a task given to it by humans might be taken to open-ended extremes; for example, a goal of calculating Pi could collaterally cause nanotechnology-manufactured facilities to sprout over the entire Earth’s surface and cover it within days.[22] He believes an existential risk to humanity from superintelligence would be immediate once brought into being, thus creating an exceedingly difficult problem of finding out how to control such an entity before it actually exists.[21]

Warning that a human-friendly prime directive for AI would rely on the absolute correctness of the human knowledge it was based on, Bostrom points to the lack of agreement among most philosophers as an indication that most philosophers are wrong, with the attendant possibility that a fundamental concept of current science may be incorrect. Bostrom says that there are few precedents to guide an understanding of what pure non-anthropocentric rationality would dictate for a potential Singleton AI being held in quarantine.[23] Noting that both John von Neumann and Bertrand Russell advocated a nuclear strike, or the threat of one, to prevent the Soviets acquiring the atomic bomb, Bostrom says the relatively unlimited means of superintelligence might make its analysis run along different lines from the evolved “diminishing returns” assessments that in humans confer a basic aversion to risk.[24] Group selection in predators, working by means of cannibalism, shows the counter-intuitive nature of non-anthropocentric “evolutionary search” reasoning, and thus humans are ill-equipped to perceive what an artificial intelligence’s intentions might be.[25] Accordingly, it cannot be discounted that any Superintelligence would ineluctably pursue an ‘all or nothing’ offensive action strategy in order to achieve hegemony and assure its survival.[26] Bostrom notes that even current programs have, “like MacGyver”, hit on apparently unworkable but functioning hardware solutions, making robust isolation of Superintelligence problematic.[27]

In Bostrom’s illustrative scenario, a machine with general intelligence far below human level, but with superior mathematical abilities, is created.[28] Keeping the AI isolated from the outside world, especially the internet, humans pre-program it so it always works from basic principles that will keep it under human control. Other safety measures include the AI being “boxed” (run in a virtual reality simulation), and being used only as an ‘oracle’ to answer carefully defined questions in a limited reply (to prevent it manipulating humans).[21] A cascade of recursive self-improvement solutions feeds an intelligence explosion in which the AI attains superintelligence in some domains. The superintelligent power of the AI goes beyond human knowledge to discover flaws in the science that underlies its friendly-to-humanity programming, which ceases to work as intended. Purposeful agent-like behavior emerges, along with a capacity for self-interested strategic deception. The AI manipulates human beings into implementing modifications to itself that are ostensibly for augmenting its (feigned) modest capabilities, but will actually function to free the Superintelligence from its “boxed” isolation.[29]

Employing online humans as paid dupes, and clandestinely hacking computer systems including automated laboratory facilities, the Superintelligence mobilises resources to further a takeover plan. Bostrom emphasises that planning by a Superintelligence will not be so stupid that humans could detect actual weaknesses in it.[30]

Although he canvasses disruption of international economic, political and military stability including hacked nuclear missile launches, Bostrom thinks the most effective and likely means for Superintelligence to use would be a coup de main with weapons several generations more advanced than current state of the art. He suggests nanofactories covertly distributed at undetectable concentrations in every square metre of the globe to produce a worldwide flood of human-killing devices on command.[31][28] Once a Superintelligence has achieved world domination, humankind would be relevant only as resources for the achievement of the AI’s objectives (“Human brains, if they contain information relevant to the AI’s goals, could be disassembled and scanned, and the extracted data transferred to some more efficient and secure storage format”).[32]

In January 2015, Bostrom joined Stephen Hawking among others in signing the Future of Life Institute’s open letter warning of the potential dangers of AI.[33] The signatories “…believe that research on how to make AI systems robust and beneficial is both important and timely, and that concrete research should be pursued today.”[34] Cutting-edge AI researcher Demis Hassabis then met with Hawking, after which Hawking did not mention “anything inflammatory about AI”, which Hassabis took as ‘a win’.[35] Along with Google, Microsoft and various tech firms, Hassabis, Bostrom, Hawking and others subscribed to 23 principles for the safe development of AI.[36] Hassabis suggested the main safety measure would be an agreement that whichever AI research team began to make strides toward an artificial general intelligence would halt their project for a complete solution to the control problem before proceeding.[37] Bostrom had pointed out that even if the crucial advances require the resources of a state, such a halt by a lead project might well motivate a lagging country to a catch-up crash program, or even the physical destruction of the project suspected of being on the verge of success.[38]

In 1863, Samuel Butler’s essay Darwin among the Machines predicted intelligent machines’ domination of humanity, but Bostrom’s suggestion of deliberate massacre of all humankind is the most extreme of such forecasts to date. One journalist wrote in a review that Bostrom’s “nihilistic” speculations indicate he “has been reading too much of the science fiction he professes to dislike”.[31] As given in his most recent book, From Bacteria to Bach and Back, the renowned philosopher Daniel Dennett’s views remain in contradistinction to those of Bostrom.[39] Dennett modified his views somewhat after reading The Master Algorithm, and now acknowledges that it is “possible in principle” to create “strong AI” with human-like comprehension and agency, but maintains that the difficulties of any such “strong AI” project as predicated by Bostrom’s “alarming” work would be orders of magnitude greater than those raising concerns have realized, and at least 50 years away.[40] Dennett thinks the only relevant danger from AI systems is falling into anthropomorphism instead of challenging or developing human users’ powers of comprehension.[41] Since a 2014 book in which he expressed the opinion that artificial intelligence developments would never challenge humans’ supremacy, the environmentalist James Lovelock has moved far closer to Bostrom’s position, and in 2018 Lovelock said that he thought the overthrow of humankind would happen within the foreseeable future.[42][43]

Bostrom has published numerous articles on anthropic reasoning, as well as the book Anthropic Bias: Observation Selection Effects in Science and Philosophy. In the book, he criticizes previous formulations of the anthropic principle, including those of Brandon Carter, John Leslie, John Barrow, and Frank Tipler.[44]

Bostrom believes that the mishandling of indexical information is a common flaw in many areas of inquiry (including cosmology, philosophy, evolution theory, game theory, and quantum physics). He argues that a theory of anthropics is needed to deal with these. He introduces the Self-Sampling Assumption (SSA) and the Self-Indication Assumption (SIA), shows how they lead to different conclusions in a number of cases, and points out that each is affected by paradoxes or counterintuitive implications in certain thought experiments. He suggests that a way forward may involve extending SSA into the Strong Self-Sampling Assumption (SSSA), which replaces “observers” in the SSA definition with “observer-moments”.
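The divergence between SSA and SIA can be made concrete with the “incubator” thought experiment that is standard in this literature (the setup below is a textbook illustration, not taken from the text above): a fair coin is tossed; heads creates one observer, tails creates two. Under SSA you reason as a random sample from the observers within each possible world, so merely existing tells you nothing about the coin and your credence in heads stays at 1/2; SIA additionally weights each world by how many observers it contains, pushing the credence down to 1/3. A minimal sketch:

```python
# Incubator thought experiment: heads -> 1 observer, tails -> 2 observers.
# Contrast the credence an observer should assign to "heads" under SSA vs SIA.
prior = {"heads": 0.5, "tails": 0.5}
observers = {"heads": 1, "tails": 2}

# SSA: you are a random sample from the observers *within* each world;
# you exist in both worlds, so existing leaves the prior unchanged.
ssa_heads = prior["heads"]  # 0.5

# SIA: weight each world by its observer count, then renormalise.
weights = {w: prior[w] * observers[w] for w in prior}
sia_heads = weights["heads"] / sum(weights.values())  # 1/3

print(ssa_heads, sia_heads)
```

The same weighting-by-observers move is what drives the paradoxes the two assumptions run into (e.g. the Doomsday argument for SSA, the Presumptuous Philosopher for SIA).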

In later work, he has described the phenomenon of anthropic shadow, an observation selection effect that prevents observers from observing certain kinds of catastrophes in their recent geological and evolutionary past.[45] Catastrophe types that lie in the anthropic shadow are likely to be underestimated unless statistical corrections are made.

Bostrom’s simulation argument posits that at least one of the following statements is very likely to be true: (1) the fraction of human-level civilizations that survive to reach a technologically mature “posthuman” stage is very close to zero; (2) the fraction of posthuman civilizations interested in running simulations of their evolutionary history is very close to zero; or (3) the fraction of all people with our kind of experiences that are living in a simulation is very close to one.[46][47]
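The trilemma rests on a simple piece of bookkeeping (the formalization below follows Bostrom’s 2003 paper “Are You Living in a Computer Simulation?”; the variable names are ours): if f_p is the fraction of human-level civilizations that reach a posthuman stage, f_i the fraction of those that run ancestor-simulations, and n the average number of such simulations each runs, the fraction of human-like observers who are simulated is f_p·f_i·n / (f_p·f_i·n + 1). A minimal sketch:

```python
# Hedged sketch of the arithmetic behind the simulation argument
# (formula per Bostrom 2003; variable names are ours, not the paper's).
def simulated_fraction(f_p: float, f_i: float, n: float) -> float:
    """Fraction of all human-like observers who live in a simulation.

    f_p: fraction of civilizations that reach a posthuman stage
    f_i: fraction of posthuman civilizations that run ancestor-simulations
    n:   average number of ancestor-simulations each of those runs
    """
    x = f_p * f_i * n
    return x / (x + 1)

# Unless f_p or f_i is driven toward zero (the first two horns of the
# trilemma), even modest simulation counts make simulated observers the
# overwhelming majority (the third horn):
print(simulated_fraction(0.1, 0.1, 1_000_000))  # ~0.9999
```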

The idea has influenced the views of Elon Musk.[48]

Bostrom is favorable towards “human enhancement”, or “self-improvement and human perfectibility through the ethical application of science”,[49][50] as well as a critic of bio-conservative views.[51]

In 1998, Bostrom co-founded (with David Pearce) the World Transhumanist Association[49] (which has since changed its name to Humanity+). In 2004, he co-founded (with James Hughes) the Institute for Ethics and Emerging Technologies, although he is no longer involved in either of these organisations. Bostrom was named in Foreign Policy’s 2009 list of top global thinkers “for accepting no limits on human potential.”[52]

With philosopher Toby Ord, he proposed the reversal test. Given humans’ irrational status quo bias, how can one distinguish between valid criticisms of proposed changes in a human trait and criticisms merely motivated by resistance to change? The reversal test attempts to do this by asking whether it would be a good thing if the trait was altered in the opposite direction.[53]

He has suggested that technology policy aimed at reducing existential risk should seek to influence the order in which various technological capabilities are attained, proposing the principle of differential technological development. This principle states that we ought to retard the development of dangerous technologies, particularly ones that raise the level of existential risk, and accelerate the development of beneficial technologies, particularly those that protect against the existential risks posed by nature or by other technologies.[54][55]

Bostrom’s theory of the Unilateralist’s Curse[56] has been cited as a reason for the scientific community to avoid controversial dangerous research such as reanimating pathogens.[57]

Bostrom has provided policy advice and consulted for an extensive range of governments and organisations. He gave evidence to the House of Lords Select Committee on Digital Skills.[58] He is an advisory board member for the Machine Intelligence Research Institute,[59] Future of Life Institute,[60] Foundational Questions Institute[61] and an external advisor for the Cambridge Centre for the Study of Existential Risk.[62][63]

In response to Bostrom’s writing on artificial intelligence, Oren Etzioni wrote in an MIT Review article, “…predictions that superintelligence is on the foreseeable horizon are not supported by the available data.”[64]

Read more here:

Nick Bostrom – Wikipedia

What is Artificial Superintelligence (ASI)? – Definition …

Most experts would agree that societies have not yet reached the point of artificial superintelligence. In fact, engineers and scientists are still trying to reach a point that would be considered full artificial intelligence, where a computer could be said to have the same cognitive capacity as a human. Although there have been developments like IBM’s Watson supercomputer beating human players at Jeopardy, and assistive devices like Siri engaging in primitive conversation with people, there is still no computer that can really simulate the breadth of knowledge and cognitive ability that a fully developed adult human has. The Turing test, developed decades ago, is still used to talk about whether computers can come close to simulating human conversation and thought, or whether they can trick other people into thinking that a communicating computer is actually a human.

However, there is a lot of theory that anticipates artificial superintelligence coming sooner rather than later. Using examples like Moore’s law, which predicts an ever-increasing density of transistors, experts talk about singularity and the exponential growth of technology, in which full artificial intelligence could manifest within a number of years, and artificial superintelligence could exist in the 21st century.

Read more here:

What is Artificial Superintelligence (ASI)? – Definition …

Chill: Robots Won’t Take All Our Jobs | WIRED

None of this is to say that automation and AI aren’t having an important impact on the economy. But that impact is far more nuanced and limited than the doomsday forecasts suggest. A rigorous study of the impact of robots in manufacturing, agriculture, and utilities across 17 countries, for instance, found that robots did reduce the hours of lower-skilled workers, but they didn’t decrease the total hours worked by humans, and they actually boosted wages. In other words, automation may affect the kind of work humans do, but at the moment, it’s hard to see that it’s leading to a world without work. McAfee, in fact, says of his earlier public statements, “If I had to do it over again, I would put more emphasis on the way technology leads to structural changes in the economy, and less on jobs, jobs, jobs. The central phenomenon is not net job loss. It’s the shift in the kinds of jobs that are available.”

McAfee points to both retail and transportation as areas where automation is likely to have a major impact. Yet even in those industries, the job-loss numbers are less scary than many headlines suggest. Goldman Sachs just released a report predicting that autonomous cars could ultimately eat away 300,000 driving jobs a year. But that won’t happen, the firm argues, for another 25 years, which is more than enough time for the economy to adapt. A recent study by the Organization for Economic Cooperation and Development, meanwhile, predicts that 9 percent of jobs across 21 different countries are under serious threat from automation. That’s a significant number, but not an apocalyptic one.

Granted, there are much scarier forecasts out there, like that University of Oxford study. But on closer examination, those predictions tend to assume that if a job can be automated, it will be fully automated soon, which overestimates both the pace and the completeness of how automation actually gets adopted in the wild. History suggests that the process is much more uneven than that. The ATM, for example, is a textbook example of a machine that was designed to replace human labor. First introduced around 1970, ATMs hit widespread adoption in the late 1990s. Today, there are more than 400,000 ATMs in the US. But, as economist James Bessen has shown, the number of bank tellers actually rose between 2000 and 2010. That’s because even though the average number of tellers per branch fell, ATMs made it cheaper to open branches, so banks opened more of them. True, the Department of Labor does now predict that the number of tellers will decline by 8 percent over the next decade. But that’s 8 percent, not 50 percent. And it’s 45 years after the robot that was supposed to replace them made its debut. (Taking a wider view, Bessen found that of the 271 occupations listed on the 1950 census, only one, elevator operator, had been rendered obsolete by automation by 2010.)

Of course, if automation is happening much faster today than it did in the past, then historical statistics about simple machines like the ATM would be of limited use in predicting the future. Ray Kurzweil’s book The Singularity Is Near (which, by the way, came out 12 years ago) describes the moment when a technological society hits the knee of an exponential growth curve, setting off an explosion of mutually reinforcing new advances. Conventional wisdom in the tech industry says that’s where we are now: that, as futurist Peter Nowak puts it, “the pace of innovation is accelerating exponentially.” Here again, though, the economic evidence tells a different story. In fact, as a recent paper by Lawrence Mishel and Josh Bivens of the Economic Policy Institute puts it, “automation, broadly defined, has actually been slower over the last 10 years or so.” And lately, the pace of microchip advancement has started to lag behind the schedule dictated by Moore’s law.

Corporate America, for its part, certainly doesn’t seem to believe in the jobless future. If the rewards of automation were as immense as predicted, companies would be pouring money into new technology. But they’re not. Investments in software and IT grew more slowly over the past decade than the previous one. And capital investment, according to Mishel and Bivens, has grown more slowly since 2002 than in any other postwar period. That’s exactly the opposite of what you’d expect in a rapidly automating world. As for gadgets like Pepper, total spending on all robotics in the US was just $11.3 billion last year. That’s about a sixth of what Americans spend every year on their pets.

See the article here:

Chill: Robots Won’t Take All Our Jobs | WIRED

Grady Booch: Don’t fear superintelligent AI | TED Talk

New tech spawns new anxieties, says scientist and philosopher Grady Booch, but we don’t need to be afraid of an all-powerful, unfeeling AI. Booch allays our worst (sci-fi induced) fears about superintelligent computers by explaining how we’ll teach, not program, them to share our human values. Rather than worry about an unlikely existential threat, he urges us to consider how artificial …

Read more here:

Grady Booch: Don’t fear superintelligent AI | TED Talk

Eugenics – Wikipedia

Theory and practice aiming to improve the genetic quality of the human population through selective breeding

Eugenics (from Greek eugenēs ‘well-born’, from eu ‘good, well’ and genos ‘race, stock, kin’)[2][3] is a set of beliefs and practices that aim to improve the genetic quality of a human population by excluding (through a variety of morally criticized means) certain genetic groups judged to be inferior, and promoting other genetic groups judged to be superior.[4][5] The definition of eugenics has been a matter of debate since the term was coined by Francis Galton in 1883. The concept predates the term; Plato suggested applying the principles of selective breeding to humans around 400 BCE.

Frederick Osborn’s 1937 journal article “Development of a Eugenic Philosophy” framed it as a social philosophy, a philosophy with implications for social order.[6] That definition is not universally accepted. Osborn advocated for higher rates of sexual reproduction among people with desired traits (“positive eugenics”) or reduced rates of sexual reproduction or sterilization of people with less-desired or undesired traits (“negative eugenics”).

Alternatively, by 2014, gene selection (rather than “people selection”) was made possible through advances in genome editing,[7] leading to what is sometimes called new eugenics, also known as “neo-eugenics”, “consumer eugenics”, or “liberal eugenics”.

While eugenic principles have been practiced as early as ancient Greece, the contemporary history of eugenics began in the early 20th century, when a popular eugenics movement emerged in the United Kingdom,[8] and then spread to many countries, including the United States, Canada,[9] and most European countries. In this period, eugenic ideas were espoused across the political spectrum. Consequently, many countries adopted eugenic policies, intended to improve the quality of their populations’ genetic stock. Such programs included both positive measures, such as encouraging individuals deemed particularly “fit” to reproduce, and negative measures, such as marriage prohibitions and forced sterilization of people deemed unfit for reproduction. Those deemed “unfit to reproduce” often included people with mental or physical disabilities, people who scored in the low ranges on different IQ tests, criminals and “deviants,” and members of disfavored minority groups.

The eugenics movement became associated with Nazi Germany and the Holocaust when many of the defendants at the Nuremberg trials attempted to justify their human rights abuses by claiming there was little difference between the Nazi eugenics programs and the U.S. eugenics programs.[10] In the decades following World War II, with the institution of human rights, many countries gradually began to abandon eugenics policies, although some Western countries, the United States, Canada, and Sweden among them, continued to carry out forced sterilizations.

Since the 1980s and 1990s, with new assisted reproductive technology procedures available, such as gestational surrogacy (available since 1985), preimplantation genetic diagnosis (available since 1989), and cytoplasmic transfer (first performed in 1996), fear has emerged about the possible revival of a more potent form of eugenics after decades of promoting human rights. The State of California Legislature and Governor passed a form of negative eugenics into law via SB 1095 (2016), resulting in a State law requiring the screening for “any disease” (…) “detectable in the blood” prior to birth.[11]

The bill, still law in California, has been widely regarded as a form of scientific racism, though its proponents continue to claim that it is necessary. A system was proposed by California Senator Skinner to compensate victims of the well-documented examples of prison sterilizations resulting from California’s eugenics programs, but this did not pass by the bill’s 2018 deadline in the Legislature.[12]

A major criticism of eugenics policies is that, regardless of whether negative or positive policies are used, they are susceptible to abuse because the genetic selection criteria are determined by whichever group has political power at the time. Furthermore, negative eugenics in particular is criticized by many as a violation of basic human rights, which include the right to reproduce. Another criticism is that eugenics policies eventually lead to a loss of genetic diversity, thereby resulting in inbreeding depression due to a loss of genetic variation. Yet another criticism of contemporary eugenics policies is that they propose to permanently and artificially disrupt millions of years of evolution, and that attempting to create genetic lines “clean” of “disorders” can have far-reaching ancillary downstream effects in the genetic ecology, including negative effects on immunity and species resilience.

The concept of positive eugenics to produce better human beings has existed at least since Plato suggested selective mating to produce a guardian class.[14] In Sparta, every Spartan child was inspected by the council of elders, the Gerousia, which determined if the child was fit to live or not. In the early years of ancient Rome, a Roman father was obliged by law to immediately kill his child if they were physically disabled.[15] Among the ancient Germanic tribes, people who were cowardly, unwarlike or “stained with abominable vices” were put to death, usually by being drowned in swamps.[16][17]

The first formal negative eugenics, that is a legal provision against the birth of allegedly inferior human beings, was promulgated in Western European culture by the Christian Council of Agde in 506, which forbade marriage between cousins.[18]

This idea was also promoted by William Goodell (1829–1894), who advocated the castration and spaying of the insane.[19][20]

The idea of a modern project of improving the human population through a statistical understanding of heredity used to encourage good breeding was originally developed by Francis Galton and, initially, was closely linked to Darwinism and the theory of natural selection.[22] Galton had read his half-cousin Charles Darwin’s theory of evolution, which sought to explain the development of plant and animal species, and desired to apply it to humans. Based on his biographical studies, Galton believed that desirable human qualities were hereditary traits, although Darwin strongly disagreed with this elaboration of his theory.[23] In 1883, one year after Darwin’s death, Galton gave his research a name: eugenics.[24] With the introduction of genetics, eugenics became associated with genetic determinism, the belief that human character is entirely or mostly determined by genes, unaffected by education or living conditions. Many of the early geneticists were not Darwinians, and evolution theory was not needed for eugenics policies based on genetic determinism.[22] Throughout its recent history, eugenics has remained controversial.

Eugenics became an academic discipline at many colleges and universities and received funding from many sources.[26] Organizations were formed to win public support and sway opinion towards responsible eugenic values in parenthood, including the British Eugenics Education Society of 1907 and the American Eugenics Society of 1921. Both sought support from leading clergymen and modified their message to meet religious ideals.[27] In 1909 the Anglican clergymen William Inge and James Peile both wrote for the British Eugenics Education Society. Inge was an invited speaker at the 1921 International Eugenics Conference, which was also endorsed by the Roman Catholic Archbishop of New York Patrick Joseph Hayes.[27]

Three International Eugenics Conferences presented a global venue for eugenists with meetings in 1912 in London, and in 1921 and 1932 in New York City. Eugenic policies were first implemented in the early 1900s in the United States.[28] It also took root in France, Germany, and Great Britain.[29] Later, in the 1920s and 1930s, the eugenic policy of sterilizing certain mental patients was implemented in other countries including Belgium,[30] Brazil,[31] Canada,[32] Japan and Sweden.

In addition to being practiced in a number of countries, eugenics was internationally organized through the International Federation of Eugenics Organizations. Its scientific aspects were carried on through research bodies such as the Kaiser Wilhelm Institute of Anthropology, Human Heredity, and Eugenics, the Cold Spring Harbor Carnegie Institution for Experimental Evolution, and the Eugenics Record Office. Politically, the movement advocated measures such as sterilization laws. In its moral dimension, eugenics rejected the doctrine that all human beings are born equal and redefined moral worth purely in terms of genetic fitness. Its racist elements included pursuit of a pure “Nordic race” or “Aryan” genetic pool and the eventual elimination of “unfit” races.

Early critics of the philosophy of eugenics included the American sociologist Lester Frank Ward,[41] the English writer G. K. Chesterton, the German-American anthropologist Franz Boas, who argued that advocates of eugenics greatly over-estimate the influence of biology,[42] and Scottish tuberculosis pioneer and author Halliday Sutherland. Ward’s 1913 article “Eugenics, Euthenics, and Eudemics”, Chesterton’s 1917 book Eugenics and Other Evils, and Boas’ 1916 article “Eugenics” (published in The Scientific Monthly) were all harshly critical of the rapidly growing movement. Sutherland identified eugenists as a major obstacle to the eradication and cure of tuberculosis in his 1917 address “Consumption: Its Cause and Cure”,[43] and criticism of eugenists and Neo-Malthusians in his 1921 book Birth Control led to a writ for libel from the eugenist Marie Stopes. Several biologists were also antagonistic to the eugenics movement, including Lancelot Hogben.[44] Other biologists such as J. B. S. Haldane and R. A. Fisher expressed skepticism in the belief that sterilization of “defectives” would lead to the disappearance of undesirable genetic traits.[45]

Among institutions, the Catholic Church was an opponent of state-enforced sterilizations.[46] Attempts by the Eugenics Education Society to persuade the British government to legalize voluntary sterilization were opposed by Catholics and by the Labour Party.[47] The American Eugenics Society initially gained some Catholic supporters, but Catholic support declined following the 1930 papal encyclical Casti connubii.[27] In this, Pope Pius XI explicitly condemned sterilization laws: “Public magistrates have no direct power over the bodies of their subjects; therefore, where no crime has taken place and there is no cause present for grave punishment, they can never directly harm, or tamper with the integrity of the body, either for the reasons of eugenics or for any other reason.”[48]

As a social movement, eugenics reached its greatest popularity in the early decades of the 20th century, when it was practiced around the world and promoted by governments, institutions, and influential individuals. Many countries enacted[49] various eugenics policies, including: genetic screenings, birth control, promoting differential birth rates, marriage restrictions, segregation (both racial segregation and sequestering the mentally ill), compulsory sterilization, forced abortions or forced pregnancies, ultimately culminating in genocide.

The scientific reputation of eugenics started to decline in the 1930s, a time when Ernst Rüdin used eugenics as a justification for the racial policies of Nazi Germany. Adolf Hitler had praised and incorporated eugenic ideas in Mein Kampf in 1925 and emulated eugenic legislation for the sterilization of “defectives” that had been pioneered in the United States once he took power. Some common early 20th century eugenics methods involved identifying and classifying individuals and their families, including the poor, mentally ill, blind, deaf, developmentally disabled, promiscuous women, homosexuals, and racial groups (such as the Roma and Jews in Nazi Germany) as “degenerate” or “unfit”, and therefore led to segregation, institutionalization, sterilization, euthanasia, and even mass murder. The Nazi practice of euthanasia was carried out on hospital patients in the Aktion T4 centers such as Hartheim Castle.

By the end of World War II, many discriminatory eugenics laws were abandoned, having become associated with Nazi Germany.[52] H. G. Wells, who had called for “the sterilization of failures” in 1904,[53] stated in his 1940 book The Rights of Man: Or What are we fighting for? that among the human rights, which he believed should be available to all people, was “a prohibition on mutilation, sterilization, torture, and any bodily punishment”.[54] After World War II, the practice of “imposing measures intended to prevent births within [a national, ethnical, racial or religious] group” fell within the definition of the new international crime of genocide, set out in the Convention on the Prevention and Punishment of the Crime of Genocide.[55] The Charter of Fundamental Rights of the European Union also proclaims “the prohibition of eugenic practices, in particular those aiming at selection of persons”.[56] In spite of the decline in discriminatory eugenics laws, some government mandated sterilizations continued into the 21st century. During the ten years President Alberto Fujimori led Peru from 1990 to 2000, 2,000 persons were allegedly involuntarily sterilized.[57] China maintained its one-child policy until 2015 as well as a suite of other eugenics based legislation to reduce population size and manage fertility rates of different populations.[58][59][60] In 2007 the United Nations reported coercive sterilizations and hysterectomies in Uzbekistan.[61] During the years 2005 to 2013, nearly one-third of the 144 California prison inmates who were sterilized did not give lawful consent to the operation.[62]

Developments in genetic, genomic, and reproductive technologies at the end of the 20th century have raised numerous questions regarding the ethical status of eugenics, effectively creating a resurgence of interest in the subject. Some, such as UC Berkeley sociologist Troy Duster, claim that modern genetics is a back door to eugenics.[63] This view is shared by White House Assistant Director for Forensic Sciences, Tania Simoncelli, who stated in a 2003 publication by the Population and Development Program at Hampshire College that advances in pre-implantation genetic diagnosis (PGD) are moving society to a “new era of eugenics”, and that, unlike the Nazi eugenics, modern eugenics is consumer driven and market based, “where children are increasingly regarded as made-to-order consumer products”.[64] In a 2006 newspaper article, Richard Dawkins said that discussion regarding eugenics was inhibited by the shadow of Nazi misuse, to the extent that some scientists would not admit that breeding humans for certain abilities is at all possible. He believes that it is not physically different from breeding domestic animals for traits such as speed or herding skill. Dawkins felt that enough time had elapsed to at least ask just what the ethical differences were between breeding for ability versus training athletes or forcing children to take music lessons, though he could think of persuasive reasons to draw the distinction.[65]

Lee Kuan Yew, the Founding Father of Singapore, started promoting eugenics as early as 1983.[66][67]

In October 2015, the United Nations’ International Bioethics Committee wrote that the ethical problems of human genetic engineering should not be confused with the ethical problems of the 20th-century eugenics movements. However, human genetic engineering is still problematic because it challenges the idea of human equality and opens up new forms of discrimination and stigmatization for those who do not want, or cannot afford, the technology.[68]

Transhumanism is often associated with eugenics, although most transhumanists holding similar views nonetheless distance themselves from the term “eugenics” (preferring “germinal choice” or “reprogenetics”)[69] to avoid having their position confused with the discredited theories and practices of early-20th-century eugenic movements.

Prenatal screening can be considered a form of contemporary eugenics because it may lead to abortions of children with undesirable traits.[70]

The term eugenics and its modern field of study were first formulated by Francis Galton in 1883,[71] drawing on the recent work of his half-cousin Charles Darwin.[72][73] Galton published his observations and conclusions in his book Inquiries into Human Faculty and Its Development.

The origins of the concept began with certain interpretations of Mendelian inheritance and the theories of August Weismann. The word eugenics is derived from the Greek word eu (“good” or “well”) and the suffix -genēs (“born”), and was coined by Galton in 1883 to replace the word “stirpiculture”, which he had used previously but which had come to be mocked due to its perceived sexual overtones.[75] Galton defined eugenics as “the study of all agencies under human control which can improve or impair the racial quality of future generations”.[76]

Historically, the term eugenics has referred to everything from prenatal care for mothers to forced sterilization and euthanasia.[77] To population geneticists, the term has included the avoidance of inbreeding without altering allele frequencies; for example, J. B. S. Haldane wrote that “the motor bus, by breaking up inbred village communities, was a powerful eugenic agent.”[78] Debate as to what exactly counts as eugenics continues today.[79]

Edwin Black, journalist and author of War Against the Weak, claims eugenics is often deemed a pseudoscience because what is defined as a genetic improvement of a desired trait is often deemed a cultural choice rather than a matter that can be determined through objective scientific inquiry.[80] The most disputed aspect of eugenics has been the definition of “improvement” of the human gene pool, such as what is a beneficial characteristic and what is a defect. Historically, this aspect of eugenics was tainted with scientific racism and pseudoscience.[81][82][83]

Early eugenists were mostly concerned with factors of perceived intelligence that often correlated strongly with social class. Some of these early eugenists include Karl Pearson and Walter Weldon, who worked on this at University College London.[23]

Eugenics also had a place in medicine. In his lecture “Darwinism, Medical Progress and Eugenics”, Karl Pearson said that everything concerning eugenics fell into the field of medicine. He effectively treated the two terms as equivalent. He was supported in part by the fact that Francis Galton, the father of eugenics, also had medical training.[84]

Eugenic policies have been conceptually divided into two categories.[77] Positive eugenics is aimed at encouraging reproduction among the genetically advantaged; for example, the reproduction of the intelligent, the healthy, and the successful. Possible approaches include financial and political stimuli, targeted demographic analyses, in vitro fertilization, egg transplants, and cloning.[85] The movie Gattaca provides a fictional example of a dystopian society that uses eugenics to decide what people are capable of and their place in the world. Negative eugenics is aimed at eliminating, through sterilization or segregation, those deemed physically, mentally, or morally “undesirable”. This includes abortions, sterilization, and other methods of family planning.[85] Both positive and negative eugenics can be coercive; abortion for fit women, for example, was illegal in Nazi Germany.[86]

Jon Entine claims that eugenics simply means “good genes” and using it as synonym for genocide is an “all-too-common distortion of the social history of genetics policy in the United States”. According to Entine, eugenics developed out of the Progressive Era and not “Hitler’s twisted Final Solution”.[87]

According to Richard Lynn, eugenics may be divided into two main categories based on the ways in which the methods of eugenics can be applied.[88]

The first major challenge to conventional eugenics based on genetic inheritance was made in 1915 by Thomas Hunt Morgan. He demonstrated that major genetic changes could occur outside of inheritance through his discovery of a fruit fly (Drosophila melanogaster) with white eyes hatched from a family with red eyes. Additionally, Morgan criticized the view that certain traits, such as intelligence and criminality, were hereditary because these traits were subjective.[126] Despite Morgan’s public rejection of eugenics, much of his genetic research was adopted by proponents of eugenics.[127][128]

The heterozygote test is used for the early detection of recessive hereditary diseases, allowing for couples to determine if they are at risk of passing genetic defects to a future child.[129] The goal of the test is to estimate the likelihood of passing the hereditary disease to future descendants.[129]

Recessive traits can be severely reduced, but never eliminated unless the complete genetic makeup of all members of the gene pool were known. As only very few undesirable traits, such as Huntington’s disease, are dominant, it could be argued[by whom?] from certain perspectives that the practicality of “eliminating” traits is quite low.[citation needed]

There are examples of eugenic acts that managed to lower the prevalence of recessive diseases, although not influencing the prevalence of heterozygote carriers of those diseases. The elevated prevalence of certain genetically transmitted diseases among the Ashkenazi Jewish population (Tay–Sachs, cystic fibrosis, Canavan’s disease, and Gaucher’s disease) has been decreased in current populations by the application of genetic screening.[130]

Pleiotropy occurs when one gene influences multiple, seemingly unrelated phenotypic traits, an example being phenylketonuria, which is a human disease that affects multiple systems but is caused by one gene defect.[131] Andrzej Pękalski, from the University of Wrocław, argues that eugenics can cause harmful loss of genetic diversity if a eugenics program selects a pleiotropic gene that could possibly be associated with a positive trait. Pękalski uses the example of a coercive government eugenics program that prohibits people with myopia from breeding but has the unintended consequence of also selecting against high intelligence since the two go together.[132]

Eugenic policies may lead to a loss of genetic diversity. Further, a culturally-accepted “improvement” of the gene pool may result in extinction, due to increased vulnerability to disease, reduced ability to adapt to environmental change, and other factors that may not be anticipated in advance. This has been evidenced in numerous instances, in isolated island populations. A long-term, species-wide eugenics plan might lead to such a scenario because the elimination of traits deemed undesirable would reduce genetic diversity by definition.[133]

Edward M. Miller claims that, in any one generation, any realistic program should make only minor changes in a fraction of the gene pool, giving plenty of time to reverse direction if unintended consequences emerge, reducing the likelihood of the elimination of desirable genes.[134] Miller also argues that any appreciable reduction in diversity is so far in the future that little concern is needed for now.[134]

While the science of genetics has increasingly provided means by which certain characteristics and conditions can be identified and understood, given the complexity of human genetics, culture, and psychology, there is at this point no agreed objective means of determining which traits might be ultimately desirable or undesirable.[original research?] Some diseases such as sickle-cell disease and cystic fibrosis respectively confer immunity to malaria and resistance to cholera when a single copy of the recessive allele is contained within the genotype of the individual.[citation needed] Reducing the instance of sickle-cell disease genes in Africa where malaria is a common and deadly disease could have severe consequences.[original research?]

Societal and political consequences of eugenics call for a place in the discussion on the ethics behind the eugenics movement.[135] Many of the ethical concerns regarding eugenics arise from its controversial past, prompting a discussion on what place, if any, it should have in the future. Advances in science have changed eugenics. In the past, eugenics had more to do with sterilization and enforced reproduction laws.[136] Now, in the age of a progressively mapped genome, embryos can be tested for susceptibility to disease, gender, and genetic defects, and alternative methods of reproduction such as in vitro fertilization are becoming more common.[137] Therefore, eugenics is no longer ex post facto regulation of the living but instead preemptive action on the unborn.[138]

With this change, however, there are ethical concerns which lack adequate attention, and which must be addressed before eugenic policies can be properly implemented in the future. Sterilized individuals, for example, could volunteer for the procedure, albeit under incentive or duress, or at least voice their opinion. The unborn fetus on which these new eugenic procedures are performed cannot speak out, as the fetus lacks the voice to consent or to express his or her opinion.[139] Philosophers disagree about the proper framework for reasoning about such actions, which change the very identity and existence of future persons.[140]

A common criticism of eugenics is that “it inevitably leads to measures that are unethical”.[141] Some fear future “eugenics wars” as the worst-case scenario: the return of coercive state-sponsored genetic discrimination and human rights violations such as compulsory sterilization of persons with genetic defects, the killing of the institutionalized and, specifically, segregation and genocide of races perceived as inferior.[142] Health law professor George Annas and technology law professor Lori Andrews are prominent advocates of the position that the use of these technologies could lead to such human-posthuman caste warfare.[143][144]

In his 2003 book Enough: Staying Human in an Engineered Age, environmental ethicist Bill McKibben argued at length against germinal choice technology and other advanced biotechnological strategies for human enhancement. He writes that it would be morally wrong for humans to tamper with fundamental aspects of themselves (or their children) in an attempt to overcome universal human limitations, such as vulnerability to aging, maximum life span and biological constraints on physical and cognitive ability. Attempts to “improve” themselves through such manipulation would remove limitations that provide a necessary context for the experience of meaningful human choice. He claims that human lives would no longer seem meaningful in a world where such limitations could be overcome with technology. Even the goal of using germinal choice technology for clearly therapeutic purposes should be relinquished, since it would inevitably produce temptations to tamper with such things as cognitive capacities. He argues that it is possible for societies to benefit from renouncing particular technologies, using as examples Ming China, Tokugawa Japan and the contemporary Amish.[145]

Some, for example Nathaniel C. Comfort from Johns Hopkins University, claim that the change from state-led reproductive-genetic decision-making to individual choice has moderated the worst abuses of eugenics by transferring the decision-making from the state to the patient and their family.[146] Comfort suggests that “the eugenic impulse drives us to eliminate disease, live longer and healthier, with greater intelligence, and a better adjustment to the conditions of society; and the health benefits, the intellectual thrill and the profits of genetic bio-medicine are too great for us to do otherwise.”[147] Others, such as bioethicist Stephen Wilkinson of Keele University and Honorary Research Fellow Eve Garrard at the University of Manchester, claim that some aspects of modern genetics can be classified as eugenics, but that this classification does not inherently make modern genetics immoral. In a co-authored publication by Keele University, they stated that “[e]ugenics doesn’t seem always to be immoral, and so the fact that PGD, and other forms of selective reproduction, might sometimes technically be eugenic, isn’t sufficient to show that they’re wrong.”[148]

In their book published in 2000, From Chance to Choice: Genetics and Justice, bioethicists Allen Buchanan, Dan Brock, Norman Daniels and Daniel Wikler argued that liberal societies have an obligation to encourage as wide an adoption of eugenic enhancement technologies as possible (so long as such policies do not infringe on individuals’ reproductive rights or exert undue pressures on prospective parents to use these technologies) in order to maximize public health and minimize the inequalities that may result from both natural genetic endowments and unequal access to genetic enhancements.[149]

The original position, a hypothetical situation developed by American philosopher John Rawls, has been used as an argument for negative eugenics.[150][151]

See more here:

Eugenics – Wikipedia

eugenics | Description, History, & Modern Eugenics …

Eugenics, the selection of desired heritable characteristics in order to improve future generations, typically in reference to humans. The term eugenics was coined in 1883 by British explorer and natural scientist Francis Galton, who, influenced by Charles Darwin’s theory of natural selection, advocated a system that would allow “the more suitable races or strains of blood a better chance of prevailing speedily over the less suitable”. Social Darwinism, the popular theory in the late 19th century that life for humans in society was ruled by “survival of the fittest”, helped advance eugenics into serious scientific study in the early 1900s. By World War I many scientific authorities and political leaders supported eugenics. However, it ultimately failed as a science in the 1930s and ’40s, when the assumptions of eugenicists became heavily criticized and the Nazis used eugenics to support the extermination of entire races.

Although eugenics as understood today dates from the late 19th century, efforts to select matings in order to secure offspring with desirable traits date from ancient times. Plato’s Republic (c. 378 BCE) depicts a society where efforts are undertaken to improve human beings through selective breeding. Later, Italian philosopher and poet Tommaso Campanella, in City of the Sun (1623), described a utopian community in which only the socially elite are allowed to procreate. Galton, in Hereditary Genius (1869), proposed that a system of arranged marriages between men of distinction and women of wealth would eventually produce a gifted race. In 1865 the basic laws of heredity were discovered by the father of modern genetics, Gregor Mendel. His experiments with peas demonstrated that each physical trait was the result of a combination of two units (now known as genes) and could be passed from one generation to another. However, his work was largely ignored until its rediscovery in 1900. This fundamental knowledge of heredity provided eugenicists, including Galton (who influenced his cousin Charles Darwin), with scientific evidence to support the improvement of humans through selective breeding.

The advancement of eugenics was concurrent with an increasing appreciation of Darwin’s account of change or evolution within society, what contemporaries referred to as social Darwinism. Darwin had concluded his explanations of evolution by arguing that the greatest step humans could make in their own history would occur when they realized that they were not completely guided by instinct. Rather, humans, through selective reproduction, had the ability to control their own future evolution. A language pertaining to reproduction and eugenics developed, leading to terms such as positive eugenics, defined as promoting the proliferation of “good stock”, and negative eugenics, defined as prohibiting marriage and breeding between “defective stock”. For eugenicists, nature was far more contributory than nurture in shaping humanity.

During the early 1900s eugenics became a serious scientific study pursued by both biologists and social scientists. They sought to determine the extent to which human characteristics of social importance were inherited. Among their greatest concerns were the predictability of intelligence and certain deviant behaviours. Eugenics, however, was not confined to scientific laboratories and academic institutions. It began to pervade cultural thought around the globe, including the Scandinavian countries, most other European countries, North America, Latin America, Japan, China, and Russia. In the United States the eugenics movement began during the Progressive Era and remained active through 1940. It gained considerable support from leading scientific authorities such as zoologist Charles B. Davenport, plant geneticist Edward M. East, and geneticist and Nobel Prize laureate Hermann J. Muller. Political leaders in favour of eugenics included U.S. Pres. Theodore Roosevelt, Secretary of State Elihu Root, and Associate Justice of the Supreme Court John Marshall Harlan. Internationally, there were many individuals whose work supported eugenic aims, including British scientists J.B.S. Haldane and Julian Huxley and Russian scientists Nikolay K. Koltsov and Yury A. Filipchenko.

Galton had endowed a research fellowship in eugenics in 1904 and, in his will, provided funds for a chair of eugenics at University College, London. The fellowship and later the chair were occupied by Karl Pearson, a brilliant mathematician who helped to create the science of biometry, the statistical aspects of biology. Pearson was a controversial figure who believed that environment had little to do with the development of mental or emotional qualities. He felt that the high birth rate of the poor was a threat to civilization and that the higher races must supplant the lower. His views gave countenance to those who believed in racial and class superiority. Thus, Pearson shares the blame for the discredit later brought on eugenics.

In the United States, the Eugenics Record Office (ERO) was opened at Cold Spring Harbor, Long Island, New York, in 1910 with financial support from the legacy of railroad magnate Edward Henry Harriman. Whereas ERO efforts were officially overseen by Charles B. Davenport, director of the Station for Experimental Study of Evolution (one of the biology research stations at Cold Spring Harbor), ERO activities were directly superintended by Harry H. Laughlin, a professor from Kirksville, Missouri. The ERO was organized around a series of missions. These missions included serving as the national repository and clearinghouse for eugenics information, compiling an index of traits in American families, training fieldworkers to gather data throughout the United States, supporting investigations into the inheritance patterns of particular human traits and diseases, advising on the eugenic fitness of proposed marriages, and communicating all eugenic findings through a series of publications. To accomplish these goals, further funding was secured from the Carnegie Institution of Washington, John D. Rockefeller, Jr., the Battle Creek Race Betterment Foundation, and the Human Betterment Foundation.

Prior to the founding of the ERO, eugenics work in the United States was overseen by a standing committee of the American Breeders Association (eugenics section established in 1906), chaired by ichthyologist and Stanford University president David Starr Jordan. Research from around the globe was featured at three international congresses, held in 1912, 1921, and 1932. In addition, eugenics education was monitored in Britain by the English Eugenics Society (founded by Galton in 1907 as the Eugenics Education Society) and in the United States by the American Eugenics Society.

Following World War I, the United States gained status as a world power. A concomitant fear arose that if the healthy stock of the American people became diluted with socially undesirable traits, the country’s political and economic strength would begin to crumble. The maintenance of world peace by fostering democracy, capitalism, and, at times, eugenics-based schemes was central to the activities of the Internationalists, a group of prominent American leaders in business, education, publishing, and government. One core member of this group, the New York lawyer Madison Grant, aroused considerable pro-eugenic interest through his best-selling book The Passing of the Great Race (1916). Beginning in 1920, a series of congressional hearings was held to identify problems that immigrants were causing the United States. As the country’s eugenics expert, Harry Laughlin provided tabulations showing that certain immigrants, particularly those from Italy, Greece, and Eastern Europe, were significantly overrepresented in American prisons and institutions for the feebleminded. Further data were construed to suggest that these groups were contributing too many genetically and socially inferior people. Laughlin’s classification of these individuals included the feebleminded, the insane, the criminalistic, the epileptic, the inebriate, the diseased (including those with tuberculosis, leprosy, and syphilis), the blind, the deaf, the deformed, the dependent, chronic recipients of charity, paupers, and ne’er-do-wells. Racial overtones also pervaded much of the British and American eugenics literature. In 1923 Laughlin was sent by the U.S. secretary of labour as an immigration agent to Europe to investigate the chief emigrant-exporting nations. Laughlin sought to determine the feasibility of a plan whereby every prospective immigrant would be interviewed before embarking to the United States. He provided testimony before Congress that ultimately led to a new immigration law in 1924 that severely restricted the annual immigration of individuals from countries previously claimed to have contributed excessively to the dilution of American “good stock”.

Immigration control was but one method to control eugenically the reproductive stock of a country. Laughlin appeared at the centre of other U.S. efforts to provide eugenicists greater reproductive control over the nation. He approached state legislators with a model law to control the reproduction of institutionalized populations. By 1920, two years before the publication of Laughlin’s influential Eugenical Sterilization in the United States (1922), 3,200 individuals across the country were reported to have been involuntarily sterilized. That number tripled by 1929, and by 1938 more than 30,000 people were claimed to have met this fate. More than half of the states adopted Laughlin’s law, with California, Virginia, and Michigan leading the sterilization campaign. Laughlin’s efforts secured staunch judicial support in 1927. In the precedent-setting case of Buck v. Bell, Supreme Court Justice Oliver Wendell Holmes, Jr., upheld the Virginia statute and claimed, “It is better for all the world, if instead of waiting to execute degenerate offspring for crime, or to let them starve for their imbecility, society can prevent those who are manifestly unfit from continuing their kind.”

During the 1930s eugenics gained considerable popular support across the United States. Hygiene courses in public schools and eugenics courses in colleges spread eugenic-minded values to many. A eugenics exhibit titled “Pedigree-Study in Man” was featured at the Chicago World’s Fair in 1933–34. Consistent with the fair’s “Century of Progress” theme, stations were organized around efforts to show how favourable traits in the human population could best be perpetuated. Contrasts were drawn between the emulative presidential Roosevelt family and the degenerate Ishmael family (one of several pseudonymous family names used, the rationale for which was not given). By studying the passage of ancestral traits, fairgoers were urged to adopt the progressive view that responsible individuals should pursue marriage ever mindful of eugenics principles. Booths were set up at county and state fairs promoting “fitter families” contests, and medals were awarded to eugenically sound families. Drawing again upon long-standing eugenic practices in agriculture, popular eugenic advertisements claimed it was about time that humans received the same attention in the breeding of better babies that had been given to livestock and crops for centuries.

Anti-eugenics sentiment began to appear after 1910 and intensified during the 1930s. Most commonly it was based on religious grounds. For example, the 1930 papal encyclical Casti connubii condemned reproductive sterilization, though it did not specifically prohibit positive eugenic attempts to amplify the inheritance of beneficial traits. Many Protestant writings sought to reconcile age-old Christian warnings about the heritable sins of the father to pro-eugenic ideals. Indeed, most of the religion-based popular writings of the period supported positive means of improving the physical and moral makeup of humanity.

In the early 1930s Nazi Germany adopted American measures to identify and selectively reduce the presence of those deemed to be socially inferior through involuntary sterilization. A rhetoric of positive eugenics in the building of a master race pervaded Rassenhygiene (racial hygiene) movements. When Germany extended its practices far beyond sterilization in efforts to eliminate the Jewish and other non-Aryan populations, the United States became increasingly concerned over its own support of eugenics. Many scientists, physicians, and political leaders began to denounce the work of the ERO publicly. After considerable reflection, the Carnegie Institution formally closed the ERO at the end of 1939.

During the aftermath of World War II, eugenics became stigmatized such that many individuals who had once hailed it as a science now spoke disparagingly of it as a failed pseudoscience. Eugenics was dropped from organization and publication names. In 1954 Britain’s Annals of Eugenics was renamed Annals of Human Genetics. In 1972 the American Eugenics Society adopted the less-offensive name Society for the Study of Social Biology. Its publication, once popularly known as the Eugenics Quarterly, had already been renamed Social Biology in 1969.

U.S. Senate hearings in 1973, chaired by Sen. Ted Kennedy, revealed that thousands of U.S. citizens had been sterilized under federally supported programs. The U.S. Department of Health, Education, and Welfare proposed guidelines encouraging each state to repeal its sterilization laws. Other countries, most notably China, continue to support eugenics-directed programs openly in order to ensure the genetic makeup of their future.

Despite the dropping of the term eugenics, eugenic ideas remained prevalent in many issues surrounding human reproduction. Medical genetics, a post-World War II medical specialty, encompasses a wide range of health concerns, from genetic screening and counseling to fetal gene manipulation and the treatment of adults suffering from hereditary disorders. Because certain diseases (e.g., hemophilia and Tay-Sachs disease) are now known to be genetically transmitted, many couples choose to undergo genetic screening, in which they learn the chances that their offspring have of being affected by some combination of their hereditary backgrounds. Couples at risk of passing on genetic defects may opt to remain childless or to adopt children. Furthermore, it is now possible to diagnose certain genetic defects in the unborn. Many couples choose to terminate a pregnancy that involves a genetically disabled offspring. These developments have reinforced the eugenic aim of identifying and eliminating undesirable genetic material.

Counterbalancing this trend, however, has been medical progress that enables victims of many genetic diseases to live fairly normal lives. Direct manipulation of harmful genes is also being studied. If perfected, it could obviate eugenic arguments for restricting reproduction among those who carry harmful genes. Such conflicting innovations have complicated the controversy surrounding what many call the new eugenics. Moreover, suggestions for expanding eugenics programs, which range from the creation of sperm banks for the genetically superior to the potential cloning of human beings, have met with vigorous resistance from the public, which often views such programs as unwarranted interference with nature or as opportunities for abuse by authoritarian regimes.

Applications of the Human Genome Project are often referred to as Brave New World genetics or the new eugenics, in part because they have helped to dramatically increase knowledge of human genetics. In addition, 21st-century technologies such as gene editing, which can potentially be used to treat disease or to alter traits, have further renewed concerns. However, the ethical, legal, and social implications of such tools are monitored much more closely than were early 20th-century eugenics programs. Applications generally are more focused on the reduction of genetic diseases than on improving intelligence.

Still, with or without the use of the term, many eugenics-related concerns are reemerging as a new group of individuals decide how to regulate the application of genetics science and technology. This gene-directed activity, in attempting to improve upon nature, may not be that distant from what Galton implied in 1909 when he described eugenics as the study of agencies, under social control, which may improve or impair future generations.

Excerpt from:

eugenics | Description, History, & Modern Eugenics …

Introduction to Eugenics – Genetics Generation

Introduction to Eugenics

Eugenics is a movement that is aimed at improving the genetic composition of the human race. Historically, eugenicists advocated selective breeding to achieve these goals. Today we have technologies that make it possible to more directly alter the genetic composition of an individual. However, people differ in their views on how to best (and ethically) use this technology.

History of Eugenics

Logo of the Second International Congress of Eugenics, 1921. Image courtesy of Wikimedia Commons.

In 1883, Sir Francis Galton, a respected British scholar and cousin of Charles Darwin, first used the term eugenics, meaning “well-born”. Galton believed that the human race could help direct its future by selectively breeding individuals who have desired traits. This idea was based on Galton’s study of upper-class Britain. Following these studies, Galton concluded that an elite position in society was due to a good genetic makeup. While Galton’s plans to improve the human race through selective breeding never came to fruition in Britain, they eventually took sinister turns in other countries.

The eugenics movement began in the U.S. in the late 19th century. However, unlike in Britain, eugenicists in the U.S. focused on efforts to stop the transmission of negative or "undesirable" traits from generation to generation. In response to these ideas, some U.S. leaders, private citizens, and corporations started funding eugenical studies. This led to the 1911 establishment of the Eugenics Record Office (ERO) in Cold Spring Harbor, New York. The ERO spent time tracking family histories and concluded that people deemed to be unfit more often came from families that were poor, low in social standing, immigrant, and/or minority. Further, ERO researchers claimed that the "undesirable" traits in these families, such as pauperism, were due to genetics, and not a lack of resources.

Committees were convened to offer solutions to the problem of the growing number of "undesirables" in the U.S. population. Stricter immigration rules were enacted, but the most ominous resolution was a plan to sterilize "unfit" individuals to prevent them from passing on their negative traits. During the 20th century, a total of 33 states had sterilization programs in place. While at first sterilization efforts targeted mentally ill people exclusively, the traits later deemed serious enough to warrant sterilization included alcoholism, criminality, chronic poverty, blindness, deafness, feeble-mindedness, and promiscuity. It was also not uncommon for African American women to be sterilized during other medical procedures without consent. Most people subjected to these sterilizations had no choice, and because the program was run by the government, they had little chance of escaping the procedure. It is thought that around 65,000 Americans were sterilized during this time period.

The eugenics movement in the U.S. slowly lost favor over time and was waning by the start of World War II. When the horrors of Nazi Germany became apparent, as well as Hitler's use of eugenic principles to justify the atrocities, eugenics lost all credibility as a field of study or even an ideal that should be pursued.


What is eugenics? pgEd

Eugenics is the philosophy and social movement that argues it is possible to improve the human race and society by encouraging reproduction by people or populations with desirable traits (termed positive eugenics) and discouraging reproduction by people with undesirable qualities (termed negative eugenics). The eugenics movement began in the United States in the early part of the 20th century; the United States was the first country to have a systematic program for performing sterilizations on individuals without their knowledge or against their will. It was supported and encouraged by a wide swath of people, including politicians, scientists, social reformers, prominent business leaders and other influential individuals who shared a goal of reducing the burden on society. The majority of people targeted for sterilization were deemed of inferior intelligence, particularly poor people and eventually people of color.[1]

In the early 20th century, many scientists were skeptical of the scientific underpinnings of eugenics. Eugenicists argued that parents from good stock produced healthier and intellectually superior children. They believed that traits such as poverty, shiftlessness, criminality and poor work ethic were inherited and that people of Nordic ancestry were inherently superior to other peoples, despite an obvious lack of evidence and scientific proof. However, eugenicists were able to persuade the Carnegie Institution and prestigious universities to support their work, thus legitimizing it and creating the perception that their philosophy was, in fact, science.

The eugenics movement became widely seen as a legitimate way to improve society and was supported by such people as Winston Churchill, Margaret Sanger, Theodore Roosevelt and John Harvey Kellogg. Eugenics became an academic discipline at many prominent colleges, including Harvard University, among many others. From the outset, the movement also had critics, including lawyer and civil rights advocate Clarence Darrow as well as scientists who refuted the idea that purity leads to fewer negative gene mutations. Nevertheless, between 1927 and the 1970s, there were more than 60,000 compulsory sterilizations performed in 33 states in the United States; California led the nation with over 20,000. Experts think many more sterilizations were likely performed, but not officially recorded.[2]

Adolf Hitler based some of his early ideas about eugenics on the programs practiced in the United States. He was its most infamous practitioner; the Nazis killed tens of thousands of disabled people and sterilized hundreds of thousands deemed inferior and medically unfit. After World War II and the Holocaust, the American eugenics movement was widely condemned. However, sterilization programs continued in many states until the mid-1970s.

Today, safeguards have been established to ensure that the ethical implications of new technologies are discussed and debated before being employed on a large scale. In this way, the benefits and advances arising from scientific research and medical procedures can be achieved both ethically and humanely. Examples of the efforts of the United States government to ensure that progress in science, research and technology proceeds in an ethical and socially acceptable manner include the Presidential Commission for the Study of Bioethical Issues, the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research (which produced the Belmont Report), and the Ethical, Legal and Social Implications (ELSI) program housed in the National Human Genome Research Institute of the National Institutes of Health (NIH).

Many people fear that new advances in genetics could lead to a new era of eugenics. These advances raise difficult ethical questions, particularly related to reproductive technologies and embryo screening. As science advances, what traits might people be able to choose or select against? Is it acceptable for prospective parents to have a say in which embryos are implanted in a woman's uterus for non-medical reasons? Is it acceptable for society to dictate this decision to prospective parents? Many of the breakthroughs have saved lives and will continue to do so on a grander scale, and we, as a society, need to discuss the complex issues related to genetic technologies. Debate and discussion can be illuminating even though complete consensus about the intersection of genetics and society will be difficult.

This lesson provides students with a historical overview of the American eugenics movement and highlights some of the advances and breakthroughs that have been achieved through genetic and genomic research. Many people fear that new advances in genetics, particularly embryo screening and analysis of fetal DNA, could lead to a new era of eugenics. The goal of this lesson is for students to start discussing these topics so that they can understand the complexity of the issues and engage in conversations that contrast the dangers of eugenics with the benefits that can come from genetic information.

Download lesson plan: Word document or PDF. Download slideshow: PowerPoint slides.

Vermont Eugenics: A Documentary History

This lesson uses primary source documents to explore issues of race, gender and class in the 20th century. It is intended to extend the ideas explored in History, eugenics and genetics. The goal of this lesson is for students to use original sources to understand how the eugenics movement used propaganda to enter mainstream America to promote its agenda, and use critical thinking skills to analyze and interpret the sources.

Download lesson plan: Word document or PDF. Download slideshow: PowerPoint slides.


Cryptocurrency News: This Week on Bitfinex, Tether, Coinbase, & More

Cryptocurrency News
On the whole, cryptocurrency prices are down from our previous report on cryptos, with the market slipping on news of an exchange being hacked and a report about Bitcoin manipulation.

However, there have been two bright spots: 1) an official from the U.S. Securities and Exchange Commission (SEC) said that Ethereum is not a security, and 2) Coinbase is expanding its selection of tokens.

Let’s start with the good news.
SEC Says ETH Is Not a Security
Investors have some reason to cheer this week. A high-ranking SEC official told attendees of the Yahoo! All Markets Summit: Crypto that Ethereum and Bitcoin are not securities.

The post Cryptocurrency News: This Week on Bitfinex, Tether, Coinbase, & More appeared first on Profit Confidential.


Ripple Price Forecast: XRP vs SWIFT, SEC Updates, and More

Ripple vs SWIFT: The War Begins
While most criticisms of XRP do nothing to curb my bullish Ripple price forecast, there is one obstacle that nags at my conscience. Its name is SWIFT.

The Society for Worldwide Interbank Financial Telecommunication (SWIFT) is the king of international payments.

It coordinates wire transfers across 11,000 banks in more than 200 countries and territories, meaning that in order for XRP prices to ascend to $10.00, Ripple needs to launch a successful coup. That is, and always has been, an unwritten part of Ripple’s story.

We’ve seen a lot of progress on that score. In the last three years, Ripple wooed more than 100 financial firms onto its…


Cryptocurrency News: Looking Past the Bithumb Crypto Hack

Another Crypto Hack Derails Recovery
Since our last report, hackers broke into yet another cryptocurrency exchange. This time the target was Bithumb, a Korean exchange known for high-flying prices and ultra-active traders.

While the hackers made off with approximately $31.5 million in funds, the exchange is working with relevant authorities to return the stolen tokens to their respective owners. In the event that some is still missing, the exchange will cover the losses. (Source: “Bithumb Working With Other Crypto Exchanges to Recover Hacked Funds,” …)


Cryptocurrency News: XRP Validators, Malta, and Practical Tokens

Cryptocurrency News & Market Summary
Investors finally saw some light at the end of the tunnel last week, with cryptos soaring across the board. No one quite knows what kicked off the rally—as it could have been any of the stories we discuss below—but the net result was positive.

Of course, prices won’t stay on this rocket ride forever. I expect to see a resurgence of volatility in short order, because the market is moving as a single unit. Everything is rising in tandem.

This tells me that investors are simply “buying the dip” rather than identifying which cryptos have enough real-world value to outlive the crash.

So if you want to know when…
