cloud computing

Making Design Thinking real

I was hired into a multidisciplinary corporate strategy team set up by Hasso Plattner, the chairman of SAP's supervisory board and the only co-founder still with the company. Our mission was to help SAP embrace design thinking in how it built products and processes as well as in how it worked with customers. It was the best multidisciplinary team one could imagine being part of; we were multidisciplinary to a fault, and I used to joke that my team members and I had nothing in common. I am proud to have been part of this journey and of the impact we helped achieve. Over the years we managed to take the double quotes out of "design thinking," making it a default mindset and philosophy in all parts of SAP. It was a testament to the fact that any bold and audacious mission starts with a few simple steps and can be accomplished if there is a small, passionate team behind it striving to make an impact.

Be part of the foundation of something disruptive

Being part of the Office of the CEO, I worked with two CEOs, Henning and Leo, and their respective executive management teams. This was by far the best learning experience of my life. I got an opportunity to work across lines of business and got first-hand exposure to intricate parts of SAP's business. As part of the corporate strategy team I also got an opportunity to work on the Business Objects post-merger integration, especially the joint product vision. Some of that work led to the foundation of one of the most disruptive products SAP later released, SAP HANA.

Fuel the insane growth of SAP HANA

HANA just happened to SAP. The market and competition were not expecting us to do anything in this space. Most people at SAP didn't realize its full potential, and most customers didn't believe it could actually help them. I don't blame them. HANA was such a radically foreign concept that it created feelings of skepticism and enthusiasm at the same time. I took on many different roles and worked extensively with various parts of the organization and with SAP's customers to explore, identify, and realize breakthrough scenarios that exploited the unique and differentiating aspects of HANA.

HANA's value was perceived as helping customers do things better, cheaper, and faster. But I was on an orthogonal, and rather difficult, mission: to help our customers do things they could not have done before, or could not even have imagined doing.

I was fortunate enough to contribute significantly to the early adoption of HANA, which grew from zero to a billion dollars in revenue in three years and went on to become the fastest-growing product in SAP's history. I got a chance to work closely with Vishal Sikka, the CTO of SAP and informally known as the father of HANA, on this endeavor and on many other things. It was also a pleasure to work with some of the most prominent global SAP customers, industry leaders who taught me a lot about their businesses.

Incubate a completely new class of data-first cloud solutions

As HANA started to become the foundation and platform for everything we built at SAP, my team took on a customer-driven role, part accelerator and part incubator, to further leverage the most differentiating aspects of the platform and combine them with machine learning and AI to build new greenfield, data-first cloud solutions that reimagined enterprise scenarios. These solutions created the potential for more sustained revenue in the days to come.

Practice the General Manager model with a start-up mindset

A true General Manager model is rare or non-existent at SAP (and at many other ISVs), but we implemented that model in my team, where I was empowered to run all the functions: engineering, design, product management, product marketing, and business development. I also assumed overall P&L responsibility for the team. The buck stopped with me, and as a team we could make swift business decisions. The team members also felt a strong sense of purpose in how their work helped SAP. Oftentimes people would come up to me and say, "So your team is like a start-up." I would politely tell them that calling my team a start-up would be a great disservice to all the real start-ups out there. However, I tried very hard for us to embrace the start-up culture: small, tight teams; experimentation; rewarding effort and not just outcomes; being mission- and purpose-driven to a fault; breaking things to make them work; insanely short project timelines; and a mid- to long-term vision paired with focused, short-term, extremely agile execution. And we leveraged the biggest asset SAP has: its customers.

Be part of a transformative journey

I was fortunate to witness SAP's successful transformation into a cloud company without compromising margins or revenue, as well as the HANA-led in-memory revolution that not only paved the path for a completely new category of software but also produced the fastest-growing product in SAP's history. These kinds of things simply don't happen to everyone, and I was fortunate to be part of this journey. I have tremendous respect for SAP as a company and for its leaders, especially the CEO Bill McDermott, in what the company has achieved. I'm thankful to all the people who helped and mentored me, and, more importantly, believed in me.

Looking forward to not doing anything, at least for a short period of time

At times, such a long and fast-paced journey somewhat desensitizes you to the real world. I want to slow down, take a step back, and rethink how the current technology storm in Silicon Valley will disrupt the world again, as it always has, and how I can be part of that journey, again. There are also personal projects I have been putting off for a while that I want to tend to. I'm hoping a short break will help me reenergize and see the world differently. When I broke this news to my mom she didn't freak out. I must have made the right decision!

I want to disconnect to reconnect.

I am looking forward to doing away with my commute on 101 during rush hour for a while and to smelling the proverbial roses. I won't miss 6 AM conference calls, but I will certainly miss those cute self-driving Google cars on the streets of Palo Alto. They always remind me of why the valley is such a great place. For a product person, a technology enthusiast, and a generalist like me, who has experienced and practiced all three sides of building software (feasibility, viability, and desirability), the valley is full of promise and immense excitement. In the coming days I am hoping to learn from my friends and from thought leaders, which will eventually lead me to my next tour of duty.

About the picture: I was on a hiking trip to four national parks a few years ago when I took this picture standing in the middle of a road inside Death Valley National Park. The C-curve on a rather straight road is the only place on that long stretch where you can get cell phone reception. Even short hiking trips have helped me gain a new perspective on work and life.

Read more here:

Cloud computing – Simple English Wikipedia, the free encyclopedia

In computer science, cloud computing describes a type of outsourcing of computer services, similar to the way in which electricity supply is outsourced. Users can simply use it. They do not need to worry where the electricity is from, how it is made, or transported. Every month, they pay for what they consumed.

The idea behind cloud computing is similar: the user can simply use storage, computing power, or specially crafted development environments, without having to worry about how these work internally. Cloud computing is usually Internet-based computing. The cloud is a metaphor for the Internet, based on how the Internet is depicted in computer network diagrams, which means it is an abstraction hiding the complex infrastructure of the Internet.[1] It is a type of computing in which IT-related capabilities are provided as a service,[2] allowing users to access technology-enabled services from the Internet ("in the cloud")[3] without knowledge of, or control over, the technologies behind these servers, which can lead to ethical and legal issues.[4]
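To make the "users can simply use it" idea concrete, here is a minimal sketch in Python of a client that stores and retrieves an object through a hosted HTTP storage API. The endpoint, bucket, and key names are hypothetical, and the snippet is only an illustration of the abstraction: the caller sees a URL and a metered bill, not the servers behind it.

```python
# Minimal sketch: consuming a hypothetical cloud storage service over HTTP.
# The caller never sees the hardware, replication, or networking behind the URL.
import requests

BASE_URL = "https://storage.example-cloud.com/my-bucket"  # hypothetical endpoint

def put_object(key: str, data: bytes) -> None:
    # Upload bytes; the provider decides where and how they are stored.
    resp = requests.put(f"{BASE_URL}/{key}", data=data, timeout=10)
    resp.raise_for_status()

def get_object(key: str) -> bytes:
    # Download bytes; usage is typically metered per request and per gigabyte.
    resp = requests.get(f"{BASE_URL}/{key}", timeout=10)
    resp.raise_for_status()
    return resp.content

if __name__ == "__main__":
    put_object("notes.txt", b"hello cloud")
    print(get_object("notes.txt"))
```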

According to a paper published by IEEE Internet Computing in 2008 "Cloud Computing is a paradigm in which information is permanently stored in servers on the Internet and cached temporarily on clients that include computers, laptops, handhelds, sensors, etc."[5] This concept was first introduced by Cynthia Carter of DataNet, Inc. (https://www.slideshare.net/slideshow/embed_code/key/oQ67EY4it49b0s).

Cloud computing is a general concept that utilizes software as a service (SaaS), such as Web 2.0 and other technology trends, all of which depend on the Internet to satisfy users' needs. For example, Google Apps provides common business applications online that are accessed from a web browser, while the software and data are stored on Internet servers.

Cloud computing is often confused with related ideas such as grid computing, utility computing, and autonomic computing.

Cloud computing often uses grid computing, has autonomic characteristics and is billed like utilities, but cloud computing can be seen as a natural next step from the grid-utility model.[8] Some successful cloud architectures have little or no centralised infrastructure or billing systems including peer-to-peer networks like BitTorrent and Skype.[9]

The majority of cloud computing infrastructure currently consists of reliable services delivered through data centers that are built on computer and storage virtualization technologies. The services are accessible anywhere in the world, with The Cloud appearing as a single point of access for all the computing needs of consumers. Commercial offerings need to meet the quality of service requirements of customers and typically offer service level agreements.[10] Open standards and open source software are also critical to the growth of cloud computing.[11]

Customers generally do not own the infrastructure or know all the details about it; they are essentially accessing or renting it, so they can consume resources as a service and pay only for the resources they use. Many cloud computing providers use the utility computing model, which is analogous to how traditional public utilities like electricity are consumed, while others bill on a subscription basis. By sharing consumable and "intangible" computing power between multiple "tenants", utilization rates can be improved (as servers are not left idle), which can reduce costs significantly while increasing the speed of application development.

A side effect of this approach is that "computer capacity rises dramatically" as customers do not have to engineer for peak loads.[12] Adoption has been enabled by "increased high-speed bandwidth" which makes it possible to receive the same response times from centralized infrastructure at other sites.
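A rough, back-of-the-envelope comparison illustrates why not having to engineer for peak loads matters. All of the numbers below are illustrative assumptions, not figures from the article.

```python
# Illustrative comparison: owning servers sized for peak load vs. paying per use.
# Every number here is an assumption chosen only to make the arithmetic visible.

PEAK_SERVERS = 100            # servers needed to survive the busiest hour
AVG_UTILIZATION = 0.10        # typical utilization of owned hardware (~10%)
COST_PER_SERVER_HOUR = 0.50   # assumed cost per server-hour, owned or rented
HOURS_PER_MONTH = 730

# Owned hardware: you pay for peak capacity around the clock.
owned_cost = PEAK_SERVERS * COST_PER_SERVER_HOUR * HOURS_PER_MONTH

# Utility model: you pay only for the server-hours actually consumed.
consumed_server_hours = PEAK_SERVERS * AVG_UTILIZATION * HOURS_PER_MONTH
utility_cost = consumed_server_hours * COST_PER_SERVER_HOUR

print(f"owned:   ${owned_cost:,.0f} per month")    # $36,500
print(f"utility: ${utility_cost:,.0f} per month")  # $3,650
```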

Cloud computing is being driven by providers including Google, Amazon.com, and Yahoo! as well as traditional vendors including IBM, Intel,[13] Microsoft[14] and SAP.[15] It can be adopted by all kinds of users, be they individuals or large enterprises. Most Internet users are currently using cloud services, even if they do not realize it. Webmail, for example, is a cloud service, as are Facebook, Wikipedia, contact list synchronization, and online data backups.

The Cloud[16] is a metaphor for the Internet,[17] or more generally components and services which are managed by others.[1]

The underlying concept dates back to 1960 when John McCarthy expressed his opinion that "computation may someday be organized as a public utility" and the term Cloud was already in commercial use in the early 1990s to refer to large ATM networks.[18] By the turn of the 21st century, cloud computing solutions had started to appear on the market,[19] though most of the focus at this time was on Software as a service.

Amazon.com played a key role in the development of cloud computing when upgrading their data centers after the dot-com bubble and providing access to their systems by way of Amazon Web Services in 2002 on a utility computing basis. They found the new cloud architecture resulted in significant internal efficiency improvements.[20]

2007 saw increased activity, with Google, IBM and a number of universities starting a large-scale cloud computing research project,[21] around the time the term started gaining popularity in the mainstream press. It was a hot topic by mid-2008, and numerous cloud computing events had been scheduled.[22]

In August 2008 Gartner observed that "organizations are switching from company-owned hardware and software assets to per-use service-based models" and that the "projected shift to cloud computing will result in dramatic growth in IT products in some areas and in significant reductions in other areas".[23]

Clouds cross many country borders and "may be the ultimate form of globalisation".[24] As such it is the subject of complex geopolitical issues, whereby providers must satisfy many legal restrictions in order to deliver service to a global market. This dates back to the early days of the Internet, where libertarian thinkers felt that "cyberspace was a distinct place calling for laws and legal institutions of its own"; author Neal Stephenson envisaged this as a tiny island data haven in his science-fiction classic novel Cryptonomicon.[24]

Although there have been efforts to harmonize the legal environment (such as the US-EU Safe Harbor), providers like Amazon Web Services usually deal with international markets (typically the United States and the European Union) by deploying local infrastructure and allowing customers to select their countries.[25] However, there are still concerns about security and privacy, from the individual level up through various levels of government (for example, the USA PATRIOT Act, the use of national security letters, and Title II of the Electronic Communications Privacy Act, the Stored Communications Act).

In March 2007, Dell applied to trademark the term "cloud computing" in the United States. It received a "Notice of Allowance" in July 2008, which was subsequently canceled on August 6, resulting in a formal rejection of the trademark application less than a week later.

In November 2007, the Free Software Foundation released the Affero General Public License (abbreviated as Affero GPL and AGPL), a version of GPLv3 designed to close a perceived legal loophole associated with free software designed to be run over a network, particularly software as a service. Under the AGPL, application service providers are required to release any changes they make to AGPL-covered open source code.

Cloud architecture[26] is the systems architecture of the software systems involved in the delivery of cloud computing (e.g. hardware, software) as designed by a cloud architect who typically works for a cloud integrator. It typically involves multiple cloud components communicating with each other over application programming interfaces (usually web services).[27]

This is very similar to the Unix philosophy of having multiple programs doing one thing well and working together over universal interfaces. Complexity is controlled and the resulting systems are more manageable than their monolithic counterparts.
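As a toy illustration of that Unix-style composition, the sketch below glues together two narrowly scoped, hypothetical web services; the URLs, parameters, and response fields are invented for the example, not taken from any real provider.

```python
# Toy composition of two single-purpose services over HTTP, in the spirit of
# "many programs doing one thing well". Endpoints and fields are hypothetical.
import requests

def geocode(address: str) -> dict:
    # Service 1: turns an address into coordinates.
    r = requests.get("https://geocode.example.com/v1/lookup",
                     params={"q": address}, timeout=5)
    r.raise_for_status()
    return r.json()  # assumed shape: {"lat": ..., "lon": ...}

def forecast(lat: float, lon: float) -> dict:
    # Service 2: returns a weather forecast for coordinates.
    r = requests.get("https://weather.example.com/v1/forecast",
                     params={"lat": lat, "lon": lon}, timeout=5)
    r.raise_for_status()
    return r.json()

def forecast_for(address: str) -> dict:
    # The "application" is just glue between narrowly scoped services.
    loc = geocode(address)
    return forecast(loc["lat"], loc["lon"])
```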

Cloud architecture extends to the client where web browsers and/or software applications are used to access cloud applications.

Cloud storage architecture is loosely coupled, with centralized metadata operations enabling the data nodes to scale into the hundreds, each independently delivering data to applications or users.
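A simplified sketch of that loosely coupled layout: one metadata service records where each object lives, while many data nodes hold the bytes and serve reads independently. The class and method names are invented for illustration.

```python
# Sketch of centralized metadata with independently scaling data nodes.
class MetadataService:
    def __init__(self):
        self.locations = {}            # object key -> data node id

    def place(self, key: str, node_id: int) -> None:
        self.locations[key] = node_id  # centralized metadata operation

    def locate(self, key: str) -> int:
        return self.locations[key]

class DataNode:
    def __init__(self, node_id: int):
        self.node_id = node_id
        self.blobs = {}                # key -> bytes stored on this node

    def write(self, key: str, data: bytes) -> None:
        self.blobs[key] = data

    def read(self, key: str) -> bytes:
        return self.blobs[key]         # served without touching other nodes

# Metadata is consulted once; the data node then serves the request directly.
meta = MetadataService()
nodes = {i: DataNode(i) for i in range(3)}  # could just as well be hundreds

meta.place("photo.jpg", 2)
nodes[2].write("photo.jpg", b"...jpeg bytes...")
print(nodes[meta.locate("photo.jpg")].read("photo.jpg"))
```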

A cloud application leverages the Cloud model of software architecture, often eliminating the need to install and run the application on the customer's own computer, thus reducing software maintenance, ongoing operations, and support.

A cloud client is computer hardware and/or computer software which relies on the Cloud for application delivery, or which is specifically designed for the delivery of cloud services, and which is in either case essentially useless without the Cloud.[33]

Cloud infrastructure (e.g. Infrastructure as a service) is the delivery of computer infrastructure (typically a platform virtualization environment) as a service.[41]

A cloud platform (e.g. Platform as a service, the delivery of a computing platform and/or solution stack as a service)[42] facilitates deployment of applications without the cost and complexity of buying and managing the underlying hardware and software layers.[43]

A cloud service (e.g. Web Service) is "software system[s] designed to support interoperable machine-to-machine interaction over a network"[44] which may be accessed by other cloud computing components, software (e.g. Software plus services) or end users directly.[45]

Cloud storage is the delivery of data storage as a service (including database-like services), often billed on a utility computing basis (e.g. per gigabyte per month).[46]
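As a worked example of per-gigabyte-per-month billing, the sketch below averages daily usage over a month and applies a rate; the price is an assumed figure, not a quoted one.

```python
# Illustrative per-gigabyte-per-month billing; the rate is an assumption.
PRICE_PER_GB_MONTH = 0.02  # assumed: $0.02 per GB stored per month

def monthly_storage_bill(gb_per_day: list, days_in_month: int = 30) -> float:
    # gb_per_day: gigabytes stored on each day of the month.
    # Usage is averaged over the month, then billed per GB-month.
    avg_gb = sum(gb_per_day) / days_in_month
    return avg_gb * PRICE_PER_GB_MONTH

# 500 GB for the first 10 days, then 2,000 GB after a large data load:
usage = [500.0] * 10 + [2000.0] * 20
print(f"${monthly_storage_bill(usage):.2f}")  # average 1,500 GB -> $30.00
```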

Traditional storage vendors have recently begun to offer their own flavor of cloud storage, sometimes in conjunction with their existing software products (e.g. Symantec's Online Storage for Backup Exec). Others focus on providing a new kind of back-end storage optimally designed for delivering cloud storage (EMC's Atmos), categorically known as Cloud Optimized Storage.

A cloud computing provider or cloud computing service provider owns and operates cloud computing systems to serve someone else. Usually this requires building and managing new data centers. Some organisations get some of the benefits of cloud computing by becoming "internal" cloud providers and servicing themselves, though they do not benefit from the same economies of scale and still have to engineer for peak loads. The barrier to entry is also significantly higher, with capital expenditure required, and billing and management create some overhead. However, significant operational efficiency and agility advantages can be achieved even by small organizations, and server consolidation and virtualization rollouts are already in progress.[47] Amazon.com was the first such provider, modernising its data centers which, like most computer networks, were using as little as 10% of their capacity at any one time just to leave room for occasional spikes. This allowed small, fast-moving groups to add new features faster and more easily, and Amazon went on to open up its systems to outsiders as Amazon Web Services in 2002 on a utility computing basis.[20]

The companies listed in the Components section are providers.

A user is a consumer of cloud computing.[33] The privacy of users in cloud computing has become of increasing concern.[48][49] The rights of users are also an issue, which is being addressed via a community effort to create a bill of rights (currently in draft).[50][51]

A vendor sells products and services that facilitate the delivery, adoption and use of cloud computing.[52]

A cloud standard is one of a number of existing (typically lightweight) open standards that have facilitated the growth of cloud computing.[57]

Read the original here:

Cloud computing - Simple English Wikipedia, the free encyclopedia

Cloud computing | computer science | Britannica.com

Cloud computing, method of running application software and storing related data in central computer systems and providing customers or other users access to them through the Internet.

The origin of the expression cloud computing is obscure, but it appears to derive from the practice of using drawings of stylized clouds to denote networks in diagrams of computing and communications systems. The term came into popular use in 2008, though the practice of providing remote access to computing functions through networks dates back to the mainframe time-sharing systems of the 1960s and 1970s. In his 1966 book The Challenge of the Computer Utility, the Canadian electrical engineer Douglas F. Parkhill predicted that the computer industry would come to resemble a public utility in which many remotely located users are connected via communication links to a central computing facility.

For decades, efforts to create large-scale computer utilities were frustrated by constraints on the capacity of telecommunications networks such as the telephone system. It was cheaper and easier for companies and other organizations to store data and run applications on private computing systems maintained within their own facilities.

The constraints on network capacity began to be removed in the 1990s when telecommunications companies invested in high-capacity fibre-optic networks in response to the rapidly growing use of the Internet as a shared network for exchanging information. In the late 1990s, a number of companies, called application service providers (ASPs), were founded to supply computer applications to companies over the Internet. Most of the early ASPs failed, but their model of supplying applications remotely became popular a decade later, when it was renamed cloud computing.

Cloud computing encompasses a number of different services. One set of services, sometimes called software as a service (SaaS), involves the supply of a discrete application to outside users. The application can be geared either to business users (such as an accounting application) or to consumers (such as an application for storing and sharing personal photographs). Another set of services, variously called utility computing, grid computing, and hardware as a service (HaaS), involves the provision of computer processing and data storage to outside users, who are able to run their own applications and store their own data on the remote system. A third set of services, sometimes called platform as a service (PaaS), involves the supply of remote computing capacity along with a set of software-development tools for use by outside software programmers.

Early pioneers of cloud computing include Salesforce.com, which supplies a popular business application for managing sales and marketing efforts; Google, Inc., which in addition to its search engine supplies an array of applications, known as Google Apps, to consumers and businesses; and Amazon Web Services, a division of online retailer Amazon.com, which offers access to its computing system to Web-site developers and other companies and individuals. Cloud computing also underpins popular social networks and other online media sites such as Facebook, MySpace, and Twitter. Traditional software companies, including Microsoft Corporation, Apple Inc., Intuit Inc., and Oracle Corporation, have also introduced cloud applications.

Cloud-computing companies either charge users for their services, through subscriptions and usage fees, or provide free access to the services and charge companies for placing advertisements in the services. Because the profitability of cloud services tends to be much lower than the profitability of selling or licensing hardware components and software programs, it is viewed as a potential threat to the businesses of many traditional computing companies.

Construction of the large data centres that run cloud-computing services often requires investments of hundreds of millions of dollars. The centres typically contain thousands of server computers networked together into parallel-processing or grid-computing systems. The centres also often employ sophisticated virtualization technologies, which allow computer systems to be divided into many virtual machines that can be rented temporarily to customers. Because of their intensive use of electricity, the centres are often located near hydroelectric dams or other sources of cheap and plentiful electric power.

Because cloud computing involves the storage of often sensitive personal or commercial information in central database systems run by third parties, it raises concerns about data privacy and security as well as the transmission of data across national boundaries. It also stirs fears about the eventual creation of data monopolies or oligopolies. Some believe that cloud computing will, like other public utilities, come to be heavily regulated by governments.

Follow this link:

Cloud computing | computer science | Britannica.com

Human Genetic Engineering – Probe Ministries

Although much has occurred in this field since this article was written in 2000, the questions addressed by Dr. Bohlin are still timely and relevant. Is manipulating our genetic code simply a tool or does it deal with deeper issues? Dealing with genetic engineering must be done within the context of the broader ethical and theological issues involved. In the article, Dr. Bohlin provides an excellent summary driven from his biblical worldview perspective.

Genetic technology harbors the potential to change the human species forever. The soon to be completed Human Genome Project will empower genetic scientists with a human biological instruction book. The genes in all our cells contain the code for proteins that provide the structure and function to all our tissues and organs. Knowing this complete code will open new horizons for treating and perhaps curing diseases that have remained mysteries for millennia. But along with the commendable and compassionate use of genetic technology comes the specter of both shadowy purposes and malevolent aims.

For some, the potential for misuse is reason enough for closing the door completely; the benefits just aren't worth the risks. In this article, I'd like to explore the application of genetic technology to human beings and apply biblical wisdom to the eventual ethical quagmires that are not very far away. In this section we'll investigate the various ways humans can be engineered.

Since we have introduced foreign genes into the embryos of mice, cows, sheep, and pigs for years, there's no technological reason to suggest that it can't be done in humans too. Currently, there are two ways of pursuing gene transfer. One is simply to attempt to alleviate the symptoms of a genetic disease. This entails gene therapy, attempting to transfer the normal gene into only those tissues most affected by the disease. For instance, bronchial infections are the major cause of early death for patients with cystic fibrosis (CF). The lungs of CF patients produce thick mucus that provides a great growth medium for bacteria and viruses. If the normal gene can be inserted into the cells of the lungs, perhaps both the quality and quantity of their life can be enhanced. But this is not a complete cure, and they will still pass the CF gene on to their children.

In order to cure a genetic illness, the defective gene must be replaced throughout the body. If the genetic defect is detected in an early embryo, it's possible to add the gene at this stage, allowing the normal gene to be present in all tissues including reproductive tissues. This technique has been used to add foreign genes to mice, sheep, pigs, and cows.

However, at present, no laboratory is known to be attempting this well-developed technology in humans. Princeton molecular biologist Lee Silver offers two reasons.{1} First, even in animals, it only works 50% of the time. Second, even when successful, about 5% of the time the new gene gets placed in the middle of an existing gene, creating a new mutation. Currently these odds are not acceptable to scientists, and especially to potential clients hoping for genetic engineering of their offspring. But these are only problems of technique. It's reasonable to assume that these difficulties can be overcome with further research.
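Taking the article's two figures at face value, a quick calculation shows why the combined odds look unacceptable. This is just arithmetic on the numbers quoted above, not an additional claim about the biology.

```python
# Arithmetic on the figures quoted above: 50% success, and a 5% chance that a
# successful insertion lands inside an existing gene and creates a new mutation.
p_success = 0.50
p_new_mutation_given_success = 0.05

p_clean_success = p_success * (1 - p_new_mutation_given_success)
print(f"clean success:           {p_clean_success:.1%}")      # 47.5%
print(f"failure or new mutation: {1 - p_clean_success:.1%}")  # 52.5%
```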

The primary use for human genetic engineering concerns the curing of genetic disease. But even this should be approached cautiously. Certainly within a Christian worldview, relieving suffering wherever possible is to walk in Jesus' footsteps. But what diseases? How far should our ability to interfere in life be allowed to go? So far gene therapy is primarily being tested for debilitating and ultimately fatal diseases such as cystic fibrosis.

The first gene therapy trial in humans corrected a life-threatening immune disorder in a two-year-old girl who, now ten years later, is doing well. The gene therapy required dozens of applications but has saved the family from the $60,000-per-year drug treatment that would otherwise have been necessary.{2} Recently, sixteen heart disease patients, who were literally waiting for death, received, by injection straight into the heart, a solution containing copies of a gene that triggers blood vessel growth. By growing new blood vessels around clogged arteries, all sixteen showed improvement and six were completely relieved of pain.

In each of these cases, gene therapy was performed as a last resort for a fatal condition. This seems to fall easily within the medical boundaries of seeking to cure while at the same time causing no harm. The problem will arise when gene therapy is sought to alleviate a condition that is less than life-threatening and perhaps considered by some to be simply one of life's inconveniences, such as a gene that may offer resistance to AIDS or may enhance memory. Such genes are known now, and many are suggesting that these goals will and should be available for gene therapy.

The most troublesome aspect of gene therapy has been determining the best method of delivering the gene to the right cells and enticing them to incorporate the gene into the cells' chromosomes. Most researchers have used crippled forms of viruses that naturally incorporate their genes into cells. The entire field of gene therapy was dealt a severe setback in September 1999 upon the death of Jesse Gelsinger, who had undergone gene therapy for an inherited enzyme deficiency at the University of Pennsylvania.{3} Jesse apparently suffered a severe immune reaction and died four days after being injected with the engineered virus.

The same virus vector had been used safely in thousands of other trials, but in this case, after releasing stacks of clinical data and answering questions for two days, the researchers didn't fully understand what had gone wrong.{4} Other institutions were also found to have failed to file immediate reports, as required, of serious adverse events in their trials, prompting a congressional review.{5} All this indicates that the technical problems of gene therapy have not been solved, and progress will be slowed as guidelines and reporting procedures are studied and reevaluated.

The simple answer is no, at least for the foreseeable future. Gene therapy currently targets existing tissue in an existing child or adult. This may alleviate or eliminate symptoms in that individual, but it will not affect future children. To accomplish a correction for future generations, gene therapy would need to target the germ cells, the sperm and egg. This poses numerous technical problems at the present time. There is also a very real concern about making genetic decisions for future generations without their consent.

Some would seek to get around these difficulties by performing gene therapy in early embryos before tissue differentiation has taken place. This would allow the new gene to be incorporated into all tissues, including reproductive organs. However, this process does nothing to alleviate the condition of those already suffering from genetic disease. Also, as mentioned earlier this week, this procedure would put embryos at unacceptable risk due to the inherent rate of failure and potential damage to the embryo.

Another way to affect germ line gene therapy would involve a combination of gene therapy and cloning.{6} An embryo, fertilized in vitro, from the sperm and egg of a couple at risk for sickle-cell anemia, for example, could be tested for the sickle-cell gene. If the embryo tests positive, cells could be removed from this early embryo and grown in culture. Then the normal hemoglobin gene would be added to these cultured cells.

If the technique for human cloning could be perfected, then one of these cells could be cloned to create a new individual. If the cloning were successful, the resulting baby would be an identical twin of the original embryo, only with the sickle-cell gene replaced with the normal hemoglobin gene. This would result in a normal healthy baby. Unfortunately, the initial embryo was sacrificed to allow the engineering of its identical twin, an ethically unacceptable trade-off.

So what we have seen is that even human gene therapy is not a long-term solution, but a temporary and individual one. But even in condoning the use of gene therapy for therapeutic ends, we need to be careful that those for whom gene therapy is unavailable, whether for ethical or monetary reasons, don't get pushed aside. It would be easy to shun those with uncorrected defects as less than desirable or even less than human. There is, indeed, much to think about.

The possibility of someone or some government utilizing the new tools of genetic engineering to create a superior race of humans must at least be considered. We need to emphasize, however, that we simply do not know what genetic factors determine popularly desired traits such as athletic ability, intelligence, appearance, and personality. Each of these surely has a significant component that may be available for genetic manipulation, but it's safe to say that our knowledge of each of these traits is in its infancy.

Even as knowledge of these areas grows, other genetic qualities may prevent their engineering. So far, few genes have been found to have only a single function in the body. Most genes have multiple effects, sometimes in different tissues. Therefore, engineering a gene for enhancement of a particular trait, say memory, may inadvertently cause increased susceptibility to drug addiction.

But what if, in the next 50 to 100 years, many of these unknowns can be anticipated and engineering for advantageous traits becomes possible? What can we expect? Our concern is that without a redirection of the worldview of the culture, there will be a growing propensity to want to take over the evolution of the human species. The way many people see it, we are simply upright, large-brained apes. There is no such thing as an independent mind; the mind becomes simply a physical construct of the brain. While the brain is certainly complicated and our level of understanding of its intricate machinery grows daily, some hope that in the future we may comprehend enough to change who and what we are as a species in order to meet the future demands of survival.

Edward O. Wilson, a Harvard entomologist, believes that we will soon be faced with difficult genetic dilemmas. Because of expected advances in gene therapy, we will not only be able to eliminate or at least alleviate genetic disease, we may be able to enhance certain human abilities such as mathematical or verbal ability. He says, "Soon we must look deep within ourselves and decide what we wish to become."{7} As early as 1978, Wilson reflected on our eventual need to decide how human we wish to remain.{8}

Surprisingly, Wilson predicts that future generations will opt only for the repair of disabling disease and stop short of genetic enhancements. His only rationale, however, is a question: why should a species give up the defining core of its existence, built by millions of years of biological trial and error?{9} Wilson is naively optimistic. There are loud voices already claiming that man can intentionally engineer our evolutionary future better than chance mutations and natural selection can. The time to change the course of this slow train to destruction is now, not later.

Many of the questions surrounding the ethical use of genetic engineering practices are difficult to answer with a simple yes or no. This is one of them. The answer revolves around the method used for sex selection and the timing of the selection itself.

For instance, if the sex of a fetus is determined and deemed undesirable, it can only be rectified by termination of the embryo or fetus, either in the lab or in the womb by abortion. There is every reason to prohibit this process. First, an innocent life has been sacrificed. The principle of the sanctity of human life demands that a new innocent life not be killed for any reason apart from saving the life of the mother. Second, even in this country where abortion is legal, one would hope that restrictions would be put in place to prevent the taking of a life simply because it's the wrong sex.

However, procedures do exist that can separate sperm that carry the Y chromosome from those that carry the X chromosome. Eggs fertilized by sperm carrying the Y will be male, and eggs fertilized by sperm carrying the X will be female. If the sperm sample used to fertilize an egg has been selected for the Y chromosome, you simply increase the odds of having a boy (~90%) over a girl. So long as the couple is willing to accept either a boy or a girl and will not discard the embryo or abort the baby if it's the wrong sex, it's difficult to say that such a procedure should be prohibited.

One reason to utilize this procedure is to reduce the risk of a sex-linked genetic disease. Color-blindness, hemophilia, and fragile X syndrome can be due to mutations on the X chromosome. Therefore, males (with only one X chromosome) are much more likely to suffer from these traits when either the mother is a carrier or the father is affected. (In females, the second X chromosome will usually carry the normal gene, masking the mutated gene on the other X chromosome.) Selecting for a girl by sperm selection greatly reduces the possibility of having a child with any of these genetic diseases. Again, it's difficult to argue against the desire to reduce suffering when a life has not been forfeited.
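The inheritance pattern described here can be made concrete with a small enumeration for one common case, a carrier mother and an unaffected father. This is the standard X-linked recessive cross, sketched here as an illustration rather than drawn from the article.

```python
# X-linked recessive inheritance for a carrier mother (X*X) and unaffected
# father (XY), where X* marks the chromosome carrying the mutation.
from itertools import product

mother = ["X*", "X"]  # each egg carries one of these, with equal probability
father = ["X", "Y"]   # each sperm carries one of these, with equal probability

for m, f in product(mother, father):
    child = {m, f}
    sex = "boy" if "Y" in child else "girl"
    if sex == "boy":
        status = "affected" if "X*" in child else "unaffected"
    else:
        status = "carrier" if "X*" in child else "non-carrier"
    print(sex, status)

# Half of the boys are affected; none of the girls are (half are carriers),
# which is why selecting for a girl reduces the chance of an affected child.
```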

But we must ask, is sex determination by sperm selection wise? A couple that already has a boy and simply wants a girl to balance their family seems innocent enough. But why is this important? What fuels this desire? It's dangerous to take more and more control over our lives and leave the sovereignty of God far behind. This isn't a situation of life and death or even of reducing suffering.

But while it may be difficult to find anything seriously wrong with sex selection, it's also difficult to find anything good about it. Even when the purpose may be to avoid a sex-linked disease, we run the risk of communicating to others affected by these diseases that, because they could have been avoided, their life is somehow less valuable. So while it may not be prudent to prohibit such practices, they certainly should not be approached casually either.

Notes

1. Lee Silver, Remaking Eden: Cloning and Beyond in a Brave New World, New York, NY: Avon Books, pp. 230-231.
2. Leon Jaroff, "Success stories," Time, 11 January 1999, pp. 72-73.
3. Sally Lehrman, "Virus treatment questioned after gene therapy death," Nature, Vol. 401 (7 October 1999): 517-518.
4. Eliot Marshall, "Gene therapy death prompts review of adenovirus vector," Science, Vol. 286 (17 December 1999): 2244-2245.
5. Meredith Wadman, "NIH under fire over gene-therapy trials," Nature, Vol. 403 (20 January 1999): 237.
6. Steve Mirsky and John Rennie, "What cloning means for gene therapy," Scientific American, June 1997, pp. 122-123.
7. Ibid., p. 277.
8. Edward Wilson, On Human Nature, Cambridge, Mass.: Harvard University Press, p. 6.
9. E. Wilson, Consilience, p. 277.

© 2000 Probe Ministries

Go here to see the original:

Human Genetic Engineering - Probe Ministries

Human Genetic Engineering | APNORC.org | APNORC.org

Americans favor the use of gene editing to prevent disease or disabilities, while there is strong opposition to using the technology to change a baby's physical characteristics, such as eye color or intelligence. Support for eradicating disease and disabilities was strong regardless of party identification, education, or religious preference. The same holds true for the opposition to altering genes in order to change physical features or capabilities.

Americans hold similar views about the ethics of gene editing. About 6 in 10 consider editing the genes of embryos for the purpose of preventing or reducing the risk of disease to be morally acceptable. Fifty-four percent say using the technology to prevent a non-fatal condition such as blindness is morally acceptable. Two-thirds say it is morally unacceptable to use gene editing to change a baby's physical features or characteristics.

What about altering an adult's genetic material without changing the genes of their offspring? The idea of using gene editing technology to prevent or cure a genetic disorder in an adult is supported by 56 percent, opposed by 17 percent, and 27 percent neither favor nor oppose.

While Americans favor using gene editing to deal with physical ailments, there is less support for the use of taxpayer money to finance testing on human embryos to develop the technology. Overall, 48 percent oppose federal funding to test gene editing technology, while 26 percent favor it and 25 percent neither favor nor oppose. Republicans are particularly against using government money for the development of gene editing.

Regardless of support for the technology, there are some concerns about possible ramifications. Fifty-two percent say the unethical use of gene editing is very likely, and 45 percent think it's very likely the technology would have unintended effects on human evolution. Few think it's likely that most people would be able to afford the technology.

Most Americans say it is at least somewhat likely that the development of gene editing technology will lead to further medical advances, eliminate many genetic illnesses, and be adequately tested.

The nationwide poll was conducted December 13-16, 2018 using the AmeriSpeak Panel, the probability-based panel of NORC at the University of Chicago. Online and telephone interviews using landlines and cell phones were conducted with 1,067 adults. The margin of sampling error is plus or minus 4.1 percentage points.
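For reference, the reported margin of error is broadly consistent with the usual survey formula once a design effect from weighting is included. The calculation below is a sketch added here, not part of the original report.

```python
# Margin of error at 95% confidence for a proportion near 0.5 with n = 1,067.
import math

n = 1067
p = 0.5   # worst-case proportion
z = 1.96  # 95% confidence

simple_moe = z * math.sqrt(p * (1 - p) / n)
print(f"simple random sample MoE: {simple_moe * 100:.1f} points")  # ~3.0 points

# The reported +/- 4.1 points implies a design effect (from weighting, etc.):
design_effect = (0.041 / simple_moe) ** 2
print(f"implied design effect: {design_effect:.2f}")  # ~1.9
```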

Read the original:

Human Genetic Engineering | APNORC.org | APNORC.org

Surviving Survivalism | Commonweal Magazine

The book-blurb version of Educated ends here, a Hillbilly Elegy-meets-Pygmalion tale of an improbable intellectual coming-of-age. However, Westover digs deeper in this memoir. She wants to tell the story of her soul, not her accomplishments, and she writes surprisingly little of her life at Cambridge. Instead, like a tongue probing a sore tooth, her narrative returns to Bucks Peak, where her identity still lies. How can she break from her family without losing herself?

The answer does not come easily. Throughout her teens and into her graduate-school years, Westover suffers tremendous physical and psychological abuse at the hands of her older brother Shawn. He rages violently, and when Tara starts to wear makeup and date, she bears the brunt of his manic outbursts, which often leave her with broken bones and bruises. Westover's encounter, in college, with early feminist thinkers like John Stuart Mill and Mary Wollstonecraft gives her a language to understand this experience. When she brings the abuse to her father's attention, however, he refuses to acknowledge it, and eventually the entire family, except for one brother, turns on her. Isolated and on the verge of psychological collapse while on a fellowship at Harvard, Westover begins to doubt her own memory: perhaps she imagined the abuse?

If we are made to mistrust our memories, how do we know who we are? Westover's entire memoir wrestles with this question. In doing so, she follows the lead of the West's first autobiographer, St. Augustine. In Book 10 of the Confessions, he ties memory to identity and, specifically, to language:

[W]hen a true account is given of past events, what is brought forth from the memory is not the events themselves, which have passed away, but words formed from images of those events which as they happened and went on their way left some kind of traces in the mind.

It is fitting, then, that writing in her journal saves Tara Westover. After one particularly humiliating incident that involves Shawn dragging her through a parking lot, she decides, for the first time, to record the abuse in her journal, not just in the vague, shadowy language she usually uses to conceal the abuse from herself, but in terms of what actually happened. This action, she later writes, would change everything.

Even when it occurs in a private journal, writing is a communal activity, for the simple reason that language itself is. Putting her private experience into language enables Westover's subjectivity to become objectivity; she cannot erase its meaning, no matter how much she would like to. Writing the truth helps her realize that her voice might be as strong as the other ones that had narrated her life to that point.

Honesty defines Westover's voice, and redeems her book from occasional lapses into cliché. When she recounts scrawling verses of Bob Marley's "Redemption Song" into her notebook for inspiration, readers are tempted to roll their eyes (attending a college party or two has a way of removing any illusions of Marley as a prophet). But to Westover, for whom the singer is an unknown until that point, "Emancipate yourself from mental slavery" is as fresh and charged with meaning as any line from Locke, Hume, or Rousseau. And though she has every reason to turn her family into villains, she tempers their faults with genuine affection, even for Shawn, whose violent paroxysms were often followed by moments of poignant tenderness.

For the ancient Greeks, education implied much more than our modern conception of receiving information or gaining experience. It meant entering into the patterns of the larger community, so that one becomes an individual only by learning from others and from the past. The Greeks called this process paideia, and it is precisely this concept that Homer's Cyclopes lacked. Their caves may have granted them autonomy, but they were not individuals, because they lacked culture and community.

Educated is Tara Westover's account of becoming an individual through paideia. She ventures out into the world to discover her identity, and finds it only by making herself vulnerable to the truth, no matter where it lies or how painful it is. Her memoir provides a captivating account of her gradual discovery of an essentially Catholic truth: that we exist in relation to others and to the world around us.

Educated: A Memoir, by Tara Westover, Random House, $28, 352 pp.

Read this article:

Surviving Survivalism | Commonweal Magazine

Immortality | philosophy and religion | Britannica.com

Immortality, in philosophy and religion, the indefinite continuation of the mental, spiritual, or physical existence of individual human beings. In many philosophical and religious traditions, immortality is specifically conceived as the continued existence of an immaterial soul or mind beyond the physical death of the body.

The earlier anthropologists, such as Sir Edward Burnett Tylor and Sir James George Frazer, assembled convincing evidence that the belief in a future life was widespread in the regions of primitive culture. Among most peoples the belief has continued through the centuries. But the nature of future existence has been conceived in very different ways. As Tylor showed, in the earliest known times there was little, often no, ethical relation between conduct on earth and the life beyond. Morris Jastrow wrote of the almost complete absence of all ethical considerations in connection with the dead in ancient Babylonia and Assyria.

In some regions and early religious traditions, it came to be declared that warriors who died in battle went to a place of happiness. Later there was a general development of the ethical idea that the afterlife would be one of rewards and punishments for conduct on earth. So in ancient Egypt at death the individual was represented as coming before judges as to that conduct. The Persian followers of Zoroaster accepted the notion of Chinvat peretu, or the Bridge of the Requiter, which was to be crossed after death and which was broad for the righteous and narrow for the wicked, who fell from it into hell. In Indian philosophy and religion, the steps upward, or downward, in the series of future incarnated lives have been (and still are) regarded as consequences of conduct and attitudes in the present life (see karma). The idea of future rewards and punishments was pervasive among Christians in the Middle Ages and is held today by many Christians of all denominations. In contrast, many secular thinkers maintain that the morally good is to be sought for itself and evil shunned on its own account, irrespective of any belief in a future life.

That the belief in immortality has been widespread through history is no proof of its truth. It may be a superstition that arose from dreams or other natural experiences. Thus, the question of its validity has been raised philosophically from the earliest times that people began to engage in intelligent reflection. In the Hindu Katha Upanishad, Naciketas says: "This doubt there is about a man departed; some say: He is; some: He does not exist. Of this would I know." The Upanishads, the basis of most traditional philosophy in India, are predominantly a discussion of the nature of humanity and its ultimate destiny.

Immortality was also one of the chief problems of Plato's thought. With the contention that reality, as such, is fundamentally spiritual, he tried to prove immortality, maintaining that nothing could destroy the soul. Aristotle conceived of reason as eternal but did not defend personal immortality, as he thought the soul could not exist in a disembodied state. The Epicureans, from a materialistic standpoint, held that there is no consciousness after death, and it is thus not to be feared. The Stoics believed that it is the rational universe as a whole that persists. Individual humans, as the Roman emperor Marcus Aurelius wrote, simply have their allotted periods in the drama of existence. The Roman orator Cicero, however, finally accepted personal immortality. St. Augustine of Hippo, following Neoplatonism, regarded human beings' souls as being in essence eternal.

The Islamic philosopher Avicenna declared the soul immortal, but his coreligionist Averroës, keeping closer to Aristotle, accepted the eternity only of universal reason. St. Albertus Magnus defended immortality on the ground that the soul, in itself a cause, is an independent reality. John Scotus Erigena contended that personal immortality cannot be proved or disproved by reason. Benedict de Spinoza, taking God as ultimate reality, as a whole maintained his eternity but not the immortality of individual persons within him. The German philosopher Gottfried Wilhelm Leibniz contended that reality is constituted of spiritual monads. Human beings, as finite monads, not capable of origination by composition, are created by God, who could also annihilate them. However, because God has planted in humans a striving for spiritual perfection, there may be faith that he will ensure their continued existence, thus giving them the possibility to achieve it.

The French mathematician and philosopher Blaise Pascal argued that belief in the God of Christianityand accordingly in the immortality of the soulis justified on practical grounds by the fact that one who believes has everything to gain if he is right and nothing to lose if he is wrong, while one who does not believe has everything to lose if he is wrong and nothing to gain if he is right. The German Enlightenment philosopher Immanuel Kant held that immortality cannot be demonstrated by pure reason but must be accepted as an essential condition of morality. Holiness, the perfect accordance of the will with the moral law, demands endless progress only possible on the supposition of an endless duration of the existence and personality of the same rational being (which is called the immortality of the soul). Considerably less-sophisticated arguments both before and after Kant attempted to demonstrate the reality of an immortal soul by asserting that human beings would have no motivation to behave morally unless they believed in an eternal afterlife in which the good are rewarded and the evil are punished. A related argument held that denying an eternal afterlife of reward and punishment would lead to the repugnant conclusion that the universe is unjust.

In the late 19th century, the concept of immortality waned as a philosophical preoccupation, in part because of the secularization of philosophy under the growing influence of science.

More:

Immortality | philosophy and religion | Britannica.com

Immortality | Superpower Wiki | FANDOM powered by Wikia

Teitoku Kakine (A Certain Magical Index) achieved a form of immortality by creating human tissue (and a new body) out of his Dark Matter.

Ladylee (A Certain Magical Index) is immortal: when she grew weary of living, she sought to use powerful magic to kill herself, which did not work.

Tenzen Yakushiji (Basilisk) having his symbiote "eat" away his wounds and restoring any ravages of time or battle, even reattaching his head by sealing the cut.

10 years after Tenzen's death, Joujin (Basilisk) gained the symbiote that was Tenzen's spirit, "eating" away any wounds and the ravages of aging, the same way Tenzen's symbiote did.

Skull Knight (Berserk) is the mysterious 1,000 year old enemy of the God Hand and Apostles.

Nosferatu Zodd (Berserk), the 300 year old "God of the Battlefields and Combat".

Wyald (Berserk), the 100 year old leader of the Black Dog Knights.

Behelits (Berserk) are stone fetishes of unknown supernatural origin said to govern the fate of humanity. They are used primarily for summoning the angels of the God Hand, at which point their owners are granted a wish in exchange for a sacrifice.

Creed Diskenth (Black Cat) possesses the God's Breath nano-machines within his body, regenerating even fatal wounds in seconds and maintaining his youth, thus granting him immortality aside from any brain damage being irreparable.

Sōsuke Aizen (Bleach) gained immortality after fusing with the Hōgyoku.

C.C (Code Geass) is immortal.

V.V (Code Geass) is immortal.

Majin Buu (Dragon Ball series) is nigh-impossible to kill unless every last molecule of his being is vaporized at once.

Due to the contradiction caused by the fusion of the absolutely immortal Zamasu and the mortal Goku Black, Fused Zamasu (Dragon Ball Super) has imperfect immortality.

Zeref (Fairy Tail) was cursed by Ankhseram with his contradiction curse which gives him uncontrollable Death Magic and Immortality.

Mavis Vermilion (Fairy Tail) was cursed with immortality after casting an incomplete Law spell.

Kagerō (Flame of Recca) used a forbidden spell that opens a time portal, but it trapped her outside of space-time, rendering her completely immortal.

The Truth (Fullmetal Alchemist) is invincible, immortal and invulnerable.

Utsuro (Gintama) possesses immortality by harnessing the Altana energy of Earth to prevent aging and recover from wounds and diseases.

Kouka (Gintama) possessed immortality by harnessing the Altana energy of Kouan to prevent aging and recover from wounds and diseases. However, when she left the planet for good, she weakened over time and died.

China (Hetalia) is the only nation stated to be truly immortal.

Dio Brando (JoJo's Bizarre Adventure) became a vampire and gained immortality by using the Stone Mask.

The Stone Mask (JoJo's Bizarre Adventure Parts I Phantom Blood and II Battle Tendency).

The Pillar Men (JoJo's Bizarre Adventure Part II Battle Tendency)

Through the unknown power of his Stand or since merging with DIO's flesh bud, Nijimura's Father (Jojo's Bizarre Adventure Part IV Diamonds Are Unbreakable) is effectively immortal and possess extraordinary healing capabilities.

Yūta (Mermaid Saga) is a 500-year-old immortal, having unwittingly eaten mermaid's flesh.

Mana (Mermaid Saga) is a 15-year-old immortal, having been fed mermaid's flesh.

Masato (Mermaid Saga) is an 800-year-old immortal, having eaten mermaid's flesh.

Setsuna F. Seiei (Mobile Suit Gundam 00 The Movie - A wakening of the Trailblazer)

Ban, the Undead (Nanatsu no Taizai) acquired immortality after drinking the Fountain of Youth.

Meliodas (Nanatsu no Taizai) was cursed with immortality by the Demon King.

Orochimaru (Naruto) considers himself immortal, with his Living Corpse Reincarnation to transfer his soul to another body and his Cursed Seals as anchors of his consciousness.

Hidan's (Naruto) main advantage is his inability to die from physical damage, though he is vulnerable to death by lack of nutrients.

Kakuzu (Naruto) attained a form of immortality (though he refuses to think of it as such) by tearing hearts out of others and integrating them into himself, extending his lifespan. He kept five inside him at all times.

Madara Uchiha (Naruto) claims to have achieved complete immortality due to hosting the Shinju, as he regenerated from his torso being blown apart. Only when the tailed beasts were all pulled out of him did he die.

Kaguya Ōtsutsuki (Naruto) is immortal, in that she has tremendous regenerative powers, and the only way to defeat her is to seal her away by splitting her chakra into the nine tailed beasts.

Gemma Himuro (Ninja Scroll) can put his severed body parts back together, even his head, rendering him immortal.

Due to her race, Jibril (No Game No Life) has reached 6,407 years of age; she possesses incredibly vast knowledge and high magical ability, gathering knowledge both old and new, and can no longer age or die.

Yume Hasegawa (Pupa) is an immortal monster incarnated in human form, possessing regenerative abilities that render her very difficult to kill.

Utsutsu Hasegawa (Pupa) has been fed the flesh of his immortal sister, giving him tremendous regenerative powers that make him more or less immortal.

Rin Asogi (RIN ~Daughters of Mnemosyne~) is immortal, due to a magic spore from Yggdrasil.

Free (Soul Eater) is a werewolf from the Immortal Clan, and therefore, immortal. He can only be harmed and killed by the "Witch-Hunt".

Kojō Akatsuki (Strike the Blood) is revealed to be immortal, even by vampire standards, after regenerating from complete decapitation.

Tōta Konoe (UQ Holder) cannot regrow limbs unless they are completely destroyed, but is otherwise immortal and can reattach any of them, including his head.

Karin Yūki (UQ Holder) has one of the highest-ranked forms of immortality, stating that she's "not permitted to get hurt or die".

Elder Toguro (Yu Yu Hakusho) stated that his regenerative powers keep him from dying. This prevented him from dying from Kurama's torturous Sinning Tree.

Pros and Cons of Genetic Engineering – Conserve Energy Future

Genetic engineering is the process of altering the structure and nature of genes in human beings, animals, or foods using techniques like molecular cloning and transformation. In other words, it is the process of adding or modifying DNA in an organism to bring about a great deal of change.

Genetic engineering was thought to be a real problem just a few short years ago. We feared that soon we would be interfering with nature, trying to play God and cheat him out of his chance to decide whether we were blonde or dark haired, whether we had blue or bright green eyes or even how intelligent we were. The queries and concerns that we have regarding such an intriguing part of science are still alive and well, although they are less talked about nowadays than they were those few years ago.

However, this does not mean that they are any less relevant. In fact, they are as relevant today as they ever were. There are a number of very real and very troubling concerns surrounding genetic engineering, although there are also some very real benefits to further genetic engineering and genetic research, too. It seems, therefore, as though genetic engineering is both a blessing and a curse, as though we stand to benefit as well as lose from developing this area of science even further.

With genetic engineering, we will be able to increase the complexity of our DNA, and improve the human race. But it will be a slow process, because one will have to wait about 18 years to see the effect of changes to the genetic code. – Stephen Hawking

Although at first the pros of genetic engineering may not be as apparent as the cons, upon further inspection there are a number of benefits that we can only get if scientists continue to study and advance this particular branch of science. Here are just a few of the benefits:

1. Tackling and Defeating Diseases

Some of the most deadly and difficult diseases in the world, which have so far resisted eradication, could be wiped out by the use of genetic engineering. There are a number of genetic mutations that humans can suffer from that will probably never be eliminated unless we actively intervene and genetically engineer the next generation to withstand these problems.

For instance, Cystic Fibrosis, a progressive and dangerous disease for which there is no known cure, could be completely cured with the help of selective genetic engineering.

2. Getting Rid of All Illnesses in Young and Unborn Children

There are very many problems that we can detect even before children are born. In the womb, doctors can tell whether your baby is going to suffer from sickle cell anemia, for instance, or from Down's syndrome. In fact, the date by which you can have an abortion has been pushed back relatively late just so that people can decide whether or not to abort a baby if it has one or more of these sorts of issues.

However, with genetic engineering, we would no longer have to worry. One of the main benefits of genetic engineering is that it can help cure diseases and illnesses in unborn children. All children would be able to be born healthy and strong, with no diseases or illnesses present at birth. Genetic engineering can also be used to help people who risk passing on terribly degenerative diseases to their children.

For instance, if you have Huntington's, there is a 50% chance that your children will inherit the disease and, even if they do not, they are likely to be carriers of it. You cannot simply stop people from having children if they suffer from a disease like this, so genetic engineering can help to ensure that their children live long and healthy lives, free from either the disease itself or from carrying it on to younger generations.

3. Potential to Live Longer

Although humans are already living longer and longer (in fact, our lifespan has shot up by a number of years in a very short amount of time because of the advances of modern medical science), genetic engineering could make our time on Earth even longer. There are specific, common illnesses and diseases that can take hold later in life and can end up killing us earlier than necessary.

With genetic engineering, on the other hand, we could reverse some of the most basic reasons for the body's natural decline on a cellular level, drastically improving both the span of our lives and the quality of life later on. It could also help humans adapt to growing problems such as global warming.

If the places we live in become either a lot hotter or colder, we are going to need to adapt, but evolution takes many thousands of years, so genetic engineering can help us adapt more quickly and effectively.

4. Produce New Foods

Genetic engineering is not just good for people. With genetic engineering we can design foods that are better able to withstand harsh temperatures (whether very hot or very cold, for instance) and that are packed full of all the right nutrients that humans and animals need to survive. We may also be able to give our foods better medicinal value, thus making edible vaccines readily available to people all over the world.

Perhaps more obvious than the pros of genetic engineering, there are a number of disadvantages to allowing scientists to break down barriers that perhaps are better left untouched. Here are just a few of those disadvantages:

1. Is it Right?

When genetic engineering first became possible, people's first reaction was to immediately question whether it was right. Many religions believe that genetic engineering, after all, is tantamount to playing God, and expressly forbid it being performed on their children, for instance.

Besides the religious arguments, however, there are a number of ethical objections. These diseases, after all, exist for a reason and have persisted throughout history for a reason. Whilst we should be fighting against them, we do need at least a few illnesses, otherwise we would soon become overpopulated. In fact, living longer is already causing social problems in the world today, so to artificially extend everybody's time on Earth might cause even more problems further down the line, problems that we cannot possibly predict.

2. May Lead to Genetic Defects

Another real problem with genetic engineering is the question about the safety of making changes at the cellular level. Scientists do not yet know absolutely everything about the way that the human body works (although they do, of course, have a very good idea). How can they possibly understand the ramifications of slight changes made at the smallest level?

What if we manage to wipe out one disease only to introduce something brand new and even more dangerous? Additionally, if scientists genetically engineer babies still in the womb, there is a very real and present danger that this could lead to complications, including miscarriage (early on), premature birth or even stillbirth, all of which are unthinkable.

The success rate of genetic experiments leaves a lot to be desired, after all. The human body is so complicated that scientists have to be able to predict what sort of effects their actions will have, and they simply cannot account for everything that could go wrong.

3. Limits Genetic Diversity

We need diversity in all species of animals. By genetically engineering our species, however, we would have a detrimental effect on our genetic diversity, in the same way as something like cloning would. Gene therapy is available only to the very rich and elite, which means that traits that tend to make people earn less money would eventually die out.

4. Can it Go Too Far?

One pressing question and issue with genetic engineering that has been around for years and years is whether it could end up going too far. There are many thousands of genetic scientists with honest intentions who want to bring an end to the worst diseases and illnesses of the current century and who are trying to do so by using genetic engineering.

However, what is to stop just a handful of people taking the research too far? What if we start demanding designer babies: children whose hair color, eye color, height and intelligence we ourselves dictate? What if we end up engineering the sex of the baby, for instance in China, where it is much more preferable to have a boy? Is that right? Is it fair? The problems of genetic engineering going too far are an ever-present worry in a world in which genetic engineering is progressing further and further every day.

Genetic engineering is one of those topics that cause a lot of controversy. Altering the DNA of organisms has certainly raised a few eyebrows. It may work wonders, but who knows if playing with nature is really safe? Making yourself aware of all aspects of genetic engineering can help you to form your own opinion.

A true environmentalist by heart. Founded Conserve Energy Future with the sole motto of providing helpful information related to our rapidly depleting environment. Unless you strongly believe in Elon Musk's idea of making Mars another habitable planet, do remember that there really is no 'Planet B' in this whole universe.

Russia Already In, China to Come: Syria WW3 on the Cards …

With Russia already in Syria (getting help from Iran and Iraq), there's now a report that China is sending in its troops too. A Syria WW3 scenario just got more likely. Will World War 3 break out there?

The likelihood just increased with the latest news that China is about to send military troops to team up with the Russians (who are getting help from Iran, Iraq and Syria itself with a joint information center) in defending the Syrian Army of President Bashar Al-Assad and targeting ISIS. The Iranians have been assisting Assad for a long time as part of their regional alliance, through channels such as Hezbollah; then on September 21, 2015, there were reports of Russia sending troops, jets and other military equipment to Syria. With the US, UK, Israel, France, Turkey and others already operating in Syria as a loose alliance on one side, and Russia, China, Iran, Iraq and the official Syrian Army involved on the other, the probability is getting higher all the time that the situation could erupt into a world conflict, and that we could potentially witness a Syria WW3 scenario.

China and Russia are joining forces militarily; they must be prepared for a Syria WW3 outbreak.

The following article from International Business Times stated:

After the Russians and Iranians, Chinese troops reportedly are teaming up with the Syrian regime forces in what is being termed as the deal that will allow President Bashar al-Assad to stay in power. A report claims that a Chinese naval vessel carrying dozens of military advisers is on its way to Syria and the Chinese troops will then join with the hundreds of Russian soldiers. The Chinese will be arriving in the coming weeks, a Syrian army official told Lebanon-based news website Al-Masdar Al-Arabi. The Chinese ship has crossed the Suez Canal in Egypt and is currently in the Mediterranean Sea. However, according to Israeli military news website DEBKAfile, the aircraft carrier, Liaoning-CV-16, is already docked at the Syrian port of Tartus, accompanied by a guided missile cruiser.

It is certainly disturbing to see so many of the world's greatest military powers gathering in such a small country and actively operating there. Even if they say they are all mainly there to fight ISIS and are not directly targeting each other (yet), there is always the chance that hostilities could be triggered by accidental crossfire. However, to understand what is going on, we need to remember the grand NWO (New World Order) plan as it relates to geopolitics. Ever since the fall of the Soviet Union in 1991, the US has emerged as the world's sole superpower. Since then, no matter who has been in power, its foreign policy has been driven by schools of thought like the Wolfowitz Doctrine and Brzezinski's The Grand Chessboard, which urge the need for the US to unashamedly dominate Eurasia (where most of the world's resources lie). Even France is getting in on the action, absurdly claiming it is bombing Syria in self-defense!

Many sources inform us that Syria is a target of the Zionist-US-UK NWO.

Remember how General Wesley Clark said that he was told around 10 days after the 9/11 false flag attack that the US plan was to attack 7 countries in 5 years (Iraq, Syria, Lebanon, Libya, Somalia, Sudan and Iran)? Remember how arch-manipulator and war criminal Henry Kissinger said that the events unfolding in Egypt were just the First Scene of the First Act of a very long play? Remember how 33rd-degree Freemasonic Grandmaster Albert Pike wrote a letter to Giuseppe Mazzini in 1871, prophesying three World Wars to come? Pike's WW3 prediction stated:

The Third World War must be fomented by taking advantage of the differences caused by the agentur of the Illuminati between the political Zionists and the leaders of Islamic World. The war must be conducted in such a way that Islam (the Moslem Arabic World) and political Zionism (the State of Israel) mutually destroy each other. Meanwhile the other nations, once more divided on this issue will be constrained to fight to the point of complete physical, moral, spiritual and economical exhaustion… We shall unleash the Nihilists and the atheists, and we shall provoke a formidable social cataclysm which in all its horror will show clearly to the nations the effect of absolute atheism, origin of savagery and of the most bloody turmoil. Then everywhere, the citizens, obliged to defend themselves against the world minority of revolutionaries, will exterminate those destroyers of civilization, and the multitude, disillusioned with Christianity, whose deistic spirits will from that moment be without compass or direction, anxious for an ideal, but without knowing where to render its adoration, will receive the true light through the universal manifestation of the pure doctrine of Lucifer, brought finally out in the public view.

The creation of ISIS by the US may be because Syria has been such a hard nut to crack. But will they risk a Syria WW3 scenario?

The Zionist-US-UK axis was able to knock off countries like Iraq, Libya and Sudan with relative ease, but Syria has been a stumbling block on their road to world domination, because it has powerful allies like Iran and Russia who have stepped in to defend it. We know this Zionist-US-UK axis has gone to great lengths to conquer Syria and to remove Assad from power, including creating and funding a hard-core group of religious militant terrorists, ISIS/IS/ISIL/Daesh. The ISIS psy-op has been thoroughly exposed, however, and it is now common knowledge that Israel and the US are behind ISIS (see David Icke's humorous picture above).

The question now remains: are these geopolitical manipulators crazy enough to trigger WW3 in Syria? Will they go to any length to conquer this small Middle Eastern nation? Will they risk a Syria WW3 outbreak? If what we know of their plans is correct, they have to first take out Syria in order to then take out Iran, so that the Eurasian encircling of Russia and China is complete. Time will tell if a Syria WW3 scenario erupts, but disturbingly, as the days go by, the military buildup and therefore the likelihood keep increasing.

Makia Freeman is the editor of alternative news / independent media site The Freedom Articles and senior researcher at ToolsForFreedom.com, writing on many aspects of truth and freedom, from exposing aspects of the global conspiracy to suggesting solutions for how humanity can create a new system of peace and abundance.

Sources:

http://www.rt.com/news/316592-russia-syria-islamic-state/

http://www.ibtimes.co.in/chinese-troops-join-russian-marines-syria-soon-says-report-648246

http://freedom-articles.toolsforfreedom.com/wolfowitz-doctrine-us-plan-global-supremacy/

http://www.rt.com/news/316163-france-bomb-syria-defense/

https://www.youtube.com/watch?v=9RC1Mepk_Sw

http://www.prisonplanet.com/kissinger-on-egypt-unrest-this-is-only-the-first-scene-of-the-first-act-of-a-drama-that-is-to-be-played-out.html

http://freedom-articles.toolsforfreedom.com/timeline-to-ww3-will-ukraine-develop/

http://freedom-articles.toolsforfreedom.com/israel-controls-isis-proven-by-hack/

http://www.activistpost.com/2014/09/10-signs-that-isis-is-scripted-psyop.html

John McAfee Triggers Countdown to Unmask Bitcoin Creator …

By CCN: Taking a breather from making $1 million Bitcoin forecasts, it appears that John McAfee has a new narrative to pursue. Namely, the polarizing tech entrepreneur is taking aim at the greatest mystery in all of cryptocurrency: who is Satoshi Nakamoto?

The shadowy BTC creator's anonymity has created a thriving market for people claiming to be the architect of that famous whitepaper. Now it appears that Craig Wright and the gang are going to be made redundant shortly, because McAfee is getting frustrated and wants the truth out in the open. A series of extraordinary tweets ensued from the original, so get some popcorn ready.

Does McAfee know who Satoshi is? The smart money is almost certainly on no, but as ever with John, his background, charisma, and following will still make this new narrative unmissable viewing. Evidently, some people are concerned about his outing of Satoshi but, at this stage, we don't have any proof or evidence to suggest that Mr. McAfee knows anything. I take that back; another tweet.

Conspiracy theories are so powerful because they are typically an exciting story electrified with uncertainty and speculation. The Bitcoin origin story is on another level, because we have a project of staggering success for which no one can officially take credit.

For me, the story centers around Nakamoto's wallet. While the hundreds of thousands of BTC coins remain there untouched and unmoving, this storyline is on ice. If and when we see some action in the wallet, then this would be the most definite way for the founder to communicate with the community. The fact this hasn't happened yet is fascinating. Is it fear or restraint? Maybe they no longer have access, or are even dead. Alternatively, as a giant holder of Bitcoin, could it be an attempt to restrict supply in some way?

John McAfee believes the time is nigh for Satoshi Nakamoto to reveal him/her/them/itself. | Source: Flickr

What we do know is that John McAfee can leverage this uncertainty for tremendous attention ahead of his likely presidential run. Let's enjoy a flight of fancy and assume he does know what's going on. These tweets are going to have to go somewhere. McAfee has set the countdown here and has narrowed it down to Satoshi Nakamoto's country of residence.

Sadly, there is an evident back-door exit for J Mac in all of this. He can merely state that he's changed his mind and that it would be too dangerous or damaging to reveal Bitcoin's founder. That would be boring, but it is what we all must expect at the end of this. What we have learned is that it's not worth contacting Mr. McAfee and pretending to be Satoshi, because he can sniff you out from a mile away.

True or not, it's going to be very entertaining to watch this play out. I hope it turns out to be an alien AI. A regular old Homo sapiens would be such a let-down.

John McAfee – IMDb

Storyline

Antivirus pioneer John McAfee found himself at the centre of Central America's hottest manhunt in recent history. McAfee was named a "person of interest," but not a suspect, by police officials in Belize following the murder of his neighbor. McAfee went on the run for three weeks before crossing the border to Guatemala. Guatemalan authorities detained him soon after entry, and he fought against deportation to Belize. Has the security software guru become a gun-toting, paranoid killer who says strange things and bends reality? Is he a genius who has just saved America, while exposing one of the biggest scandals of all time? Or has he just acted out the greatest mind game of all time? Running in the Background (working title) is the only official account, based on John McAfee's personal diary, of the life and times of the man himself, starting from his childhood to running for his life in the most televised manhunt in history, while revealing all unanswered questions and secrets. Written by Impact Future Media

John McAfee 'knows true identity of Bitcoin creator Satoshi …

Tech mogul John McAfee is famed for firing guns, guzzling drugs, fleeing a murder charge and trying to have sex with a whale.

But could he soon go down in history as the man who finally exposed the true identity of Satoshi Nakamoto, inventor of Bitcoin?

The wild man of the tech industry has threatened to unmask the cryptocurrency's shadowy creator, vowing to tell the world exactly who is behind Bitcoin unless they voluntarily out themselves.

In a series of tweets, British-born renegade McAfee claimed one person who is currently living in the US wrote the white paper which set out the cryptocurrency's rules, but a team then built and developed it.

McAfee said that everybody on this team was from India.

The identity of Satoshi has remained a mystery for more than a decade. The person, or people, who use the pseudonym are thought to be sitting on a cryptocurrency stash worth billions of pounds.

McAfee wrote: The "Who is Satoshi?" mystery must end!

First: It is NOT the CIA nor any agency of any world government. It IS a collection of people, but the white paper was written by one man, who currently resides in the US.

If he does not come forward these narrowings will continue.

He added: I protected the identity of Satoshi. It's time, though, that this be put to bed. Imposters claim to be him, we are spending time and energy in search of him… It's a waste. Every day I will narrow down the identity of Satoshi until he reveals himself, or I reveal him.

For a sense of McAfee's colourful life, you just have to look at his Twitter feed. We've put a few of our favourites below, which show the tech legend discussing an abortive attempt to have sex with a whale as well as his love of sex, guns, drink and drugs.

McAfee was a programmer at NASA in the late 1960s before launching his anti-virus company in the 1980s and going on to sell it for roughly $100 million, reportedly spending his fortune on nine luxurious homes, multiple planes, vintage cars, expensive art and even a dinosaur skull.

But in 2009 he allegedly lost money in the financial crisis before moving to Belize, where he lived surrounded by young women and protected by armed bodyguards.

His home was raided in 2012 by the Belize Police Gang Suppression Unit because he was suspected of cooking meth, but no drugs were found and he was never charged.

McAfee believes the raid was part of a conspiracy to destroy him because he refused to be extorted by the Belize government.

He went on to become an international fugitive in 2012, fleeing from Belize police over the murder of his neighbour.

Four years later he ran for President of the United States as a representative for the Libertarian party and achieved its best ever result.

Will McAfee Disclose Nakamoto's Identity? Crypto Will Suffer …

In a series of intriguing tweets, John McAfee dropped a bombshell yesterday by claiming knowledge of Satoshi Nakamoto's identity. McAfee intends to end speculation on Nakamoto's identity once and for all, which he feels is necessary for the crypto space to move forward. He goes on to say he will divulge information until Nakamoto comes forward. Failing that, McAfee himself will disclose Nakamoto's identity.

John McAfee's outing of Satoshi Nakamoto has reignited speculation on crypto's greatest mystery. At present, the only widely known report pertains to him being an unknown person, or group, who developed Bitcoin. According to a profile on the P2P Foundation, an organization studying peer-to-peer technology, Nakamoto claims to be a Japanese resident born on 5 April 1975.

However, given his perfect English, with the use of colloquialisms, some speculate he is not Japanese, or at the very least that a member of his team originates from the British Commonwealth.

Following the timeline of his last known actions, he continued to contribute to the coding of Bitcoin until mid-2010. It was then that Gavin Andresen received control of the source code repository and network alert key. After that, Nakamoto's involvement with Bitcoin ended.

The mysterious circumstances of his sudden disappearance have only added to Bitcoin's allegorical notoriety. But this hasn't stopped speculation on several names, including Nick Szabo, Craig Wright, Dorian Nakamoto, and Hal Finney, with some even claiming Bitcoin is a US intelligence project.

According to McAfee, the secrecy around who Nakamoto is perpetuates a pointless exercise, and the entire crypto industry would benefit from knowing his identity. In one of his tweets, he said:

I protected the identity of Satoshi. It's time, though, that this be put to bed. Imposters claim to be him, we are spending time and energy in search of him… It's a waste. Every day I will narrow down the identity of Satoshi until he reveals himself, or I reveal him.

However, given McAfee's past form for sensationalism, it's fair to question whether he knows Nakamoto's true identity. But at the same time, McAfee's cypherpunk credentials do stack up, giving plausibility to his claims, especially considering his active involvement in the industry during Bitcoin's formative years.

Whether or not McAfee outs Nakamoto, this much is clear: Nakamoto wishes to remain anonymous. According to Alex Lielacher, his motives for anonymity are based on safeguarding the Bitcoin project and allowing it to operate on its own merit. He wrote:

it is arguable that he remained anonymous in order to avoid the possibility of him becoming the de facto leader of the system and, thereby, having people place their trust in him as the creator as opposed to the ledger. Moreover, any announcement by Satoshi would likely be regarded as investment advice by those who held the digital currency and may have resulted in price movements.

World War-D: War on drugs failure – Roadmap to legalization

Once in a long while comes a book that changes the way we look at a particular issue.

World War-D aspires to be such a book; it changes the way we think about the war on drugs, pulling it out of the ideological and moralist morass where it has been enmeshed from the onset, turning things on their heads, or, I should say, back on their feet.

World War-D re-centers and refocuses the issue around a simple but fundamental question: Can organized societies do a better job than organized crime at managing and controlling psychoactive substances? I obviously think they can, and I explain why and how.

World War-D is the first book to bring one of the most contentious issues of our time to the mainstream in a comprehensive but accessible way, without being simplistic. It examines all the facets of the issue from a global perspective, repositioning it into the wider and more relevant context of psychoactive substances.

World War-D offers a reasoned critique of the prohibitionist model and its underlying ideology, with its historical and cultural background. It clearly demonstrates that prohibition is the worst possible form of control, as the so-called controlled substances are effectively out of control; or rather, they are controlled by the underworld, at a staggering and ever-growing human, social, economic and geopolitical cost to the world.

World War-D is the first book to tackle the issue of legalization head-on, offering a pragmatic, practical, and realistic roadmap to global controlled re-legalization of the production, distribution and use of psychoactive substances under a multi-tier "legalize, tax, control, prevent, treat and educate" regime, with practical and efficient mechanisms to manage and minimize societal costs. Far from giving up, and far from an endorsement, controlled legalization would be finally growing up; being realistic instead of being in denial; being in control instead of leaving control to the underworld. It would abolish the current regime of socialization of costs and privatization of profits to criminal enterprises, depriving them of their main source of income and making our world a safer place.

102 years after the launch of global drug prohibition, 40 years after the official declaration of the war on drugs, and one year before the Mexican and US presidential elections in which the legalization debate will be one of the major issues, World War-D is timely and long overdue, as its topic is rapidly moving from fringe lunacy to the mainstream. A growing wave of support for drug policy reform is rising throughout the world; the war on drugs' failure is being denounced across the board, from church groups to retired law enforcement, to the NAACP, to Kofi Annan, George Shultz, Paul Volcker and a string of former Latin American and European heads of state.

The book is intended for an international audience and aims to be a major contribution to the war on drugs debate.

Gustavo de Grieff was Attorney General of Colombia and oversaw the capture of Pablo Escobar and the surrender of the Cali Cartel; Gustavo de Grieff is one of the very few high-level officials who called for legalization while he was in office:

I find that you have written one of the best books on the drug problem that I have read (and I have read more than thirty books on that subject). For example, your history of prohibition in part 1 is without any doubt the best I have ever read. Your chapters on possible legalization and regulation and on your counter-arguments against it are excellent and I subscribe to them entirely.

LEAP founder and Chairman, Jack Cole:

It is a very good read and already I can say a very important work. You did a fantastic job. It is up there with the very best drug policy books.

Arthur Torsone, author of Herb Trader:

I believe your book will be extremely helpful to those who have the power to reverse the existing draconian drug laws. Hopefully your book will be a road map to a sane conclusion.

When the rulers of our land eventually exchange prisons for medical clinics, the bible handbook that will be used to EDUCATE the citizen in need of help should be your book. It shows how and why we humans react as we do to outside substances.

I'm still blown away by the incredible amount of detailed information you have; what an extraordinary work of literature you have here, congratulations.

Santiago Roel, Crime Prevention consultant pioneering government reform in Mexico since 1991. Author, lecturer http://www.prominix.com:

It is a thorough and well-documented compilation, a global overview of all the issues revolving around the war on drugs, prohibitionism and psychoactive substances. It offers a methodical, well-argued and compelling case against prohibitionism and a realistic and pragmatic roadmap to global legalization. Anyone genuinely interested in understanding this failed war and its negative impact on the World should begin by reading this book.

John P., typesetter, while working on book layout:

I am fairly amazed by the content, as I read pieces; this is impressive. There is nothing out there like that.

While working on my project back in November 2010, I established contact with former UNODC chief Antonio Maria Costa. Underneath are some of Mr Costa's replies to my correspondence:

I just do not get all this insistence on war on drugs. I never used this term. The United Nations never used this term. I fear it is being used to mask other objectives. Drugs were banned by member states because they are dangerous, they are not dangerous because they are banned.

If you believe that some sort of (whatever form of) legalization of drugs would be the correct answer well, I am afraid this would be dangerously naive. In other words, if this is the answer you would like to receive, I must conclude that the set of issues you raised are a bit more complicated than you seem to realize.

When I asked him for his reaction to the Global Drug Policy Commission, counting among its members Kofi Annan, who was UN Secretary General while Mr Costa was UNODC Director:

The only common denominator among them is "former". What is wrong with people who, when in office, say one thing, and when out of office, say its opposite?

I try in World War-D to understand where such attitudes come from, how we got where we are, how we are still there after so many years of hopeless failure, how we can accelerate the move beyond such attitudes.

Human spaceflight – Wikipedia

Human spaceflight (also referred to as crewed spaceflight or manned spaceflight) is space travel with a crew or passengers aboard the spacecraft. Spacecraft carrying people may be operated directly by a human crew, or they may be either remotely operated from ground stations on Earth or autonomous, able to carry out a specific mission with no human involvement.

The first human spaceflight was launched by the Soviet Union on 12 April 1961 as part of the Vostok program, with cosmonaut Yuri Gagarin aboard. Humans have been continuously present in space for 18 years and 168 days on the International Space Station. All early human spaceflight was crewed, where at least some of the passengers acted to carry out tasks of piloting or operating the spacecraft. After 2015, several human-capable spacecraft have been explicitly designed with the ability to operate autonomously.

Russia and China have human spaceflight capability with the Soyuz program and Shenzhou program. In the United States, SpaceShipTwo reached the edge of space in 2018; this was the first crewed spaceflight from the USA since the Space Shuttle retired in 2011. Currently, all expeditions to the International Space Station use Soyuz vehicles, which remain attached to the station to allow quick return if needed. The United States is developing commercial crew transportation to facilitate domestic access to ISS and low Earth orbit, as well as the Orion vehicle for beyond-low Earth orbit applications.

While spaceflight has typically been a government-directed activity, commercial spaceflight has gradually been taking on a greater role. The first private human spaceflight took place on 21 June 2004, when SpaceShipOne conducted a suborbital flight, and a number of non-governmental companies have been working to develop a space tourism industry. NASA has also played a role to stimulate private spaceflight through programs such as Commercial Orbital Transportation Services (COTS) and Commercial Crew Development (CCDev). With its 2011 budget proposals released in 2010,[1] the Obama administration moved towards a model where commercial companies would supply NASA with transportation services of both people and cargo transport to low Earth orbit. The vehicles used for these services could then serve both NASA and potential commercial customers. Commercial resupply of ISS began two years after the retirement of the Shuttle, and commercial crew launches could begin by 2019.[2]

Human spaceflight capability was first developed during the Cold War between the United States and the Soviet Union (USSR), which developed the first intercontinental ballistic missile rockets to deliver nuclear weapons. These rockets were large enough to be adapted to carry the first artificial satellites into low Earth orbit. After the first satellites were launched in 1957 and 1958, the US worked on Project Mercury to launch men singly into orbit, while the USSR secretly pursued the Vostok program to accomplish the same thing. The USSR launched the first human in space, Yuri Gagarin, into a single orbit in Vostok 1 on a Vostok 3KA rocket, on 12 April 1961. The US launched its first astronaut, Alan Shepard, on a suborbital flight aboard Freedom 7 on a Mercury-Redstone rocket, on 5 May 1961. Unlike Gagarin, Shepard manually controlled his spacecraft's attitude, and landed inside it. The first American in orbit was John Glenn aboard Friendship 7, launched 20 February 1962 on a Mercury-Atlas rocket. The USSR launched five more cosmonauts in Vostok capsules, including the first woman in space, Valentina Tereshkova aboard Vostok 6 on 16 June 1963. The US launched a total of two astronauts in suborbital flight and four into orbit through 1963.

US President John F. Kennedy raised the stakes of the Space Race by setting the goal of landing a man on the Moon and returning him safely by the end of the 1960s.[3] The US started the three-man Apollo program in 1961 to accomplish this, launched by the Saturn family of launch vehicles, and the interim two-man Project Gemini in 1962, which flew 10 missions launched by Titan II rockets in 1965 and 1966. Gemini's objective was to support Apollo by developing American orbital spaceflight experience and techniques to be used in the Moon mission.[4]

Meanwhile, the USSR remained silent about their intentions to send humans to the Moon, and proceeded to stretch the limits of their single-pilot Vostok capsule into a two- or three-person Voskhod capsule to compete with Gemini. They were able to launch two orbital flights in 1964 and 1965 and achieved the first spacewalk, made by Alexei Leonov on Voskhod 2 on 8 March 1965. But Voskhod did not have Gemini's capability to maneuver in orbit, and the program was terminated. The US Gemini flights did not accomplish the first spacewalk, but overcame the early Soviet lead by performing several spacewalks and solving the problem of astronaut fatigue caused by overcoming the lack of gravity, demonstrating up to two weeks endurance in a human spaceflight, and the first space rendezvous and dockings of spacecraft.

The US succeeded in developing the Saturn V rocket necessary to send the Apollo spacecraft to the Moon, and sent Frank Borman, James Lovell, and William Anders into 10 orbits around the Moon in Apollo 8 in December 1968. In July 1969, Apollo 11 accomplished Kennedy's goal by landing Neil Armstrong and Buzz Aldrin on the Moon 21 July and returning them safely on 24 July along with Command Module pilot Michael Collins. A total of six Apollo missions landed 12 men to walk on the Moon through 1972, half of which drove electric powered vehicles on the surface. The crew of Apollo 13, Lovell, Jack Swigert, and Fred Haise, survived a catastrophic in-flight spacecraft failure and returned to Earth safely without landing on the Moon.

Meanwhile, the USSR secretly pursued human lunar orbiting and landing programs. They successfully developed the three-person Soyuz spacecraft for use in the lunar programs, but failed to develop the N1 rocket necessary for a human landing, and discontinued the lunar programs in 1974.[5] On losing the Moon race, they concentrated on the development of space stations, using the Soyuz as a ferry to take cosmonauts to and from the stations. They started with a series of Salyut sortie stations from 1971 to 1986.

After the Apollo program, the US launched the Skylab sortie space station in 1973, manning it for 171 days with three crews aboard Apollo spacecraft. President Richard Nixon and Soviet Premier Leonid Brezhnev negotiated an easing of Cold War tensions known as détente. As part of this, they negotiated the Apollo-Soyuz Test Project, in which an Apollo spacecraft carrying a special docking adapter module rendezvoused and docked with Soyuz 19 in 1975. The American and Russian crews shook hands in space, but the purpose of the flight was purely diplomatic and symbolic.

Nixon appointed his Vice President Spiro Agnew to head a Space Task Group in 1969 to recommend follow-on human spaceflight programs after Apollo. The group proposed an ambitious Space Transportation System based on a reusable Space Shuttle which consisted of a winged, internally fueled orbiter stage burning liquid hydrogen, launched by a similar, but larger kerosene-fueled booster stage, each equipped with airbreathing jet engines for powered return to a runway at the Kennedy Space Center launch site. Other components of the system included a permanent modular space station, reusable space tug and nuclear interplanetary ferry, leading to a human expedition to Mars as early as 1986, or as late as 2000, depending on the level of funding allocated. However, Nixon knew the American political climate would not support Congressional funding for such an ambition, and killed proposals for all but the Shuttle, possibly to be followed by the space station. Plans for the Shuttle were scaled back to reduce development risk, cost, and time, replacing the piloted flyback booster with two reusable solid rocket boosters, and the smaller orbiter would use an expendable external propellant tank to feed its hydrogen-fueled main engines. The orbiter would have to make unpowered landings.

The two nations continued to compete rather than cooperate in space, as the US turned to developing the Space Shuttle and planning the space station, dubbed Freedom. The USSR launched three Almaz military sortie stations from 1973 to 1977, disguised as Salyuts. They followed Salyut with the development of Mir, the first modular, semi-permanent space station, the construction of which took place from 1986 to 1996. Mir orbited at an altitude of 354 kilometers (191 nautical miles), at a 51.6° inclination. It was occupied for 4,592 days, and made a controlled reentry in 2001.

The Space Shuttle started flying in 1981, but the US Congress failed to approve sufficient funds to make Freedom a reality. A fleet of four shuttles was built: Columbia, Challenger, Discovery, and Atlantis. A fifth shuttle, Endeavour, was built to replace Challenger, which was destroyed in an accident during launch that killed 7 astronauts on 28 January 1986. Twenty-two Shuttle flights carried a European Space Agency sortie space station called Spacelab in the payload bay from 1983 to 1998.[6]

The USSR copied the reusable Space Shuttle orbiter, which it called Buran. It was designed to be launched into orbit by the expendable Energia rocket, and capable of robotic orbital flight and landing. Unlike the US Shuttle, Buran had no main rocket engines, but like the Shuttle used its orbital maneuvering engines to perform its final orbital insertion. A single unmanned orbital test flight was successfully made in November 1988. A second test flight was planned by 1993, but the program was cancelled due to lack of funding and the dissolution of the Soviet Union in 1991. Two more orbiters were never completed, and the first one was destroyed in a hangar roof collapse in May 2002.

The dissolution of the Soviet Union in 1991 brought an end to the Cold War and opened the door to true cooperation between the US and Russia. The Soviet Soyuz and Mir programs were taken over by the Russian Federal Space Agency, now known as the Roscosmos State Corporation. The Shuttle-Mir Program included American Space Shuttles visiting the Mir space station, Russian cosmonauts flying on the Shuttle, and an American astronaut flying aboard a Soyuz spacecraft for long-duration expeditions aboard Mir.

In 1993, President Bill Clinton secured Russia's cooperation in converting the planned Space Station Freedom into the International Space Station (ISS). Construction of the station began in 1998. The station orbits at an altitude of 409 kilometers (221 nmi) and an inclination of 51.65°.
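
As a rough illustration of what that altitude implies, a simple two-body calculation gives the station's orbital period. The sketch below is not from the article; the gravitational parameter, Earth radius, and circular-orbit assumption are added here for illustration only.

# Minimal sketch (assumptions noted above): estimate the period of a circular
# orbit at ISS altitude (409 km) using Kepler's third law.
import math

MU_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2 (assumed constant)
R_EARTH = 6_371_000.0       # mean Earth radius, m (assumed constant)

def circular_orbit_period(altitude_m: float) -> float:
    """Return the period in seconds of a circular orbit at the given altitude."""
    a = R_EARTH + altitude_m                      # orbital radius = Earth radius + altitude
    return 2 * math.pi * math.sqrt(a ** 3 / MU_EARTH)

print(f"Approximate ISS orbital period: {circular_orbit_period(409_000) / 60:.1f} minutes")
# Prints roughly 92-93 minutes, i.e. about 15.5 orbits per day.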

The Space Shuttle was retired in 2011 after 135 orbital flights, several of which helped assemble, supply, and crew the ISS. Columbia was destroyed in another accident during reentry, which killed 7 astronauts on 1 February 2003.

After Russia's launch of Sputnik 1 in 1957, Chairman Mao Zedong intended to place a Chinese satellite in orbit by 1959 to celebrate the 10th anniversary of the founding of the People's Republic of China (PRC).[7] However, China did not successfully launch its first satellite until 24 April 1970. Mao and Premier Zhou Enlai decided on 14 July 1967 that the PRC should not be left behind, and started China's own human spaceflight program.[8] The first attempt, the Shuguang spacecraft copied from the US Gemini, was cancelled on 13 May 1972.

China later designed the Shenzhou spacecraft resembling the Russian Soyuz, and became the third nation to achieve independent human spaceflight capability by launching Yang Liwei on a 21-hour flight aboard Shenzhou 5 on 15 October 2003. China launched the Tiangong-1 space station on 29 September 2011, and two sortie missions to it: Shenzhou 9, 16–29 June 2012, with China's first female astronaut Liu Yang; and Shenzhou 10, 13–26 June 2013. The station was retired on 21 March 2016 and remains in a 363-kilometer (196-nautical-mile), 42.77° inclination orbit.

The European Space Agency began development in 1987 of the Hermes spaceplane, to be launched on the Ariane 5 expendable launch vehicle. The project was cancelled in 1992, when it became clear that neither cost nor performance goals could be achieved. No Hermes shuttles were ever built.

Japan began development in the 1980s of the HOPE-X experimental spaceplane, to be launched on its H-IIA expendable launch vehicle. A string of failures in 1998 led to funding reduction, and the project's cancellation in 2003.

Under the Bush administration, the Constellation Program included plans for retiring the Shuttle program and replacing it with the capability for spaceflight beyond low Earth orbit. In the 2011 United States federal budget, the Obama administration cancelled Constellation for being over budget and behind schedule while not innovating and investing in critical new technologies.[9] For human spaceflight beyond low Earth orbit, NASA is developing the Orion spacecraft to be launched by the Space Launch System. Under the Commercial Crew Development plan, NASA will rely on transportation services provided by the private sector to reach low Earth orbit, such as SpaceX's Falcon 9/Dragon V2, Sierra Nevada Corporation's Dream Chaser, or Boeing's CST-100. The period between the retirement of the Shuttle in 2011 and the first launch to space of SpaceShipTwo on December 13, 2018, similar to the gap between the end of Apollo in 1975 and the first Space Shuttle flight in 1981, is referred to by a presidential Blue Ribbon Committee as the U.S. human spaceflight gap.[10]

Since the early 2000s, a variety of private spaceflight ventures have been undertaken. Several of the companies, including Blue Origin, SpaceX, Virgin Galactic, and Sierra Nevada, have explicit plans to advance human spaceflight. As of 2016, all four of those companies have development programs underway to fly commercial passengers.

A commercial suborbital spacecraft aimed at the space tourism market, SpaceShipTwo, is being developed by Virgin Galactic and reached space in December 2018.[11][12] Blue Origin has begun a multi-year test program of its New Shepard vehicle and carried out six successful uncrewed test flights in 2015–2016. Blue Origin plans to fly "test passengers" in Q2 2017, and to initiate commercial flights in 2018.[13][14]

SpaceX and Boeing are both developing passenger-capable orbital space capsules as of 2015, planning to fly NASA astronauts to the International Space Station by 2019. SpaceX will be carrying passengers on Dragon 2, launched on a Falcon 9 launch vehicle. Boeing will be doing it with its CST-100, launched on a United Launch Alliance Atlas V launch vehicle.[15] Development funding for these orbital-capable technologies has been provided by a mix of government and private funds, with SpaceX providing a greater portion of total development funding for this human-carrying capability from private investment.[16][17] There have been no public announcements of commercial offerings for orbital flights from either company, although both companies are planning some flights with their own private, not NASA, astronauts on board.

Yuri Gagarin became the first Russian as well as the first human to reach space on Vostok 1 on April 12, 1961.

Sally Ride became the first American woman in space in 1983. Eileen Collins was the first female Shuttle pilot, and with Shuttle mission STS-93 in 1999 she became the first woman to command a U.S. spacecraft.

For many years, only the USSR (later Russia) and the United States had their own astronauts. Citizens of other nations flew in space, beginning with the flight of Vladimir Remek, a Czech, on a Soviet spacecraft on 2 March 1978, in the Interkosmos programme. As of 2010, citizens from 38 nations (including space tourists) have flown in space aboard Soviet, American, Russian, and Chinese spacecraft.

Human spaceflight programs have been conducted by the former Soviet Union and current Russian Federation, the United States, the People's Republic of China and by private spaceflight company Scaled Composites.

Nations can be grouped by status: those that currently have human spaceflight programs; those with confirmed and dated plans for such programs; those planning human spaceflight in its simplest form (suborbital spaceflight, etc.); those planning its most ambitious form (space stations, etc.); and those that once had official plans but have since abandoned them.

Space vehicles are spacecraft used for transportation between the Earth's surface and outer space, or between locations in outer space. The following space vehicles and spaceports are currently used for launching human spaceflights:

The following space stations are currently maintained in Earth orbit for human occupation:

Numerous private companies attempted human spaceflight programs in an effort to win the $10 million Ansari X Prize. The first private human spaceflight took place on 21 June 2004, when SpaceShipOne conducted a suborbital flight. SpaceShipOne captured the prize on 4 October 2004, when it accomplished two consecutive flights within one week. SpaceShipTwo, launching from the carrier aircraft White Knight Two, is planned to conduct regular suborbital space tourism.[18]

Most of the time, the only humans in space are those aboard the ISS, whose crew of six spends up to six months at a time in low Earth orbit.

NASA and ESA use the term "human spaceflight" to refer to their programs of launching people into space. These endeavors have also been referred to as "manned space missions," though because of gender specificity this is no longer official parlance according to NASA style guides.[19]

On 15 August 2018, Prime Minister of India Narendra Modi, speaking from the ramparts of the Red Fort, formally announced the Indian Human Spaceflight Programme. Through this programme, India is planning to send humans into space on its orbital vehicle Gaganyaan by the end of 2021. The Indian Space Research Organisation (ISRO) began work on this project in 2006.[20] The objective is to carry a crew of three to low Earth orbit (LEO) and return them safely for a water landing at a predefined landing zone. The program is proposed to be implemented in defined phases. Currently, the activities are progressing with a focus on the development of critical technologies for subsystems such as the Crew Module (CM), Environmental Control and Life Support System (ECLSS), and Crew Escape System. The department has initiated activities to study technical and managerial issues related to crewed missions. The program envisages the development of a fully autonomous orbital vehicle carrying two or three crew members to a low Earth orbit of about 300 km and their safe return.

NASA is developing a plan to land humans on Mars by the 2030s. The first step in this mission begins sometime during 2020, when NASA plans to send an uncrewed craft into deep space to retrieve an asteroid.[21] The asteroid will be pushed into the moon's orbit and studied by astronauts aboard Orion, NASA's first human spacecraft in a generation.[22] Orion's crew will return to Earth with samples of the asteroid and their collected data. In addition to broadening America's space capabilities, this mission will test newly developed technology, such as solar electric propulsion, which uses solar arrays for energy and requires ten times less propellant than the conventional chemical counterpart used for powering space shuttles to orbit.[23]
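
The propellant saving claimed for solar electric propulsion can be made concrete with the Tsiolkovsky rocket equation, which relates the required propellant fraction to exhaust velocity. The sketch below is illustrative only; the delta-v budget and the two exhaust velocities are assumed round numbers, not figures from the article.

# Minimal sketch (assumed numbers): propellant fraction 1 - exp(-dv/ve) for a
# chemical engine versus a solar electric (ion) thruster at the same delta-v.
import math

def propellant_fraction(delta_v: float, exhaust_velocity: float) -> float:
    """Fraction of initial vehicle mass that must be propellant (Tsiolkovsky)."""
    return 1.0 - math.exp(-delta_v / exhaust_velocity)

DELTA_V = 4_000.0        # m/s, an assumed deep-space maneuver budget
CHEMICAL_VE = 4_400.0    # m/s, roughly a chemical bipropellant engine
ELECTRIC_VE = 30_000.0   # m/s, roughly an ion / solar electric thruster

print(f"Chemical: {propellant_fraction(DELTA_V, CHEMICAL_VE):.0%} of initial mass is propellant")
print(f"Electric: {propellant_fraction(DELTA_V, ELECTRIC_VE):.0%} of initial mass is propellant")
# With these assumed numbers the electric thruster needs several times less
# propellant; the exact ratio depends on the delta-v budget and thruster chosen.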

Several other countries and space agencies have announced and begun human spaceflight programs using their own technology, including Japan (JAXA), Iran (ISA) and Malaysia (MNSA).

A number of spacecraft have been proposed over the decades that might facilitate spaceliner passenger travel. Somewhat analogous to travel by airliner after the middle of the 20th century, these vehicles are proposed to transport a large number of passengers to destinations in space, or to destinations on Earth which travel through space. To date, none of these concepts have been built, although a few vehicles that carry fewer than 10 persons are currently in the flight testing phase of their development process.

One large spaceliner concept currently in early development is the SpaceX BFR which, in addition to replacing the Falcon 9 and Falcon Heavy launch vehicles in the legacy Earth-orbit market after 2020, has been proposed by SpaceX for long-distance commercial travel on Earth. This is to transport people on point-to-point suborbital flights between two points on Earth in under one hour, also known as "Earth-to-Earth," and carrying 100+ passengers.[24][25][26]

Small spaceplane or small capsule suborbital spacecraft have been under development for the past decade or so and, as of 2017, at least one of each type is under development. Both Virgin Galactic and Blue Origin are in active development, with the SpaceShipTwo spaceplane and the New Shepard capsule, respectively. Both would carry approximately a half-dozen passengers up to space for a brief period of zero gravity before returning to the same location from which the trip began. XCOR Aerospace had been developing the Lynx single-passenger spaceplane since the 2000s,[27][28][29] but development was halted in 2017.[30]

There are two main sources of hazard in space flight: those due to the environment of space which make it hostile to the human body, and the potential for mechanical malfunctions of the equipment required to accomplish space flight.

Planners of human spaceflight missions face a number of safety concerns.

The immediate needs for breathable air and drinkable water are addressed by the life support system of the spacecraft.

Medical consequences such as possible blindness and bone loss have been associated with human space flight.[38][39]

On 31 December 2012, a NASA-supported study reported that spaceflight may harm the brain of astronauts and accelerate the onset of Alzheimer's disease.[40][41][42]

In October 2015, the NASA Office of Inspector General issued a health hazards report related to space exploration, including a human mission to Mars.[43][44]

On 2 November 2017, scientists reported that significant changes in the position and structure of the brain have been found in astronauts who have taken trips in space, based on MRI studies. Longer space trips were associated with greater brain changes.[45][46]

Researchers in 2018 reported, after detecting the presence on the International Space Station (ISS) of five Enterobacter bugandensis bacterial strains, none pathogenic to humans, that microorganisms on ISS should be carefully monitored to continue assuring a medically healthy environment for astronauts.[47][48]

In March 2019, NASA reported that latent viruses in humans may be activated during space missions, adding possibly more risk to astronauts in future deep-space missions.[49]

Medical data from astronauts in low Earth orbits for long periods, dating back to the 1970s, show several adverse effects of a microgravity environment: loss of bone density, decreased muscle strength and endurance, postural instability, and reductions in aerobic capacity. Over time these deconditioning effects can impair astronauts' performance or increase their risk of injury.[50]

In a weightless environment, astronauts put almost no weight on the back muscles or leg muscles used for standing up, which causes them to weaken and get smaller. Astronauts can lose up to twenty per cent of their muscle mass on spaceflights lasting five to eleven days. The consequent loss of strength could be a serious problem in case of a landing emergency.[51] Upon return to Earth from long-duration flights, astronauts are considerably weakened, and are not allowed to drive a car for twenty-one days.[52]

Astronauts experiencing weightlessness will often lose their orientation, get motion sickness, and lose their sense of direction as their bodies try to get used to a weightless environment. When they get back to Earth, or any other mass with gravity, they have to readjust to the gravity and may have problems standing up, focusing their gaze, walking and turning. Importantly, these motor disturbances only worsen the longer the exposure to reduced gravity.[53] These changes will affect operational activities including approach and landing, docking, remote manipulation, and emergencies that may happen while landing. This can be a major roadblock to mission success.[citation needed]

In addition, after long space flight missions, male astronauts may experience severe eyesight problems.[54][55][56][57][58] Such eyesight problems may be a major concern for future deep space flight missions, including a crewed mission to the planet Mars.[54][55][56][57][59]

Without proper shielding, the crews of missions beyond low Earth orbit (LEO) might be at risk from high-energy protons emitted by solar flares and associated solar particle events (SPEs). Lawrence Townsend of the University of Tennessee and others have studied the most powerful solar storm ever recorded, the flare seen by the British astronomer Richard Carrington in September 1859. Radiation doses astronauts would receive from a Carrington-type storm could cause acute radiation sickness and possibly even death.[61] Another storm that could have delivered a lethal radiation dose to astronauts outside the Earth's protective magnetosphere occurred during the Space Age, shortly after Apollo 16 landed and before Apollo 17 launched.[62] This solar storm of August 1972 would likely at least have caused acute illness.[63]

Another type of radiation, galactic cosmic rays, presents further challenges to human spaceflight beyond low Earth orbit.[64]

There is also some scientific concern that extended spaceflight might slow down the body's ability to protect itself against diseases.[65] Some of the problems are a weakened immune system and the activation of dormant viruses in the body. Radiation can cause both short- and long-term consequences to the bone marrow stem cells which create the blood and immune systems. Because the interior of a spacecraft is so small, a weakened immune system and more active viruses in the body can lead to a fast spread of infection.[citation needed]

During long missions, astronauts are isolated and confined into small spaces. Depression, cabin fever and other psychological problems may impact the crew's safety and mission success.[66]

Astronauts may not be able to quickly return to Earth or receive medical supplies, equipment or personnel if a medical emergency occurs. The astronauts may have to rely for long periods on their limited existing resources and medical advice from the ground.

During astronauts' stay in space, they may experience mental disorders (such as post-traumatic stress, depression, and anxiety) more than the average person. NASA spends millions of dollars on psychological treatments for astronauts and former astronauts.[67] To date, there is no way to prevent or reduce mental problems caused by extended periods of stay in space.

Due to these mental disorders, the efficiency of the astronauts' work is impaired, and sometimes they must be sent back to Earth, which is very expensive.[68] A Russian expedition to space in 1976 was returned to Earth after the cosmonauts reported a strong odor that caused a fear of a fluid leak, but a thorough investigation made clear that there had been no leak or technical malfunction. It was concluded by NASA that the cosmonauts had most likely hallucinated the smell, resulting in significant unnecessary expense.

It is possible that the mental health of astronauts can be affected by the changes in the sensory systems while in prolonged space travel.

During spaceflight, astronauts are in an extreme state in which there is no gravity. This state, together with the lack of change in the environment, results in a weakening of sensory input to the astronauts in all seven senses.

Space flight requires much higher velocities than ground or air transportation, which in turn requires the use of high energy density propellants for launch, and the dissipation of large amounts of energy, usually as heat, for safe reentry through the Earth's atmosphere.
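A quick back-of-envelope comparison (assumed round numbers, not taken from the source) makes the energy problem concrete: the kinetic energy per kilogram of a vehicle at orbital speed exceeds the chemical energy per kilogram of a typical propellant combination, and nearly all of it must be shed, mostly as heat, during reentry.

# Back-of-envelope sketch with assumed round numbers: kinetic energy per
# kilogram at low-Earth-orbit speed versus the rough chemical energy density
# of a kerosene/LOX propellant combination.
orbital_velocity = 7_800.0                             # m/s, approximate LEO speed
kinetic_energy_per_kg = 0.5 * orbital_velocity ** 2    # J/kg, about 30 MJ/kg

propellant_energy_per_kg = 10e6                        # J/kg, rough kerosene/LOX figure

print(f"kinetic energy at orbital speed: {kinetic_energy_per_kg / 1e6:.1f} MJ/kg")
print(f"typical propellant chemical energy: {propellant_energy_per_kg / 1e6:.1f} MJ/kg")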

Since rockets carry the potential for fire or explosive destruction, space capsules generally employ some sort of launch escape system, consisting either of a tower-mounted solid fuel rocket to quickly carry the capsule away from the launch vehicle (employed on Mercury, Apollo, and Soyuz), or else ejection seats (employed on Vostok and Gemini) to carry astronauts out of the capsule and away for individual parachute landing. The escape tower is discarded at some point before the launch is complete, at a point where an abort can be performed using the spacecraft's engines.

Such a system is not always practical for multiple crew member vehicles (particularly spaceplanes), depending on location of egress hatch(es). When the single-hatch Vostok capsule was modified to become the 2 or 3-person Voskhod, the single-cosmonaut ejection seat could not be used, and no escape tower system was added. The two Voskhod flights in 1964 and 1965 avoided launch mishaps. The Space Shuttle carried ejection seats and escape hatches for its pilot and copilot in early flights, but these could not be used for passengers who sat below the flight deck on later flights, and so were discontinued.

There have only been two in-flight launch aborts of a crewed flight. The first occurred on Soyuz 18a on 5 April 1975. The abort occurred after the launch escape system had been jettisoned, when the launch vehicle's spent second stage failed to separate before the third stage ignited. The vehicle strayed off course, and the crew separated the spacecraft and fired its engines to pull it away from the errant rocket. Both cosmonauts landed safely. The second occurred on 11 October 2018 with the launch of Soyuz MS-10. Again, both crew members survived.

In the only use of a launch escape system on a crewed flight, the planned Soyuz T-10a launch on 26 September 1983 was aborted by a launch vehicle fire 90 seconds before liftoff. Both cosmonauts aboard landed safely.

The only crew fatality during launch occurred on 28 January 1986, when the Space Shuttle Challenger broke apart 73 seconds after liftoff, due to failure of a solid rocket booster seal which caused separation of the booster and failure of the external fuel tank, resulting in explosion of the fuel. All seven crew members were killed.

The single pilot of Soyuz 1, Vladimir Komarov, was killed when his capsule's parachutes failed during an emergency landing on 24 April 1967, causing the capsule to crash.

The crew of seven aboard the Space Shuttle Columbia were killed on reentry after completing a successful mission in space on 1 February 2003. A wing leading-edge reinforced carbon-carbon heat shield had been damaged by a piece of frozen external tank foam insulation which broke off and struck the wing during launch. Hot reentry gases entered and destroyed the wing structure, leading to breakup of the orbiter vehicle.

There are two basic choices for an artificial atmosphere: either an Earth-like mixture of oxygen in an inert gas such as nitrogen or helium, or pure oxygen, which can be used at lower than standard atmospheric pressure. A nitrogen-oxygen mixture is used in the International Space Station and Soyuz spacecraft, while low-pressure pure oxygen is commonly used in space suits for extravehicular activity.

Use of a gas mixture carries risk of decompression sickness (commonly known as "the bends") when transitioning to or from the pure oxygen space suit environment. There have also been instances of injury and fatalities caused by suffocation in the presence of too much nitrogen and not enough oxygen.

A pure oxygen atmosphere carries risk of fire. The original design of the Apollo spacecraft used pure oxygen at greater than atmospheric pressure prior to launch. An electrical fire started in the cabin of Apollo 1 during a ground test at Cape Kennedy Air Force Station Launch Complex 34 on 27 January 1967, and spread rapidly. The high pressure (increased even higher by the fire) prevented removal of the plug door hatch cover in time to rescue the crew. All three, Gus Grissom, Ed White, and Roger Chaffee, were killed.[72] This led NASA to use a nitrogen/oxygen atmosphere before launch, and low pressure pure oxygen only in space.

The March 1966 Gemini 8 mission was aborted in orbit when an attitude control system thruster stuck in the on position, sending the craft into a dangerous spin which threatened the lives of Neil Armstrong and David Scott. Armstrong had to shut the control system off and use the reentry control system to stop the spin. The craft made an emergency reentry and the astronauts landed safely. The most probable cause was determined to be an electrical short due to a static electricity discharge, which caused the thruster to remain powered even when switched off. The control system was modified to put each thruster on its own isolated circuit.

The third lunar landing expedition Apollo 13 in April 1970, was aborted and the lives of the crew, James Lovell, Jack Swigert and Fred Haise, were threatened by failure of a cryogenic liquid oxygen tank en route to the Moon. The tank burst when electrical power was applied to internal stirring fans in the tank, causing the immediate loss of all of its contents, and also damaging the second tank, causing the loss of its remaining oxygen in a span of 130 minutes. This in turn caused loss of electrical power provided by fuel cells to the command spacecraft. The crew managed to return to Earth safely by using the lunar landing craft as a "life boat". The tank failure was determined to be caused by two mistakes. The tank's drain fitting had been damaged when it was dropped during factory testing. This necessitated use of its internal heaters to boil out the oxygen after a pre-launch test, which in turn damaged the fan wiring's electrical insulation, because the thermostats on the heaters did not meet the required voltage rating due to a vendor miscommunication.

The crew of Soyuz 11 were killed on 30 June 1971 by a combination of mechanical malfunctions: they were asphyxiated due to cabin decompression following separation of their descent capsule from the service module. A cabin ventilation valve had been jolted open at an altitude of 168 kilometres (551,000 ft) by the stronger-than-expected shock of explosive separation bolts which were designed to fire sequentially, but in fact had fired simultaneously. The loss of pressure became fatal within about 30 seconds.[73]

As of December 2015, 22 crew members have died in accidents aboard spacecraft. Over 100 others have died in accidents during activity directly related to spaceflight or testing.


Darwinism | biology | Britannica.com

Darwinism, theory of the evolutionary mechanism propounded by Charles Darwin as an explanation of organic change. It denotes Darwin's specific view that evolution is driven mainly by natural selection.

Beginning in 1837, Darwin proceeded to work on the now well-understood concept that evolution is essentially brought about by the interplay of three principles: (1) variation, a liberalizing factor, which Darwin did not attempt to explain, present in all forms of life; (2) heredity, the conservative force that transmits similar organic form from one generation to another; and (3) the struggle for existence, which determines the variations that will confer advantages in a given environment, thus altering species through a selective reproductive rate.

On the basis of newer knowledge, neo-Darwinism has superseded the earlier concept and purged it of Darwin's lingering attachment to the Lamarckian theory of inheritance of acquired characters. Present knowledge of the mechanisms of inheritance is such that modern scientists can distinguish more satisfactorily than Darwin could between non-inheritable bodily variation and variation of a genuinely inheritable kind.


Darwinism – New World Encyclopedia

Darwinism is a term that is generally considered synonymous with the theory of natural selection. This theory, which was developed by Charles Darwin, holds that natural selection is the directive or creative force of evolution.

The term "Darwinism" also has been applied to the evolutionary theories of Charles Darwin in general, rather than just the theory of natural selection. It may also refer specifically to the role of Charles Darwin as opposed to others in the history of evolutionary thoughtparticularly contrasting Darwin's results with those of earlier theories, such as Lamarckism, or with more modern versions, such as the modern evolutionary synthesis.

According to Ernst Mayr (1991), how the term "Darwinism" has been and is used depends on who is using it and the time period. On the other hand, Harvard evolutionist Stephen Jay Gould, himself a popular writer on evolution, maintains that although the popular literature often equates Darwinism with evolution itself, the scientific community generally agrees that the term "should be restricted to the worldview encompassed by the theory of natural selection" (Gould 1982). That is, the term should be limited to the philosophical concept of Darwin's theory regarding the mechanism for evolutionary change.

Since the time of the publication of Darwin's Origin of Species (1859), Darwinism has confronted challenges from both the scientific and religious communities. Among persistent scientific challenges are the lack of evidence for natural selection as the causal agent of macroevolutionary change; the issue of whether evidence at the microevolutionary level can be extrapolated to the macroevolutionary level; and the surprisingly rapid rate of speciation and prolonged stasis seen in the fossil record (see macroevolution). For religious adherents, the central role accorded "chance" in the evolution of new designs via natural selection is not proved and runs counter to the concept of a creator God. (See Challenges to Darwinism.)

The theory of natural selection is one of two major evolutionary theories advanced by Darwin, the other being the theory of descent with modification. The theory of descent with modification deals with the pattern of evolution: groups of organisms are related with one another, sharing common ancestors from which they have descended. The theory of natural selection (or "theory of modification through natural selection") deals with the process or mechanism of evolution: how the evolutionary change occurred in order to arrive at the pattern.

Natural selection is the mechanism whereby populations of individuals with favorable traits reproduce more than individuals that lack such beneficial traits, and populations of individuals with deleterious traits reproduce less than individuals without such harmful traits. Over time, this results in a trend toward individuals with traits more conducive to their survival and reproduction. According to this theory, natural selection is the directive or creative force of evolution, creating new species and new designs, rather than just a force for weeding out unfit organisms.

In a modern definition of the term, a Darwinian process requires the following schema: reproduction of an entity, heredity (offspring resemble their parents), variation of heritable traits, and selection (differential survival and reproduction depending on those traits).

If the entity or organism survives to reproduce, the process restarts. Sometimes, in stricter formulations, it is required that variation and selection act on different entities, variation on the replicator (genotype) and selection on the interactor (phenotype).

Darwinism asserts that in any system given these conditions, by whatever means, evolution is likely to occur. That is, over time, the entities will accumulate complex traits that favor their reproduction. This is called Universal Darwinism, a term coined by Richard Dawkins, the author of the 1976 book The Selfish Gene.
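Because the claim is about a process rather than any particular substrate, it can be illustrated with a toy simulation. The Python sketch below is not from the source; the population size, trait values, fitness function, and mutation noise are all invented for illustration. It implements only the bare schema of heredity, variation, and selection, and shows the population's mean trait drifting toward values that favor reproduction.

# Minimal toy sketch of a Darwinian process (all parameters invented for
# illustration): entities reproduce, offspring inherit a trait with small
# random variation, and entities whose trait yields higher "fitness" leave
# more descendants. No foresight is involved, yet favorable traits accumulate.
import random

random.seed(0)

def fitness(trait):
    """Assumed fitness function: traits nearer 1.0 are favoured."""
    return -abs(1.0 - trait)

population = [random.uniform(0.0, 0.2) for _ in range(200)]  # initial trait values

for generation in range(60):
    # Selection: only the fitter half of the population reproduces.
    survivors = sorted(population, key=fitness, reverse=True)[:len(population) // 2]
    # Heredity with variation: each survivor leaves two offspring whose trait
    # is the parent's trait plus small random noise.
    population = [p + random.gauss(0.0, 0.05) for p in survivors for _ in range(2)]

mean_trait = sum(population) / len(population)
print(f"mean trait after 60 generations: {mean_trait:.2f}")  # climbs from ~0.1 to near 1.0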

Some scientists, including Darwin, maintain that natural selection only works on the level of the organism. Others, such as Gould, believe in hierarchical levels of selectionthat natural selection can work both on individuals or groups of individuals, such that some populations or species may have favorable traits that promote their survival and reproduction over other species or populations. Richard Dawkins maintained that natural selection worked on the level of the gene, although this has been generally discredited in scientific circles.

On the microevolutionary level (change within species), there is evidence that natural selection can produce evolutionary change. For example, changes in gene frequencies can be observed in populations of fruit flies exposed to selective pressures in the laboratory environment. Likewise, systematic changes in various phenotypes within a species, such as color changes in moths, can be observed in field studies. However, evidence that natural selection is the directive force of change in terms of the origination of new designs (such as the development of feathers) or major transitions between higher taxa (such as the evolution of land-dwelling vertebrates from fish) is not observable. Evidence for such macroevolutionary change is limited to extrapolation from changes on the microevolutionary level. A number of top evolutionists, including Gould, challenge the validity of making such extrapolations.
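The gene-frequency change referred to here is usually described with the standard one-locus selection recursion from population genetics. The sketch below uses that textbook formula, not anything from this article; the fitness values and starting frequency are assumed. It shows how even a modest reproductive advantage carries an initially rare allele to high frequency over a few hundred generations, which is the microevolutionary pattern seen in laboratory fruit-fly experiments.

# Textbook one-locus, two-allele selection recursion (haploid case for
# simplicity; fitness values and starting frequency are assumed):
#   p' = p * wA / (p * wA + (1 - p) * wa)
def next_freq(p, w_A=1.05, w_a=1.00):
    mean_fitness = p * w_A + (1.0 - p) * w_a
    return p * w_A / mean_fitness

p = 0.01  # initial frequency of the favoured allele A
for generation in range(1, 301):
    p = next_freq(p)
    if generation % 100 == 0:
        print(f"generation {generation}: frequency of A = {p:.3f}")
# With a 5% fitness advantage the allele rises from 1% to well over 99%
# within about 300 generations.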

In Darwin's day, there was no rigid definition of the term "Darwinism," and it was used by proponents and opponents of Darwin's biological theory alike to mean whatever they wanted it to in a larger context. In the nineteenth-century context in which Darwin's Origin of Species was first received, "Darwinism" came to stand for an entire range of evolutionary (and often revolutionary) philosophies about both biology and society.

One of the more prominent approaches was that summed up in the phrase "survival of the fittest" by the philosopher Herbert Spencer. This was later taken to be emblematic of Darwinism, even though Spencer's own understanding of evolution was more Lamarckian than Darwinian, and predated the publication of Darwin's theory.

What we now call "Social Darwinism" was, in its day, synonymous with one use of the word "Darwinism"the application of Darwinian principles of "struggle" to society, usually in support of anti-philanthropic political agendas. Another interpretation, one notably favored by Darwin's cousin Francis Galton, was that Darwinism implied that natural selection was apparently no longer working on "civilized" people, thus it was possible for "inferior" strains of people (who would normally be filtered out of the gene pool) to overwhelm the "superior" strains, and corrective measures would have to be undertakenthe foundation of eugenics.

Many of the ideas called "Darwinism" had only a rough resemblance to the theory of Charles Darwin. For example, Ernst Haeckel developed what was known as Darwinismus in Germany, though it should be noted that his ideas were not centered around natural selection at all.

To distinguish themselves from the very loose meaning of Darwinism prevalent in the nineteenth century, those who advocated evolution by natural selection after the death of Darwin became known as neo-Darwinists. The term "neo-Darwinism" itself was coined by George John Romanes in 1896 to designate the Darwinism proposed by August Weismann and Alfred Russel Wallace, in which the exclusivity of natural selection was promoted and the inheritance of acquired characteristics (Lamarckism) was rejected (Mayr 2001; Gould 2002). At that time, near the end of the nineteenth century, there was a strong debate between the neo-Lamarckians and the neo-Darwinians.

The term neo-Darwinism was not terribly popular in the scientific community until after the development of the modern evolutionary synthesis in the 1930s, when the term became synonymous with the synthesis. The modern meaning of neo-Darwinism is not "genealogically linked" to the earlier definition (Gould 2002).

Some feel that the term "Darwinism" is sometimes used by creationists as a somewhat derogatory term for "evolutionary biology," in that casting evolution as an "ism" (a doctrine or belief) strengthens calls for "equal time" for other beliefs, such as creationism or intelligent design. However, top evolutionary scientists, such as Gould and Mayr, have used the term repeatedly, without any derogatory connotations.

In addition to the difficulty of getting evidence for natural selection being the causal agent of change on macroevolutionary levels, as noted above, there are fundamental challenges to the theory of natural selection itself. These come from both the scientific and religious communities.

Such challenges to the theory of natural selection are not a new development. Unlike the theory of descent with modification, which was accepted by the scientific community during Darwin's time and for which substantial evidence has been marshaled, the theory of natural selection was not widely accepted until the mid-1900s and remains controversial even today.

In some cases, key arguments against natural selection being the main or sole agent of evolutionary change come from evolutionary scientists. One concern, for example, is whether the origin of new designs and evolutionary trends (macroevolution) can be explained adequately as an extrapolation of changes in gene frequencies within populations (microevolution) (Luria, Gould, and Singer 1981). (See macroevolution for an overview of such critiques, including complications relating to the rate of observed macroevolutionary changes.)

Symbiogenesis, the theory that holds that evolutionary change is initiated by a long-term symbiosis of dissimilar organisms, offers a scientific challenge to the source of variation and reduces the primacy of natural selection as the agent of major evolutionary change. Margulis and Sagan (2002) hold that random mutation is greatly overemphasized as the source of hereditary variation in standard Neo-Darwinistic doctrine. Rather, they maintain, the major source of transmitted variation actually comes from the acquisition of genomesin other words, entire sets of genes, in the form of whole organisms, are acquired and incorporated by other organisms. This long-term biological fusion of organisms, beginning as symbiosis, is held to be the agent of species evolution.

Historically, the strongest opposition to Darwinism, in the sense of being a synonym for the theory of natural selection, has come from those advocating religious viewpoints. In essence, the chance component involved in the creation of new designs, which is inherent in the theory of natural selection, runs counter to the concept of a Supreme Being who has designed and created humans and all phyla. Chance (stochastic processes, randomness) is centrally involved in the theory of natural selection. As noted by eminent evolutionist Ernst Mayr (2001, pp. 120, 228, 281), chance plays an important role in two steps. First, the production of genetic variation "is almost exclusively a chance phenomena." Secondly, chance plays an important role even in "the process of the elimination of less fit individuals," and particularly during periods of mass extinction.

This element of chance counters the view that the development of new evolutionary designs, including humans, was a progressive, purposeful creation by a Creator God. Rather than the end result, according to the theory of natural selection, human beings were an accident, the end of a long, chance-filled process involving adaptations to local environments. There is no higher purpose, no progressive development, just materialistic forces at work. The observed harmony in the world becomes an artifact of such adaptations of organisms to each other and to the local environment. Such views are squarely at odds with many religious interpretations.

A key point of contention between the worldviews is, therefore, the issue of variability: its origin and selection. For a Darwinist, random genetic mutation provides a mechanism of introducing novel variability, and natural selection acts on the variability. For those believing in a creator God, the introduced variability is not random, but directed by the Creator, although natural selection may act on the variability, more in the manner of removing unfit organisms than in any creative role. Some role may also be accorded differential selection, such as mass extinctions. Neither of these worldviews (random variation and the purposeless, non-progressive role of natural selection, or purposeful, progressive variation) is conclusively proved or disproved by scientific methodology, and both are theoretically possible.

There are some scientists who feel that the importance accorded to genes in natural selection may be overstated. According to Jonathan Wells, genetic expression in developing embryos is impacted by morphology as well, such as membranes and cytoskeletal structure. DNA is seen as providing the means for coding of the proteins, but not necessarily the development of the embryo, the instructions of which must reside elsewhere. It is possible that the importance of sexual reproduction and genetic recombination in introducing variability also may be understated.

The history of conflict between Darwinism and religion often has been exacerbated by confusion and dogmatism on both sides. Evolutionary arguments often are set up against the straw man of a dogmatic, biblical fundamentalism in which God created each species separately and the earth is only 6,000 years old. Thus, an either-or dichotomy is created, in which one believes either in the theory of natural selection or an earth only thousands of years old. However, young-earth creationism is only a small subset of the diversity of religious belief, and theistic, teleological explanations of the origin of species may be much more sophisticated and aligned with scientific findings. On the other hand, evolutionary adherents have sometimes presented an equally dogmatic front, refusing to acknowledge well thought out challenges to the theory of natural selection, or allowing for the possibility of alternative, theistic presentations.


An Evolution Definition of Darwinism – thoughtco.com

Charles Darwin is known as the "Father of Evolution" for being the first person to publish a theory that not only described evolution as a change in species over time but also proposed a mechanism for how it works (called natural selection). There is arguably no other evolutionary scholar as well known and revered as Darwin. In fact, the term "Darwinism" has come to be synonymous with the Theory of Evolution, but what is really meant when people say the word Darwinism? And more importantly, what does Darwinism NOT mean?

Darwinism, when it was first put into the lexicon by Thomas Huxley in 1860, was only meant to describe the belief that species change over time. In the most basic of terms, Darwinism became synonymous with Charles Darwin's explanation of evolution and, to an extent, his description of natural selection. These ideas, first published in his arguably most famous book On the Origin of Species, were direct and have stood the test of time. So, originally, Darwinism only included the fact that species change over time due to nature selecting the most favorable adaptations within the population. These individuals with the better adaptations lived long enough to reproduce and pass those traits down to the next generation, ensuring the species' survival.

While many scholars insist this should be the extent of the information that the word Darwinism encompasses, it has somewhat evolved itself over time as the Theory of Evolution itself also changed when more data and information became readily available. For instance, Darwin did not know anything about Genetics; it wasn't until after his death that Gregor Mendel's work with pea plants became widely known and its significance recognized. Many other scientists proposed alternative mechanisms for evolution during a period which became known as neo-Darwinism. However, none of these mechanisms held up over time and Charles Darwin's original assertions were restored as the correct and leading Theory of Evolution. Now, the Modern Synthesis of the Evolutionary Theory is sometimes described using the term "Darwinism," but this is somewhat misleading since it includes not only Genetics but also other topics not explored by Darwin, like microevolution via DNA mutations and other molecular biological tenets.

In the United States, Darwinism has taken on a different meaning to the general public. In fact, opponents of the Theory of Evolution have taken the term Darwinism and created a false definition of the word that brings up a negative connotation for many who hear it. Strict Creationists have taken the word hostage and created a new meaning which is often perpetuated by those in the media and others who do not truly understand the real meaning of the word. These anti-evolutionists have taken the word Darwinism to mean not only a change in species over time but have lumped in the origin of life along with it. Darwin did not assert any sort of hypothesis on how life on Earth began in any of his writings and could only describe what he had studied and had evidence to back up. Creationists and other anti-evolutionary parties either misunderstood the term Darwinism or purposefully hijacked it to make it more negative. The term has even been used to describe the origin of the universe by some extremists, which is way beyond the realm of anything Darwin would have made a conjecture on at any time in his life.

In other countries around the world, however, this false definition is not present. In fact, in the United Kingdom where Darwin did most of his work, it is a celebrated and understood term that is commonly used instead of the Theory of Evolution through Natural Selection. There is no ambiguity of the term there and it is used correctly by scientists, the media, and the general public every day.


Social Darwinism – Wikipedia

Social Darwinism is the application of the evolutionary concept of natural selection to human society. The term itself emerged in the 1880s, and it gained widespread currency when used after 1944 by opponents of these ways of thinking. The majority of those who have been categorized as social Darwinists did not identify themselves by such a label.[1]

Scholars debate the extent to which the various social Darwinist ideologies reflect Charles Darwin's own views on human social and economic issues. His writings have passages that can be interpreted as opposing aggressive individualism, while other passages appear to promote it.[2] Darwin's early evolutionary views and his opposition to slavery ran counter to many of the claims that social Darwinists would eventually make about the mental capabilities of the poor and colonial indigenes.[3] After the publication of On the Origin of Species in 1859, one strand of Darwin's followers, led by Sir John Lubbock, argued that natural selection ceased to have any noticeable effect on humans once organised societies had been formed.[4] But some scholars argue that Darwin's view gradually changed and came to incorporate views from other theorists such as Herbert Spencer.[5] Spencer published[6] his Lamarckian evolutionary ideas about society before Darwin first published his hypothesis in 1859, and both Spencer and Darwin promoted their own conceptions of moral values. Spencer supported laissez-faire capitalism on the basis of his Lamarckian belief that struggle for survival spurred self-improvement which could be inherited.[7] An important proponent in Germany was Ernst Haeckel, who popularized Darwin's thought (and his personal interpretation of it) and used it as well to contribute to a new creed, the monist movement.


The term Darwinism was coined by Thomas Henry Huxley in his 1860 review of On the Origin of Species,[8] and by the 1870s it was used to describe a range of concepts of evolution or development, without any specific commitment to Charles Darwin's theory of natural selection.[9]

The first use of the phrase "social Darwinism" was in Joseph Fisher's 1877 article on The History of Landholding in Ireland which was published in the Transactions of the Royal Historical Society.[10] Fisher was commenting on how a system for borrowing livestock which had been called "tenure" had led to the false impression that the early Irish had already evolved or developed land tenure;[11]

These arrangements did not in any way affect that which we understand by the word " tenure", that is, a man's farm, but they related solely to cattle, which we consider a chattel. It has appeared necessary to devote some space to this subject, inasmuch as that usually acute writer Sir Henry Maine has accepted the word " tenure " in its modern interpretation, and has built up a theory under which the Irish chief " developed " into a feudal baron. I can find nothing in the Brehon laws to warrant this theory of social Darwinism, and believe further study will show that the Cain Saerrath and the Cain Aigillue relate solely to what we now call chattels, and did not in any way affect what we now call the freehold, the possession of the land.

Despite the fact that Social Darwinism bears Charles Darwin's name, it is also linked today with others, notably Herbert Spencer, Thomas Malthus, and Francis Galton, the founder of eugenics. In fact, Spencer was not described as a social Darwinist until the 1930s, long after his death.[12] The term social Darwinism first appeared in Europe in 1880, and the journalist Émile Gautier had coined the term with reference to a health conference in Berlin in 1877.[10] Around 1900 it was used by sociologists, some being opposed to the concept.[13] The term was popularized in the United States in 1944 by the American historian Richard Hofstadter, who used it in the ideological war effort against fascism to denote a reactionary creed which promoted competitive strife, racism and chauvinism. Hofstadter later also recognized (what he saw as) the influence of Darwinist and other evolutionary ideas upon those with collectivist views, enough to devise a term for the phenomenon, "Darwinist collectivism".[14] Before Hofstadter's work the use of the term "social Darwinism" in English academic journals was quite rare.[15] In fact,

... there is considerable evidence that the entire concept of "social Darwinism" as we know it today was virtually invented by Richard Hofstadter. Eric Foner, in an introduction to a then-new edition of Hofstadter's book published in the early 1990s, declines to go quite that far. "Hofstadter did not invent the term Social Darwinism", Foner writes, "which originated in Europe in the 1860s and crossed the Atlantic in the early twentieth century. But before he wrote, it was used only on rare occasions; he made it a standard shorthand for a complex of late-nineteenth-century ideas, a familiar part of the lexicon of social thought."

Social Darwinism has many definitions, and some of them are incompatible with each other. As such, social Darwinism has been criticized for being an inconsistent philosophy, which does not lead to any clear political conclusions. For example, The Concise Oxford Dictionary of Politics states:

Part of the difficulty in establishing sensible and consistent usage is that commitment to the biology of natural selection and to 'survival of the fittest' entailed nothing uniform either for sociological method or for political doctrine. A 'social Darwinist' could just as well be a defender of laissez-faire as a defender of state socialism, just as much an imperialist as a domestic eugenist.[17]

The term "Social Darwinism" has rarely been used by advocates of the supposed ideologies or ideas; instead it has almost always been used pejoratively by its opponents.[1] The term draws upon the common meaning of Darwinism, which includes a range of evolutionary views, but in the late 19th century was applied more specifically to natural selection as first advanced by Charles Darwin to explain speciation in populations of organisms. The process includes competition between individuals for limited resources, popularly but inaccurately described by the phrase "survival of the fittest", a term coined by sociologist Herbert Spencer.

Creationists have often maintained that Social Darwinism, leading to policies designed to reward the most competitive, is a logical consequence of "Darwinism" (the theory of natural selection in biology).[18] Biologists and historians have stated that this is a fallacy of appeal to nature and should not be taken to imply that this phenomenon ought to be used as a moral guide in human society.[19] While there are historical links between the popularization of Darwin's theory and forms of social Darwinism, social Darwinism is not a necessary consequence of the principles of biological evolution.

While the term has been applied to the claim that Darwin's theory of evolution by natural selection can be used to understand the social endurance of a nation or country, Social Darwinism commonly refers to ideas that predate Darwin's publication of On the Origin of Species. Others whose ideas are given the label include the 18th century clergyman Thomas Malthus, and Darwin's cousin Francis Galton who founded eugenics towards the end of the 19th century.

The expansion of the British Empire fitted in with the broader notion of social Darwinism used from the 1870s onwards to account for the remarkable and universal phenomenon of "the Anglo-Saxon overflowing his boundaries", as phrased by the late-Victorian sociologist Benjamin Kidd in Social Evolution, published in 1894.[20] The concept also proved useful to justify what was seen by some as the inevitable extermination of "the weaker races who disappear before the stronger" not so much "through the effects of our vices upon them" as "what may be called the virtues of our civilisation."

Herbert Spencer's ideas, like those of evolutionary progressivism, stemmed from his reading of Thomas Malthus, and his later theories were influenced by those of Darwin. However, Spencer's major work, Progress: Its Law and Cause (1857), was released two years before the publication of Darwin's On the Origin of Species, and First Principles was printed in 1860.

In The Social Organism (1860), Spencer compares society to a living organism and argues that, just as biological organisms evolve through natural selection, society evolves and increases in complexity through analogous processes.[21]

In many ways, Spencer's theory of cosmic evolution has much more in common with the works of Lamarck and Auguste Comte's positivism than with Darwin's.

Jeff Riggenbach argues that Spencer's view was that culture and education made a sort of Lamarckism possible[16] and notes that Herbert Spencer was a proponent of private charity.[16] However, the legacy of his social Darwinism was less than charitable.[22]

Spencer's work also served to renew interest in the work of Malthus. While Malthus's work does not itself qualify as social Darwinism, his 1798 work An Essay on the Principle of Population was incredibly popular and widely read by social Darwinists. In that book, for example, the author argued that as an increasing population would normally outgrow its food supply, the result would be the starvation of the weakest and a Malthusian catastrophe.
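Malthus's argument rested on the contrast between geometric population growth and arithmetic growth of the food supply. The short sketch below uses assumed illustrative figures, not numbers drawn from Malthus's text, to show how quickly the two series diverge under those assumptions.

# Illustrative sketch of the geometric-versus-arithmetic argument, using
# assumed figures: population doubles every 25-year period, while the food
# supply grows by one fixed increment per period.
population = 1.0    # relative population, starting value
food_supply = 1.0   # relative food supply, starting value

for period in range(1, 9):   # eight 25-year periods, i.e. 200 years
    population *= 2          # geometric (doubling) growth
    food_supply += 1         # arithmetic (constant-increment) growth
    print(f"after {period * 25:3d} years: population {population:5.0f}, "
          f"food supply {food_supply:3.0f}")
# The gap widens without bound, which is the basis of the projected
# "Malthusian catastrophe" in which the weakest starve.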

According to Michael Ruse, Darwin read Malthus' famous An Essay on the Principle of Population in 1838, four years after Malthus' death. Malthus himself anticipated the social Darwinists in suggesting that charity could exacerbate social problems.

Another of these social interpretations of Darwin's biological views, later known as eugenics, was put forth by Darwin's cousin, Francis Galton, in 1865 and 1869. Galton argued that just as physical traits were clearly inherited among generations of people, the same could be said for mental qualities (genius and talent). Galton argued that social morals needed to change so that heredity was a conscious decision in order to avoid both the over-breeding by less fit members of society and the under-breeding of the more fit ones.

In Galton's view, social institutions such as welfare and insane asylums were allowing inferior humans to survive and reproduce at levels faster than the more "superior" humans in respectable society, and if corrections were not soon taken, society would be awash with "inferiors". Darwin read his cousin's work with interest, and devoted sections of Descent of Man to discussion of Galton's theories. Neither Galton nor Darwin, though, advocated any eugenic policies restricting reproduction, due to their Whiggish distrust of government.[23]

Friedrich Nietzsche's philosophy addressed the question of artificial selection, yet Nietzsche's principles did not concur with Darwinian theories of natural selection. Nietzsche's point of view on sickness and health, in particular, opposed him to the concept of biological adaptation as forged by Spencer's "fitness". Nietzsche criticized Haeckel, Spencer, and Darwin, sometimes under the same banner by maintaining that in specific cases, sickness was necessary and even helpful.[24] Thus, he wrote:

Wherever progress is to ensue, deviating natures are of greatest importance. Every progress of the whole must be preceded by a partial weakening. The strongest natures retain the type, the weaker ones help to advance it. Something similar also happens in the individual. There is rarely a degeneration, a truncation, or even a vice or any physical or moral loss without an advantage somewhere else. In a warlike and restless clan, for example, the sicklier man may have occasion to be alone, and may therefore become quieter and wiser; the one-eyed man will have one eye the stronger; the blind man will see deeper inwardly, and certainly hear better. To this extent, the famous theory of the survival of the fittest does not seem to me to be the only viewpoint from which to explain the progress of strengthening of a man or of a race.[25]

Ernst Haeckel's recapitulation theory was not Darwinism, but rather attempted to combine the ideas of Goethe, Lamarck and Darwin. It was adopted by emerging social sciences to support the concept that non-European societies were "primitive", in an early stage of development towards the European ideal, but since then it has been heavily refuted on many fronts.[26] Haeckel's works led to the formation of the Monist League in 1904 with many prominent citizens among its members, including the Nobel Prize winner Wilhelm Ostwald.

The simpler aspects of social Darwinism followed the earlier Malthusian ideas that humans, especially males, require competition in their lives in order to survive in the future. Further, the poor should have to provide for themselves and not be given any aid. However, amidst this climate, most social Darwinists of the early twentieth century actually supported better working conditions and salaries. Such measures would grant the poor a better chance to provide for themselves yet still distinguish those who are capable of succeeding from those who are poor out of laziness, weakness, or inferiority.

"Social Darwinism" was first described by Eduard Oscar Schmidt of the University of Strasbourg, reporting at a scientific and medical conference held in Munich in 1877. He noted how socialists, although opponents of Darwin's theory, used it to add force to their political arguments. Schmidt's essay first appeared in English in Popular Science in March 1879.[27] There followed an anarchist tract published in Paris in 1880 entitled "Le darwinisme social" by mile Gautier. However, the use of the term was very rareat least in the English-speaking world (Hodgson, 2004)[28]until the American historian Richard Hofstadter published his influential Social Darwinism in American Thought (1944) during World War II.

Hypotheses of social evolution and cultural evolution were common in Europe. The Enlightenment thinkers who preceded Darwin, such as Hegel, often argued that societies progressed through stages of increasing development. Earlier thinkers also emphasized conflict as an inherent feature of social life. Thomas Hobbes's 17th century portrayal of the state of nature seems analogous to the competition for natural resources described by Darwin. Social Darwinism is distinct from other theories of social change because of the way it draws Darwin's distinctive ideas from the field of biology into social studies.

Darwin, unlike Hobbes, believed that this struggle for natural resources allowed individuals with certain physical and mental traits to succeed more frequently than others, and that these traits accumulated in the population over time, which under certain conditions could lead to the descendants being so different that they would be defined as a new species.

However, Darwin felt that "social instincts" such as "sympathy" and "moral sentiments" also evolved through natural selection, and that these resulted in the strengthening of societies in which they occurred, so much so that he wrote about it in Descent of Man:

The following proposition seems to me in a high degree probablenamely, that any animal whatever, endowed with well-marked social instincts, the parental and filial affections being here included, would inevitably acquire a moral sense or conscience, as soon as its intellectual powers had become as well, or nearly as well developed, as in man. For, firstly, the social instincts lead an animal to take pleasure in the society of its fellows, to feel a certain amount of sympathy with them, and to perform various services for them.[29]

Spencer proved to be a popular figure in the 1880s primarily because his application of evolution to areas of human endeavor promoted an optimistic view of the future as inevitably becoming better. In the United States, writers and thinkers of the Gilded Age such as Edward L. Youmans, William Graham Sumner, John Fiske, John W. Burgess, and others developed theories of social evolution as a result of their exposure to the works of Darwin and Spencer.

In 1883, Sumner published a highly influential pamphlet entitled "What Social Classes Owe to Each Other", in which he insisted that the social classes owe each other nothing, synthesizing Darwin's findings with free-enterprise capitalism for his justification.[citation needed] According to Sumner, providing assistance to those unequipped or under-equipped to compete for resources will lead to a country in which the weak and inferior are encouraged to breed more like them, eventually dragging the country down. Sumner also believed that the best equipped to win the struggle for existence was the American businessman, and concluded that taxes and regulations serve as dangers to his survival. This pamphlet makes no mention of Darwinism, and only refers to Darwin in a statement on the meaning of liberty, that "There never has been any man, from the primitive barbarian up to a Humboldt or a Darwin, who could do as he had a mind to."[30]

Sumner never fully embraced Darwinian ideas, and some contemporary historians do not believe that Sumner ever actually believed in social Darwinism.[31] The great majority of American businessmen rejected the anti-philanthropic implications of the theory. Instead they gave millions to build schools, colleges, hospitals, art institutes, parks and many other institutions. Andrew Carnegie, who admired Spencer, was the leading philanthropist in the world (1890–1920), and a major leader against imperialism and warfare.[32]

H. G. Wells was heavily influenced by Darwinist thoughts, and novelist Jack London wrote stories of survival that incorporated his views on social Darwinism.[33] Film director Stanley Kubrick has been described as having held social Darwinist opinions.[34]

Social Darwinism has influenced political, public health and social movements in Japan since the late 19th and early 20th centuries. Social Darwinism was originally brought to Japan through the works of Francis Galton and Ernst Haeckel, as well as Lamarckian eugenic studies from the United States, Britain and France of the late 19th and early 20th centuries.[35] Eugenics as a science was hotly debated at the beginning of the 20th century in Jinsei-Der Mensch, the first eugenics journal in the empire. As Japan sought to close ranks with the West, this practice was adopted wholesale along with colonialism and its justifications.

Social Darwinism was formally introduced to China through the translation by Yan Fu of Huxley's Evolution and Ethics, in the course of an extensive series of translations of influential Western thought.[36] Yan's translation strongly impacted Chinese scholars because he added national elements not found in the original. Yan Fu criticized Huxley from the perspective of Spencerian social Darwinism in his own annotations to the translation.[37] He understood Spencer's sociology as "not merely analytical and descriptive, but prescriptive as well", and saw Spencer building on Darwin, whom Yan summarized thus:

By the 1920s, social Darwinism found expression in the promotion of eugenics by the Chinese sociologist Pan Guangdan. When Chiang Kai-shek started the New Life movement in 1934, he

Social evolution theories in Germany gained large popularity in the 1860s and at first had a strong antiestablishment connotation. Social Darwinism allowed people to counter the connection of Thron und Altar, the intertwined establishment of clergy and nobility, and also provided the idea of progressive change and evolution of society as a whole. Ernst Haeckel propagated Darwinism both as a part of natural history and as a suitable base for a modern Weltanschauung, a world view based on scientific reasoning, in his Monist League. Friedrich von Hellwald had a strong role in popularizing it in Austria. Darwin's work served as a catalyst to popularize evolutionary thinking.[40]

A sort of aristocratic turn, the use of the struggle for life as a basis of social Darwinism sensu stricto, came up after 1900 with Alexander Tille's 1895 work Entwicklungsethik (Ethics of Evolution), which called for a move from Darwin to Nietzsche. Further interpretations moved toward ideologies propagating a racist and hierarchical society and provided ground for the later radical versions of social Darwinism.[40]

Social Darwinism is often cited as an ideological justification for much of 18th/19th century European enslavement and colonization of Third World countries;[41] it has often even found its way into the intellectual foundations of public education in neo-colonized countries.[42]
