The Australian Libertarian Society presents: The 7th …

Join us in Sydney from 23-26 May 2019 for the biggest and best pro-liberty event in the Asia Pacific Region, hosted by the Australian Libertarian Society (ALS) and the Australian Taxpayers Alliance (ATA).

We are expecting well over 400 activists, thought leaders, business representatives, and political influencers, who will hear from some of the best speakers not just from Australia but from around the world.

The highlight of the conference will be the presentation of the Annual Liberty Awards at the Gala Dinner on Saturday night.

In addition to the two hosting organisations, the Friedman Conference is supported by a range of sponsors and contributors, including the Institute of Public Affairs and Connor Court Publishing.

Watch this event page for regular updates and speaker announcements. If you have any suggestions or questions then please contact ATA Executive Director Tim Andrews, ALS President John Humphreys, and/or ALS Executive Director Stuart Hatch.

We look forward to seeing you next year in Sydney!


Human or Superhuman? – National Catholic Register

Church Teaching on Genetic Engineering: May 6 issue column.

Human genetic engineering has always been the stuff of science-fiction novels and blockbuster Hollywood films. Except that it is no longer confined to books and movies.

Scientists and doctors are already attempting to genetically alter human beings and our cells. And whether you realize it or not, you and your children are being bombarded in popular media with mixed messages on the ethics surrounding human genetic engineering.

So what does the Church say about the genetic engineering of humans?

The majority of Catholics would likely say that the Church opposes any genetic modification in humans. But that is not what our Church teaches. Actually, the Church does support human genetic engineering; it just has to be the right kind.

Surprised? Most Catholics probably are.

To understand Catholic Church teaching on genetic engineering, it is critical to understand an important distinction under the umbrella of genetic engineering: the difference between therapy and enhancement. It is a distinction that every Catholic should learn to identify, both in the real world and in fiction. Gene therapy and genetic enhancement are technically both genetic engineering, but there are important moral differences.

For decades, researchers have worked toward using genetic modification called gene therapy to cure devastating genetic diseases. Gene therapy delivers a copy of a normal gene into the cells of a patient in an attempt to correct a defective gene. This genetic alteration would then cure or slow the progress of that disease. In many cases, the added gene would produce a protein that is missing or not functioning in a patient because of a genetic mutation.

One of the best examples where researchers hope gene therapy will be able to treat genetic disease is Duchenne Muscular Dystrophy or DMD. DMD is an inherited disorder where a patient cannot make dystrophin, a protein that supports muscle tissue. DMD strikes in early childhood and slowly degrades all muscle tissue, including heart muscle. The average life expectancy of someone with DMD is only 30 years.

Over the last few years, researchers have been studying mice with DMD. They have been successful in inserting the normal dystrophin gene into the DNA of the mice. These genetically engineered mice were then able to produce eight times more dystrophin than mice with DMD. More dystrophin means more muscle, which, in the case of a devastating muscle-wasting disease like DMD, would be a lifesaver.

Almost immediately after the announcement of this breakthrough, the researchers were inundated with calls from bodybuilders and athletes who wanted to be genetically modified to make more muscle.

The callers essentially wanted to take the genetic engineering designed to treat a fatal disease and apply it to their already healthy bodies.

Genetically engineering a normal man who wants more muscle to improve his athletic ability is no longer gene therapy. Instead, it is genetic enhancement.

Genetic enhancement would take an otherwise healthy person and genetically modify him to be more than human, not just in strength, but also in intelligence, beauty or any other desirable trait.

So why is the distinction between gene therapy and genetic enhancement important? The Catholic Church is clear that gene therapy is good, while genetic enhancement is morally wrong.

Why? Because gene therapy seeks to return a patient to normal human functioning. Genetic enhancement, on the other hand, assumes that man's normal state is flawed and lacking, that man's natural biology needs enhancing. Genetic enhancement would intentionally and fundamentally alter a human being in ways not possible by nature, which means in ways God never intended.

The goal of medical intervention must always be the natural development of a human being, respecting the patient's inherent dignity and worth. Enhancement destroys that inherent dignity by completely rejecting mankind's natural biology. From the Charter for Health Care Workers by the Pontifical Council for Pastoral Assistance:

In moral evaluation, a distinction must be made between strictly therapeutic manipulation, which aims to cure illnesses caused by genetic or chromosome anomalies (genetic therapy), and manipulation, altering the human genetic patrimony. A curative intervention, which is also called genetic surgery, will be considered desirable in principle, provided its purpose is the real promotion of the personal well-being of the individual, without damaging his integrity or worsening his condition of life.

On the other hand, interventions which are not directly curative, the purpose of which is the production of human beings selected according to sex or other predetermined qualities, which change the genotype of the individual and of the human species, are contrary to the personal dignity of the human being, to his integrity and to his identity. Therefore, they can be in no way justified on the pretext that they will produce some beneficial results for humanity in the future. No social or scientific usefulness and no ideological purpose could ever justify an intervention on the human genome unless it be therapeutic; that is, its finality must be the natural development of the human being.

So genetic engineering to cure or treat disease or disability is good.

Genetic engineering to change the fundamental nature of mankind, to take an otherwise healthy person and engineer him to be more than human is bad.

There is much misinformation surrounding the Catholic Church's teaching on human genetic engineering. One example is in a piece in The New York Times by David Frum. Frum states that John Paul II supported genetic enhancement and, therefore, the Church does as well. Frum performs a sleight of hand, whether intentional or not. See if you can spot it:

The anti-abortion instincts of many conservatives naturally incline them to look at such [genetic engineering] techniques with suspicion and, indeed, it is certainly easy to imagine how they might be abused. Yet in an important address delivered as long ago as 1983, Pope John Paul II argued that genetic enhancement was permissible, indeed laudable, even from a Catholic point of view, as long as it met certain basic moral rules. Among those rules: that these therapies be available to all.

Frum discusses enhancement and therapy as if they are the same. He equates them, using the words "therapies" and "enhancement" interchangeably. Because John Paul II praised gene therapy, the assumption was that he must laud genetic enhancement as well. This confusion is common because, many argue, there is no technical difference between therapy and enhancement, so lumping them together is acceptable.

Catholics must not fall into this trap. Philosophically, gene therapy and genetic enhancement are different. One seeks to return normal functioning; the other seeks to take normal functioning and alter it to be abnormal.

There are practical differences between therapy and enhancement as well. Genetic engineering has already had unintended consequences and unforeseen side effects. Gene-therapy trials to cure disease in humans have been going on for decades. All has not gone as planned. Some patients have developed cancer as a result of these attempts at genetically altering their cells.

In 1999, a boy named Jesse Gelsinger was injected with a virus designed to deliver a gene to treat a genetic liver disease. Jesse could have continued with his existing treatment regimen of medication, but he wanted to help others with the same disorder, so he enrolled in the trial. Tragically, Jesse died four days later from the gene therapy he received.

In 2007, 36-year-old mother Jolee Mohr died while participating in a gene-therapy trial. She had rheumatoid arthritis, and just after the gene therapy (also using a virus for delivery) was injected into her knee, she developed a sudden infection that caused organ failure. An investigation concluded that her death was likely not a direct result of the gene therapy, but some experts think that, with something as treatable as rheumatoid arthritis, she should never have been entered into such a trial. They argued that, because of the risks, gene therapy should only be used for treating life-threatening illness.

In other words, genetic engineering should only be tried in cases where the benefits will outweigh the risks, as in the treatment of life-threatening conditions. Currently, gene therapy is being undertaken because the risk of the genetic engineering is outweighed by the devastation of the disease it is attempting to cure. With the risks inherent in genetic modification, it should never be attempted on an otherwise healthy person.

You may be thinking that such risky enhancement experiments would never happen. Scientists and doctors would never attempt genetic modifications in healthy humans; human enhancements only exist in science fiction and will stay there. Except science and academia are already looking into it.

The National Institutes of Health (NIH) has awarded Maxwell Mehlman, director of the Law-Medicine Center at Case Western Reserve University School of Law, $773,000 to develop standards for tests on human subjects in genetic-enhancement research, research that would take otherwise normal humans and make them smarter, stronger or better-looking. If the existing human-trial standards cannot meet the ethical conditions needed for genetic-enhancement research, Mehlman has been asked to recommend changes.

In a recent paper in the journal Ethics, Policy & Environment, S. Matthew Liao, a professor of philosophy and bioethics at New York University, explored ways humanity can change its nature to combat climate change. One of the suggestions Liao discusses is to genetically engineer human eyes to be like cat eyes so we can all see in the dark. This would reduce the need for lighting and reduce energy usage. Liao also discusses genetically modifying our offspring to be smaller so they eat less and use fewer resources.

Of course, Liao insists these are just discussions of possibilities, but what begins as discussions among academics often becomes common among the masses.

Once gene therapy has been perfected and becomes a mainstream treatment for genetic disease, the cries for genetic enhancement will be deafening. The masses will scream that they can do with their bodies as they wish, and they wish to no longer be simply human. They wish to be superhuman.

And with conscience clauses for medical professionals under attack, doctors and nurses may be unable to morally object to genetically altering their perfectly healthy patient or a parent's perfectly healthy child.

It is important for Catholics to not turn their backs on technical advancements in biotechnology simply because the advancements are complex.

We can still influence the public consciousness when it comes to human genetic engineering. We are obliged to loudly draw the line between therapy and enhancement; otherwise, society, like Frum, will confuse the two.

It is not too late to make sure medically relevant genetic engineering does not turn into engineering that forever changes the nature of man.

Rebecca Taylor is a clinical laboratory specialist in molecular biology. She writes about bioethics on her blog, Mary Meets Dolly.


Tor Browser 7.5.5 Download – TechSpot

Tor is a network of virtual tunnels that allows people and groups to improve their privacy and security on the Internet. It also enables software developers to create new communication tools with built-in privacy features. Tor provides the foundation for a range of applications that allow organizations and individuals to share information over public networks without compromising their privacy.
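In practice, applications reach these virtual tunnels through a local SOCKS5 proxy run by the Tor client (port 9050 by default; the Tor Browser bundle listens on 9150 instead). As a minimal illustrative sketch, not an official Tor example, a program can build a proxy configuration that routes its HTTP(S) traffic through Tor like this:

```python
# Sketch: pointing HTTP(S) traffic at a locally running Tor client.
# Assumes Tor's default SOCKS5 port (9050); Tor Browser uses 9150 instead.

TOR_SOCKS_PORT = 9050

def tor_proxies(port: int = TOR_SOCKS_PORT) -> dict:
    """Build a proxy mapping that sends traffic through Tor's SOCKS5 proxy.

    The 'socks5h' scheme asks the proxy to resolve hostnames as well, so DNS
    lookups also go through Tor instead of leaking to the local resolver.
    """
    proxy = f"socks5h://127.0.0.1:{port}"
    return {"http": proxy, "https": proxy}

if __name__ == "__main__":
    proxies = tor_proxies()
    print(proxies["https"])  # socks5h://127.0.0.1:9050
    # With a Tor client running and the third-party `requests[socks]` package
    # installed, one could then fetch a page through Tor:
    #   import requests
    #   requests.get("https://check.torproject.org/", proxies=proxies)
```

The `socks5h` variant (rather than plain `socks5`) matters for anonymity: it delegates DNS resolution to the proxy, which is what keeps lookups inside the Tor network.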

Individuals use Tor to keep websites from tracking them and their family members, or to connect to news sites, instant messaging services, or the like when these are blocked by their local Internet providers. Tor's hidden services let users publish web sites and other services without needing to reveal the location of the site. Individuals also use Tor for socially sensitive communication: chat rooms and web forums for rape and abuse survivors, or people with illnesses.

Journalists use Tor to communicate more safely with whistleblowers and dissidents. Non-governmental organizations (NGOs) use Tor to allow their workers to connect to their home website while they're in a foreign country, without notifying everybody nearby that they're working with that organization.

Groups such as Indymedia recommend Tor for safeguarding their members' online privacy and security. Activist groups like the Electronic Frontier Foundation (EFF) recommend Tor as a mechanism for maintaining civil liberties online. Corporations use Tor as a safe way to conduct competitive analysis, and to protect sensitive procurement patterns from eavesdroppers. They also use it to replace traditional VPNs, which reveal the exact amount and timing of communication. Which locations have employees working late? Which locations have employees consulting job-hunting websites? Which research divisions are communicating with the company's patent lawyers?

A branch of the U.S. Navy uses Tor for open source intelligence gathering, and one of its teams used Tor while deployed in the Middle East recently. Law enforcement uses Tor for visiting or surveilling web sites without leaving government IP addresses in their web logs, and for security during sting operations.

What's New:

All platforms

The Tor Browser Team is proud to announce the first stable release in the 7.5 series. This release is available from the Tor Browser Project page and also from our distribution directory. This release features important security updates to Firefox.

Apart from the usual Firefox security updates it contains some notable improvements compared to the 7.0 series. Here are the highlights:

We redesigned parts of the Tor Browser user interface. One of the major improvements for our users is our new Tor Launcher experience. This work is based on the findings published in 'A Usability Evaluation of Tor Launcher', a paper by Linda Lee et al. In our own work we iterated on the redesign proposed by the research, improving it even further. Here are the main changes we would like to highlight:

Welcome Screen

Our old screen presented users with far too much information, leading many of them to spend a long time confused about what to do. Some users in the paper's experiment spent up to 40 minutes confused about what they needed to be doing here. Besides simplifying the screen and the message to make it easier for users to know whether they need to configure anything, we also did a 'brand refresh', bringing our logo to the launcher.

Censorship circumvention configuration

This is one of the most important steps for a user who is trying to connect to Tor while their network is censoring Tor. We also worked hard to make sure the UI text makes it easy for the user to understand what a bridge is for and how to configure the browser to use one. Another update was a little tip we added to the drop-down menu suggesting which bridge to use in countries that have very sophisticated censorship methods.
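For readers curious what such a bridge configuration amounts to under the hood: bridge use is ultimately controlled by a few lines in Tor's torrc configuration file. A rough sketch follows; the address, fingerprint, and plugin path below are hypothetical placeholders, not a real bridge:

```
# Tell Tor to reach the network through bridges instead of public relays.
UseBridges 1

# Pluggable transport that disguises Tor traffic (binary path varies by system).
ClientTransportPlugin obfs4 exec /usr/bin/obfs4proxy

# A bridge line: transport, address:port, identity fingerprint (placeholders).
Bridge obfs4 192.0.2.10:443 0123456789ABCDEF0123456789ABCDEF01234567
```

The drop-down menu described above fills in equivalent settings for the user, which is precisely the confusion the redesign set out to remove.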

Proxy help information

The proxy settings in our Tor Launcher configuration wizard are an important feature for users who are on a network that demands such configuration. But they can also lead to a lot of confusion if the user has no idea what a proxy is. Since it is a very important feature for users, we decided to keep it in the main configuration screen and introduced a help prompt with an explanation of when someone would need such a configuration.

As part of our work with the UX team, we will also be coordinating user testing of this new UI to continue iterating and make sure we are always improving our users' experience. We are also planning a series of improvements, not only for the Tor Launcher flow but for the whole browser experience (once you are connected to Tor), including a new user onboarding flow. And last but not least, we are streamlining both our mobile and desktop experience: Tor Browser 7.5 adapted the security slider design we did for mobile, bringing the improved user experience to the desktop as well.



Rationalism, Continental | Internet Encyclopedia of Philosophy

Continental rationalism is a retrospective category used to group together certain philosophers working in continental Europe in the 17th and 18th centuries, in particular, Descartes, Spinoza, and Leibniz, especially as they can be regarded in contrast with representatives of British empiricism, most notably, Locke, Berkeley, and Hume. Whereas the British empiricists held that all knowledge has its origin in, and is limited by, experience, the Continental rationalists thought that knowledge has its foundation in the scrutiny and orderly deployment of ideas and principles proper to the mind itself. The rationalists did not spurn experience as is sometimes mistakenly alleged; they were thoroughly immersed in the rapid developments of the new science, and in some cases led those developments. They held, however, that experience alone, while useful in practical matters, provides an inadequate foundation for genuine knowledge.

The fact that Continental rationalism and British empiricism are retrospectively applied terms does not mean that the distinction they signify is anachronistic. Leibniz's New Essays on Human Understanding, for instance, outlines stark contrasts between his own way of thinking and that of Locke, which track many features of the rationalist/empiricist distinction as it tends to be applied in retrospect. There was no rationalist creed or manifesto to which Descartes, Spinoza, and Leibniz all subscribed (nor, for that matter, was there an empiricist one). Nevertheless, with due caution, it is possible to use the Continental rationalism category (and its empiricist counterpart) to highlight significant points of convergence in the philosophies of Descartes, Spinoza, and Leibniz, inter alia. These include: (1) a doctrine of innate ideas; (2) the application of mathematical method to philosophy; and (3) the use of a priori principles in the construction of substance-based metaphysical systems.

According to the Historisches Wörterbuch der Philosophie, the word rationaliste appears in 16th-century France, as early as 1539, in opposition to empirique. In his New Organon, first published in 1620 (in Latin), Francis Bacon juxtaposes rationalism and empiricism in memorable terms:

Those who have treated of the sciences have been either empiricists [Empirici] or dogmatists [Dogmatici]. Empiricists [Empirici], like ants, simply accumulate and use; Rationalists [Rationales], like spiders, spin webs from themselves; the way of the bee is in between: it takes material from the flowers of the garden and the field; but it has the ability to convert and digest them. (The New Organon, p. 79; Spedding, 1, 201)

Bacon's association of rationalists with dogmatists in this passage foreshadows Kant's use of the term dogmatisch in reference, especially, to the Wolffian brand of rationalist philosophy prevalent in 18th-century Germany. Nevertheless, Bacon's use of rationales does not refer to Continental rationalism, which developed only after the New Organon, but rather to the Scholastic philosophy that dominated the medieval period. Moreover, while Bacon is, in retrospect, often considered the father of modern empiricism, the above-quoted passage shows him no friendlier to the empirici than to the rationales. Thus, Bacon's juxtaposition of rationalism and empiricism should not be confused with the distinction as it develops over the course of the 17th and 18th centuries, although his imagery is certainly suggestive.

The distinction appears in an influential form as the backdrop to Kants critical philosophy (which is often loosely understood as a kind of synthesis of certain aspects of Continental rationalism and British empiricism) at the end of the 18th century. However, it was not until the time of Hegel in the first half of the 19th century that the terms rationalism and empiricism were applied to separating the figures of the 17th and 18th centuries into contrasting epistemological camps in the fashion with which we are familiar today. In his Lectures on the History of Philosophy, Hegel describes an opposition between a priori thought, on the one hand, according to which the determinations which should be valid for thought should be taken from thought itself, and, on the other hand, the determination that we must begin and end and think, etc., from experience. He describes this as the opposition between Rationalismus and Empirismus (Werke 20, 121).

Perhaps the best recognized and most commonly made distinction between rationalists and empiricists concerns the question of the source of ideas. Whereas rationalists tend to think (with some exceptions discussed below) that some ideas, at least, such as the idea of God, are innate, empiricists hold that all ideas come from experience. Although the rationalists tend to be remembered for their positive doctrine concerning innate ideas, their assertions are matched by a rejection of the notion that all ideas can be accounted for on the basis of experience alone. In some Continental rationalists, especially in Spinoza, the negative doctrine is more apparent than the positive. The distinction is worth bearing in mind, in order to avoid the very false impression that the rationalists held to innate ideas because the empiricist alternative had not come along yet. (In general, the British empiricists came after the rationalists.) The Aristotelian doctrine, nihil in intellectu nisi prius in sensu (nothing in the intellect unless first in the senses), had been dominant for centuries, and it was in reaction against this that the rationalists revived in modified form the contrasting Platonic doctrine of innate ideas.

Descartes distinguishes between three kinds of ideas: adventitious (adventitiae), factitious (factae), and innate (innatae). As an example of an adventitious idea, Descartes gives the common idea of the sun (yellow, bright, round) as it is perceived through the senses. As an example of a factitious idea, Descartes cites the idea of the sun constructed via astronomical reasoning (vast, gaseous body). According to Descartes, all ideas which represent true, immutable, and eternal essences are innate. Innate ideas, for Descartes, include the idea of God, the mind, and mathematical truths, such as the fact that it pertains to the nature of a triangle that its three angles equal two right angles.

By conceiving some ideas as innate, Descartes does not mean that children are born with fully actualized conceptions of, for example, triangles and their properties. This is a common misconception of the rationalist doctrine of innate ideas. Descartes strives to correct it in Comments on a Certain Broadsheet, where he compares the innateness of ideas in the mind to the tendency which some babies are born with to contract certain diseases: "it is not so much that the babies of such families suffer from these diseases in their mother's womb, but simply that they are born with a certain faculty or tendency to contract them" (CSM I, 304). In other words, innate ideas exist in the mind potentially, as tendencies; they are then actualized by means of active thought under certain circumstances, such as seeing a triangular figure.

At various points, Descartes defends his doctrine of innate ideas against philosophers (Hobbes, Gassendi, and Regius, inter alia) who hold that all ideas enter the mind through the senses, and that there are no ideas apart from images. Descartes is relatively consistent on his reasons for thinking that some ideas, at least, must be innate. His principal line of argument proceeds by showing that there are certain ideas, for example, the idea of a triangle, that cannot be either adventitious or factitious; since ideas are either adventitious, factitious, or innate, by process of elimination, such ideas must be innate.

Take Descartes' favorite example of the idea of a triangle. The argument that the idea of a triangle cannot be adventitious proceeds roughly as follows. A triangle is composed of straight lines. However, straight lines never enter our mind via the senses, since when we examine straight lines under a magnifying lens, they turn out to be wavy or irregular in some way. Since we cannot derive the idea of straight lines from the senses, we cannot derive the idea of a true triangle, which is made up of straight lines, through the senses. Sometimes Descartes makes the point in slightly different terms by insisting that there is no similarity between the corporeal motions of the sense organs and the ideas formed in the mind on the occasion of those motions (CSM I, 304; CSMK III, 187). One such dissimilarity, which is particularly striking, is the contrast between the particularity of all corporeal motions and the universality that pure ideas can attain when conjoined to form necessary truths. Descartes makes this point in clear terms to Regius:

I would like our author to tell me what the corporeal motion is that is capable of forming some common notion to the effect that things which are equal to a third thing are equal to each other, or any other he cares to take. For all such motions are particular, whereas the common notions are universal and bear no affinity with, or relation to, the motions. (CSM I, 304-5)

Next, Descartes has to show that the idea of a triangle is not factitious. This is where the doctrine of true and immutable natures comes in. For Descartes, if, for example, the idea that the three angles of a triangle are equal to two right angles were his own invention, it would be mutable, like the idea of a gold mountain, which can be changed at whim into the idea of a silver mountain. Instead, when Descartes thinks about his idea of a triangle, he is able to discover eternal properties of it that are not mutable in this way; hence, they are not invented (CSMK III, 184).

Since, therefore, the triangle can be neither adventitious nor factitious, it must be innate; that is to say, the mind has an innate tendency or power to form this idea from its own purely intellectual resources when prompted to do so.

Descartes' insistence that there is no similarity between the corporeal motions of our sense organs and the ideas formed in the mind on the occasion of those motions raises a difficulty for understanding how any ideas could be adventitious. Since none of our ideas have any similarity to the corporeal motions of the sense organs, not even the idea of motion itself, it seems that no ideas can in fact have their origin in a source external to the mind. The reason that we have an idea of heat in the presence of fire, for instance, is not, then, because the idea is somehow transmitted by the fire. Rather, Descartes thinks that God designed us in such a way that we form the idea of heat on the occasion of certain corporeal motions in our sense organs (and we form other sensory ideas on the occasion of other corporeal motions). Thus, there is a sense in which, for Descartes, all ideas are innate, and his tripartite division between kinds of ideas becomes difficult to maintain.

Per his so-called doctrine of parallelism, Spinoza conceives the mind and the body as one and the same thing, conceived under different attributes (to wit, thought and extension). (See Benedict de Spinoza: Metaphysics.) As a result, Spinoza denies that there is any causal interaction between mind and body, and so he denies that any ideas are caused by bodily change. Just as bodies can be affected only by other bodies, so ideas can be affected only by other ideas. This does not mean, however, that all ideas are innate for Spinoza, as they very clearly are for Leibniz (see below). Just as the body can be conceived to be affected by external objects conceived under the attribute of extension (that is, as bodies), so the mind can (as it were, in parallel) be conceived to be affected by the same objects conceived under the attribute of thought (that is, as ideas). Ideas gained in this way, from encounters with external objects (conceived as ideas), constitute knowledge of the first kind, or imagination, for Spinoza, and all such ideas are inadequate, or in other words, confused and lacking order for the intellect. Adequate ideas, on the other hand, which can be formed via Spinoza's second and third kinds of knowledge (reason and intuitive knowledge, respectively), and which are clear and distinct and have order for the intellect, are not gained through chance encounters with external objects; rather, adequate ideas can be explained in terms of resources intrinsic to the mind. (For more on Spinoza's three kinds of knowledge and the distinction between adequate and inadequate ideas, see Benedict de Spinoza: Epistemology.)

The mind, for Spinoza, just by virtue of having ideas, which is its essence, has ideas of what Spinoza calls common notions, or in other words, those things which are equally in the part and in the whole. Examples of common notions include motion and rest, extension, and indeed God. Take extension, for example. To think of any body, however small or however large, is to have a perfectly complete idea of extension. So, insofar as the mind has any idea of body (and, for Spinoza, the human mind is the idea of the human body, and so always has ideas of body), it has a perfectly adequate idea of extension. The same can be said for motion and rest. The same can also be said for God, except that God is equally in the part and in the whole not of extension only, but of all things. Spinoza treats these common notions as principles of reasoning. Anything that can be deduced on their basis is also adequate.

It is not clear whether Spinoza's common notions should be considered innate ideas. Spinoza speaks of active and passive ideas, and of adequate and inadequate ideas. He associates the former with the intellect and the latter with the imagination, but "innate idea" is not an explicit category in Spinoza's theory of ideas as it is in Descartes' and Leibniz's. Common notions are not in the mind independently of the mind's relation to its object (the body); nevertheless, since it is the mind's nature to be the idea of the body, it is part of the mind's nature to have common notions. Commentators differ over the question of whether Spinoza had a positive doctrine of innate ideas. It is clear, however, that he denied that all ideas come about through encounters with external objects; moreover, he believed that those ideas which do come about through encounters with external objects are of inferior epistemic value to those produced through the mind's own intrinsic resources. This is enough to put him in the rationalist camp on the question of the origin of ideas.

Of the three great rationalists, Leibniz propounded the most thoroughgoing doctrine of innate ideas. For Leibniz, all ideas are, strictly speaking, innate. In a general and relatively straightforward sense, this viewpoint is a direct consequence of Leibniz's conception of individual substance. According to Leibniz, each substance is a world apart, independent of everything outside of itself except for God. Thus "all our phenomena, that is to say, all the things that can ever happen to us, are only the results of our own being" (L, 312); or, in Leibniz's famous phrase from the Monadology, monads "have no windows," meaning there is no way for sensory data to enter monads from the outside. In this more general sense, then, to explain Leibniz's doctrine of innate ideas would be to explain his conception of individual substance and the arguments and considerations which motivate it. (See Section 4, b, iii, below for a discussion of Leibniz's conception of substance; see also Gottfried Leibniz: Metaphysics.) This would be to circumvent the issues and questions which are typically at the heart of the debate over the existence of innate ideas, which concern the extent to which certain kinds of perceptions, ideas, and propositions can be accounted for on the basis of experience. Although Leibniz's more general reasons for embracing innate ideas stem from his unique brand of substance metaphysics, Leibniz does enter into the debate over innate ideas, as it were, addressing the more specific questions regarding the source of given kinds of ideas, most notably in his dialogic engagement with Locke's philosophy, the New Essays on Human Understanding.

Due to Leibniz's conception of individual substance, nothing actually comes from a sensory experience, where a sensory experience is understood to involve direct concourse with things outside of the mind. However, Leibniz does have a means for distinguishing between sensations and purely intellectual thoughts within the framework of his substance metaphysics. For Leibniz, although each monad or individual substance expresses (or represents) the entire universe from its own unique point of view, it does so with a greater or lesser degree of clarity and distinctness. Bare monads, such as comprise minerals and vegetation, express the rest of the world only in the most confused fashion. Rational minds, by contrast, have a much greater proportion of clear and distinct perceptions, and so express more of the world clearly and distinctly than do bare monads. When an individual substance attains a more perfect expression of the world (in the sense that it attains a less confused expression of the world), it is said to act; when its expression becomes more confused, it is said to be acted upon. Using this distinction, Leibniz is able to reconcile the terms of his philosophy with everyday conceptions. Although, strictly speaking, no monad is acted upon by any other, nor acts upon any other directly, it is possible to speak this way, just as, Leibniz says, Copernicans can still speak of the motion of the sun for everyday purposes, while understanding that the sun does not in fact move. It is in this sense that Leibniz enters into the debate concerning the origin of our ideas.

Leibniz distinguishes between ideas (idées) and thoughts (pensées) (or, sometimes, notions (notions) or concepts (conceptus)). Ideas exist in the soul whether we actually perceive them or are aware of them or not. It is these ideas that Leibniz contends are innate. "Thoughts," by contrast, is Leibniz's designation for ideas which we actually form or conceive at any given time. In this sense, thoughts can be formed on the basis of a sensory experience (with the above caveats regarding the meaning "a sensory experience" can have in Leibniz's thought) or on the basis of an internal experience, or reflection. Leibniz alternatively characterizes our ideas as aptitudes, preformations, and dispositions to represent something when the occasion for thinking of it arises. On multiple occasions, Leibniz uses the metaphor of the veins present in marble to illustrate his understanding of innate ideas. Just as the veins dispose the sculptor to shape the marble in certain ways, so do our ideas dispose us to have certain thoughts on the occasion of certain experiences.

Leibniz rejects the view that the mind cannot have ideas without being aware that it has them. (See Gottfried Leibniz: Philosophy of Mind.) Much of the disagreement between Locke and Leibniz on the question of innate ideas turns on this point, since Locke (at least as Leibniz represents him in the New Essays) is not able to make any sense of the notion that the mind can have ideas without being aware of them. Much of Leibniz's defense of his doctrine of innate ideas takes the form of replying to Locke's charge that it is absurd to hold that the mind could think (that is, have ideas) without being aware of it.

Leibniz marshals several considerations in support of his view that the mind is not always aware of its ideas. The fact that we can store many more ideas in our understanding than we can be aware of at any given time is one. Leibniz also points to the phenomenology of attention; we do not attend to everything in our perceptual field at any given time; rather we focus on certain things at the expense of others. To convey a sense of what it might be like for the mind to have perceptions and ideas in a dreamless sleep, Leibniz asks the reader to imagine subtracting our attention from perceptual experience; since we can distinguish between what is attended to and what is not, subtracting attention does not eliminate perception altogether.

While such considerations suggest the possibility of innate ideas, they do not in and of themselves prove that innate ideas are necessary to explain the full scope of human cognition. The empiricist tends to think that if innate ideas are not necessary to explain cognition, then they should be abandoned as gratuitous metaphysical constructs. Leibniz does have arguments designed to show that innate ideas are needed for a full account of human cognition.

In the first place, Leibniz recalls favorably the famous scenario from Plato's Meno in which Socrates teaches a slave boy to grasp abstract mathematical truths merely by asking questions. The anecdote is supposed to indicate that mathematical truths can be generated by the mind alone, in the absence of particular sensory experiences, if only the mind is prompted to discover what it contains within itself. Concerning mathematics and geometry, Leibniz remarks: "one could construct these sciences in one's study and even with one's eyes closed, without learning from sight or even from touch any of the needed truths" (NE, 77). So, on these grounds, Leibniz contends that without innate ideas, we could not explain the sorts of cognitive capacities exhibited in the mathematical sciences.

A second argument concerns our capacity to grasp certain necessary or eternal truths. Leibniz says that necessary truths can be suggested, justified, and confirmed by experience, but that they "can be proved only by the understanding alone" (NE, 80). Leibniz does not explain this point further, but he seems to have in mind the point later made by both Hume and Kant (to different ends): that experience on its own can never account for the kind of certainty that we find in mathematical and metaphysical truths. For Leibniz, if it can be granted that we can be certain of propositions in mathematics and metaphysics (and Leibniz thinks this must be granted), recourse must be had to principles innate to the mind in order to explain our ability to be certain of such things.

It is worth noting briefly the position of Nicolas Malebranche on innate ideas, since Malebranche is often counted among the rationalists, yet he denied the doctrine of innate ideas. Malebranche's reasons for rejecting innate ideas were anything but empiricist in nature, however. His leading objection stems from the infinity of ideas that the mind is able to form independently of the senses; as an example, Malebranche cites the infinite number of triangles of which the mind could in principle, albeit not in practice, form ideas. Unlike Descartes and Leibniz, who view innate ideas as tendencies or dispositions to form certain thoughts under certain circumstances, Malebranche understands them as fully formed entities that would have to exist somehow in the mind were they to exist there innately. Given this conception, Malebranche finds it unlikely that God would have "created so many things along with the mind of man" (The Search After Truth, p. 227). Since God already contains the ideas of all things within Himself, Malebranche thinks it would be much more economical if God were simply to reveal to us the ideas of things that already exist in Him rather than placing an infinity of ideas in each human mind. Malebranche's tenet that we see all things in God thus follows from the principle that God always acts in the simplest ways. Malebranche finds further support for this doctrine in the fact that it places human minds in a position of complete dependence on God. Thus, if Malebranche's rejection of innate ideas distinguishes him from the other rationalists, it does so not from an empiricist standpoint, but because of the extent to which his position on ideas is theologically motivated.

In one sense, what it means to be a rationalist is to model philosophy on mathematics, and, in particular, on geometry. This means that the rationalist begins with definitions and intuitively self-evident axioms and proceeds thence to deduce a philosophical system of knowledge that is both certain and complete. This, at least, is the goal and (with some qualifications to be explored below) the claim. In no work of rationalist philosophy is this procedure more apparent than in Spinoza's Ethics, laid out famously in the geometrical manner (more geometrico). Nevertheless, Descartes' main works (and those of Leibniz as well), although not as overtly more geometrico as Spinoza's Ethics, are also modelled on geometry, and it is Descartes' celebrated methodological program that first introduces mathematics as a model for philosophy.

Perhaps Descartes' clearest and best-known statement of mathematics' role as paradigm appears in the Discourse on the Method:

Those long chains of very simple and easy reasonings, which geometers customarily use to arrive at their most difficult demonstrations, had given me occasion to suppose that all the things which can fall under human knowledge are interconnected in the same way. (CSM I, 120)

However, Descartes' promotion of mathematics as a model for philosophy dates back to his early, unfinished work, the Rules for the Direction of the Mind. It is in this work that Descartes first outlines the standards for certainty that have since come to be so closely associated with him and with the rationalist enterprise more generally.

In Rule 2, Descartes declares that henceforth only what is certain should be valued and counted as knowledge. This means the rejection of all merely probable reasoning, which Descartes associates with the philosophy of the Schools. Descartes admits that according to this criterion, only arithmetic and geometry thus far count as knowledge. But Descartes does not conclude that only in these disciplines is it possible to attain knowledge. According to Descartes, the reason that certainty has eluded philosophers has as much to do with the disdain that philosophers have for the simplest truths as it does with the subject matter. Admittedly, the objects of arithmetic and geometry are especially pure and simple, or, as Descartes will later say, clear and distinct. Nevertheless, certainty can be attained in philosophy as well, provided the right method is followed.

Descartes distinguishes between two ways of achieving knowledge: "through experience and through deduction […] [W]e must note that while our experiences of things are often deceptive, the deduction or pure inference of one thing from another can never be performed wrongly by an intellect which is in the least degree rational […]" (CSM I, 12). This is a clear statement of Descartes' methodological rationalism. Building up knowledge through accumulated experience can only ever lead to the sort of probable knowledge that Descartes finds lacking. Pure inference, by contrast, can never go astray, at least when it is conducted by right reason. Of course, the truth value of a deductive chain is only as good as the first truths, or axioms, whose truth the deductions preserve. It is for this reason that Descartes' method relies on intuition as well as deduction. Intuition provides the first principles of a deductive system, for Descartes. Intuition differs from deduction insofar as it is not discursive; intuition grasps its object in an immediate way. In its broadest outlines, Descartes' method is just the use of intuition and deduction in the orderly attainment and preservation of certainty.

In subsequent Rules, Descartes goes on to elaborate a more specific methodological program, which involves reducing complicated matters step by step to simpler, intuitively graspable truths, and then using those simple truths as principles from which to deduce knowledge of more complicated matters. It is generally accepted by scholars that this more specific methodological program reappears in a more iconic form in the Discourse on the Method as the four rules for gaining knowledge outlined in Part 2. There is some doubt as to the extent to which this more specific methodological program actually plays any role in Descartes' mature philosophy as it is expressed in the Meditations and Principles (see Garber 2001, chapter 2). There can be no doubt, however, that the broader methodological guidelines outlined above were a permanent feature of Descartes' thought.

In response to a request to cast his Meditations in the geometrical style (that is, in the style of Euclid's Elements), Descartes distinguishes between two aspects of the geometrical style, order and method, explaining:

The order consists simply in this. The items which are put forward first must be known entirely without the aid of what comes later; and the remaining items must be arranged in such a way that their demonstration depends solely on what has gone before. I did try to follow this order very carefully in my Meditations […] (CSM II, 110)

Elsewhere, Descartes contrasts this order, which he calls the "order of reasons," with another order, which he associates with scholasticism, and which he calls the "order of subject-matter" (see CSMK III, 163). What Descartes understands as geometrical order, or the order of reasons, is just the procedure of starting with what is most simple and proceeding in a step-wise, deliberate fashion to deduce consequences from there. Descartes' order is governed by what can be clearly and distinctly intuited, and by what can be clearly and distinctly inferred from such self-evident intuitions (rather than by a concern for organizing the discussion into neat topical categories per the order of subject-matter).

As for method, Descartes distinguishes between analysis and synthesis. For Descartes, analysis and synthesis represent different methods of demonstrating a conclusion or set of conclusions. Analysis exhibits the path by which the conclusion comes to be grasped. As such, it can be thought of as the order of discovery or order of knowledge. Synthesis, by contrast, wherein conclusions are deduced from a series of definitions, postulates, and axioms, as in Euclid's Elements, for instance, follows not the order in which things are discovered, but rather the order that things bear to one another in reality. As such, it can be thought of as the order of being. God, for example, is prior to the human mind in the order of being (since God created the human mind), and so in the synthetic mode of demonstration the existence of God is demonstrated before the existence of the human mind. However, knowledge of one's own mind precedes knowledge of God, at least in Descartes' philosophy, and so in the analytic mode of demonstration the cogito is demonstrated before the existence of God. Descartes' preference is for analysis, because he thinks that it is superior in helping the reader to discover the things for herself, and so in bringing about the intellectual conversion which it is the Meditations' goal to effectuate in the minds of its readers. According to Descartes, while synthesis, in laying out demonstrations systematically, is useful in preempting dissent, it is inferior in engaging the mind of the reader.

Two primary distinctions can be made in summarizing Descartes' methodology: (1) the distinction between the order of reasons and the order of subject-matter; and (2) the analysis/synthesis distinction. With respect to the first distinction, the great Continental rationalists are united. All adhere to the order of reasons, as we have described it above, rather than the order of subject-matter. Even though the rationalists disagree about how exactly to interpret the content of the order of reasons, their common commitment to following an order of reasons is a hallmark of their rationalism. Although there are points of convergence with respect to the second, analysis/synthesis distinction, there are also clear points of divergence, and this distinction can be useful in highlighting the range of approaches the rationalists adopt to mathematical methodology.

Of the great Continental rationalists, Spinoza is the most closely associated with mathematical method, due to the striking presentation of his magnum opus, the Ethics (as well as his presentation of Descartes' Principles), in geometrical fashion. The fact that Spinoza is the only major rationalist to present his main work more geometrico might create the impression that he is the only philosopher to employ mathematical method in constructing and elaborating his philosophical system. This impression is mistaken, since both Descartes and Leibniz also apply mathematical method to philosophy. Nevertheless, there are differences between Spinoza's employment of mathematical method and that of Descartes (and Leibniz). The most striking, of course, is the form of Spinoza's Ethics. Each part begins with a series of definitions, axioms, and postulates and proceeds thence to deduce propositions, the demonstrations of which refer back to the definitions, axioms, postulates, and previously demonstrated propositions on which they depend. Of course, this is just the method of presenting findings that Descartes in the Second Replies dubbed synthesis. For Descartes, analysis and synthesis differ only in pedagogical respects: whereas analysis is better for helping the reader discover the truth for herself, synthesis is better in compelling agreement.

There is some evidence that Spinoza's motivations for employing synthesis were in part pedagogical. In Lodewijk Meyer's preface to Spinoza's Principles of Cartesian Philosophy, Meyer uses Descartes' Second Replies distinction between analysis and synthesis to explain the motivation for the work. Meyer criticizes Descartes' followers for being too uncritical in their enthusiasm for Descartes' thought, and attributes this in part to the relative opacity of Descartes' analytic mode of presentation. Thus, for Meyer, the motivation for presenting Descartes' Principles in the synthetic manner is to make the proofs more transparent, and thereby leave less excuse for blind acceptance of Descartes' conclusions. It is not clear to what extent Meyer's explanation of the mode of presentation of Spinoza's Principles of Cartesian Philosophy applies to Spinoza's Ethics. In the first place, although Spinoza approved the preface, he did not author it himself. Secondly, while such an explanation seems especially suited to a work in which Spinoza's chief goal was to present another philosopher's thought in a different form, there is no reason to assume that it applies to the presentation of Spinoza's own philosophy. Scholars have differed on how to interpret the geometrical form of Spinoza's Ethics. However, it is generally accepted that Spinoza's use of synthesis does not merely represent a pedagogical preference. There is reason to think that Spinoza's methodology differs from that of Descartes in a somewhat deeper way.

There is another version of the analysis/synthesis distinction besides Descartes' that was also influential in the 17th century: Hobbes' version of the distinction. Although there is little direct evidence that Spinoza was influenced by Hobbes' version of the distinction, some scholars have claimed a connection, and, in any case, it is useful to view Spinoza's methodology in light of the Hobbesian alternative.

Synthesis and analysis are not modes of demonstrating findings that have already been made, for Hobbes, as they are for Descartes, but rather complementary means of generating findings; in particular, they are forms of causal reasoning. For Hobbes, analysis is reasoning from effects to causes; synthesis is reasoning in the other direction, from causes to effects. For example, by analysis, we infer that geometrical objects are constructed via the motions of points and lines and surfaces. Once motion has been established as the principle of geometry, it is then possible, via synthesis, to construct the possible effects of motion, and thereby to make new discoveries in geometry. According to the Hobbesian schema, then, synthesis is not merely a mode of presenting truths, but a means of generating and discovering truths. (For Hobbes' method, see The English Works of Thomas Hobbes of Malmesbury, vol. 1, ch. 6.) There is reason to think that synthesis had this kind of significance for Spinoza as well, as a means of discovery and not merely of presentation. Spinoza's methodology, and, in particular, his theory of definitions, bear this out.

Spinoza's method begins with reflection on the nature of a given true idea. The given true idea serves as a standard by which the mind learns the distinction between true and false ideas, and also between the intellect and the imagination, and how to direct itself properly in the discovery of true ideas. The correct formulation of definitions emerges as the most important factor in directing the mind properly in the discovery of true ideas. To illustrate his conception of a good definition, Spinoza contrasts two definitions of a circle. On one definition, a circle is a figure in which all the lines from the center to the circumference are equal. On another, a circle is the figure described by the rotation of a line around one of its ends, which is fixed. For Spinoza, the second definition is superior: whereas the first gives only a property of the circle, the second provides the cause from which all of the properties can be deduced. Hence, what makes a definition a good definition, for Spinoza, is its capacity to serve as a basis for the discovery of truths about the thing. The circle, of course, is just an example. For Spinoza, the method is perfected when it arrives at a true idea of the first cause of all things, that is, God. Only the method is perfected with a true idea of God, however, not the philosophy. The philosophy itself begins with a true idea of God, since the philosophy consists in deducing the consequences from a true idea of God. With this in mind, the definition of God is of paramount importance. In correspondence, Spinoza compares contrasting definitions of God, explaining that he chose the one which expresses the efficient cause from which all of the properties of God can be deduced.
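Spinoza's claim that the genetic (causal) definition lets the property stated in the first definition be deduced can be illustrated in modern notation (a sketch in our terms, not Spinoza's own):

```latex
% A line segment of length r, fixed at the origin at one end and
% rotated through an angle \theta, traces the point
P(\theta) = (r\cos\theta,\; r\sin\theta).
% Its distance from the fixed end is the same for every \theta:
\lVert P(\theta)\rVert = \sqrt{r^2\cos^2\theta + r^2\sin^2\theta} = r.
% Hence the property "all the lines from the center to the
% circumference are equal" follows as a theorem from the genetic
% definition, rather than having to be assumed.
```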

In this light, it becomes clear that the geometrical presentation of Spinoza's philosophy is not merely a pedagogic preference. The definitions that appear at the outset of the five parts of the Ethics do not serve merely to make explicit what might otherwise have remained only implicit in Descartes' analytic mode of presentation. Rather, key definitions, such as the definition of God, are principles that underwrite the development of the system. As a result, Hobbes' conception of the analysis/synthesis distinction throws an important light on Spinoza's procedure. There is a movement of analysis in arriving at the causal definition of God from the preliminary given true idea. Then there is a movement of synthesis in deducing consequences from that causal definition. Of course, Descartes' analysis/synthesis distinction still applies, since, after all, Spinoza's system is presented in the synthetic manner in the Ethics. But the geometrical style of presentation is not merely a pedagogical device in Spinoza's case. It is also a clue to the nature of his system.

Leibniz is openly critical of Descartes' distinction between analysis and synthesis, writing, "Those who think that the analytic presentation consists in revealing the origin of a discovery, the synthetic in keeping it concealed, are in error" (L, 233). This comment is aimed at Descartes' formulation of the distinction in the Second Replies. Leibniz is explicit about his adherence to the viewpoint that seems to be implied by Spinoza's methodology: synthesis is itself a means of discovering truth no less than analysis, not merely a mode of presentation. Leibniz's understanding of analysis and synthesis is closer to the Hobbesian conception, which views analysis and synthesis as different directions of causal reasoning: from effects to causes (analysis) and from causes to effects (synthesis). Leibniz formulates the distinction in his own terms as follows:

Synthesis is achieved when we begin from principles and run through truths in good order, thus discovering certain progressions and setting up tables, or sometimes general formulas, in which the answers to emerging questions can later be discovered. Analysis goes back to the principles in order to solve the given problems only […] (L, 232)

Leibniz thus conceives synthesis and analysis in relation to principles.

Leibniz lays great stress on the importance of establishing the possibility of ideas, that is to say, establishing that ideas do not involve contradiction, and this applies a fortiori to first principles. For Leibniz, the Cartesian criterion of clear and distinct perception does not suffice for establishing the possibility of an idea. Leibniz is critical, in particular, of Descartes' ontological argument on the grounds that Descartes neglects to demonstrate the possibility of the idea of a most perfect being on which the argument depends. It is possible to mistakenly assume that an idea is possible when in reality it is contradictory. Leibniz gives the example of a wheel turning at the fastest possible rate. It might at first seem that this idea is legitimate, but if a spoke of the wheel were extended beyond the rim, the end of the spoke would move faster than a nail in the rim itself, revealing a contradiction in the original notion.
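Leibniz's wheel example can be made precise with elementary kinematics (the notation is ours, not Leibniz's):

```latex
% A wheel rotating with angular speed \omega carries a point on its
% rim, at radius R, with linear speed
v_{\text{rim}} = \omega R.
% Extend a spoke a distance d beyond the rim; its endpoint moves at
v_{\text{spoke}} = \omega (R + d) > \omega R = v_{\text{rim}}.
% Whatever speed the rim attains, a longer spoke exceeds it, so the
% idea of a "fastest possible" motion harbors a hidden contradiction.
```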

For Leibniz, there are two ways of establishing the possibility of an idea: by experience (a posteriori) and by reducing concepts via analysis down to a relation of identity (a priori). Leibniz credits mathematicians and geometers with pushing furthest the practice of demonstrating what would otherwise normally be taken for granted. For example, in the Meditations on Knowledge, Truth, and Ideas, Leibniz writes, "That brilliant genius Pascal agrees entirely with these principles when he says, in his famous dissertation on the geometrical spirit […] that it is the task of the geometer to define all terms though ever so little obscure and to prove all truths though little doubtful" (L, 294). Leibniz credits his own doctrine of the possibility of ideas with clarifying exactly what it means for something to be beyond doubt and obscurity.

Leibniz describes the result of the reduction of concepts to identity variously: as when the thing is resolved into "simple primitive notions understood in themselves" (L, 231); when every ingredient that enters into a distinct concept is itself known distinctly; when analysis is "carried through to the end" (L, 292). Since, for Leibniz, all true ideas can be reduced to simple identities, it is in principle possible to derive all truths via a movement of synthesis from such simple identities, in the way that mathematicians produce systems of knowledge on the basis of their basic definitions and axioms. This kind of a priori knowledge of the world is restricted to God, however. According to Leibniz, it is only possible for our finite minds to have this kind of knowledge (which Leibniz calls "intuitive" or "adequate") in the case of things which do not depend on experience, or what Leibniz also calls truths of reason, which include abstract logical and metaphysical truths and mathematical propositions. In the case of truths of fact, by contrast, with the exception of immediately graspable facts of experience, such as "I think" and "Various things are thought by me," we are restricted to formulating hypotheses to explain the phenomena of sensory experience, and such knowledge of the world can, for us, only ever achieve the status of hypothesis, though our hypothetical knowledge can be continually improved and refined. (See Section 5, c, below for a discussion of hypotheses in Leibniz.)

Leibniz is in line with his rationalist predecessors in emphasizing the importance of proper order in philosophizing. Leibniz's emphasis on establishing the possibility of ideas prior to using them in demonstrating propositions can be understood as a refinement of the geometrical order that Descartes established over against the order of subject-matter. Leibniz emphasizes order in another connection vis-à-vis Locke. As Leibniz makes clear in his New Essays, one of the clearest points of disagreement between him and Locke is on the question of innate ideas. In preliminary comments that Leibniz drew up upon first reading Locke's Essay, and which he sent to Locke via Burnett, Leibniz makes the following point regarding philosophical order:

Concerning the question whether there are ideas and truths born with us, I do not find it absolutely necessary for the beginnings, nor for the practice of the art of thinking, to answer it; whether they all come to us from outside, or they come from within us, we will reason correctly provided that we keep in mind what I said above, and that we proceed with order and without prejudice. The question of the origin of our ideas and of our maxims is not preliminary in philosophy, and it is necessary to have made great progress in order to resolve it. (Philosophische Schriften, vol. 5, pp. 15-16)

Leibniz's allusion to what he "said above" refers to remarks regarding the establishment of the possibility of ideas via experience and the principle of identity. This passage makes it clear that, from Leibniz's point of view, the order in which Locke philosophizes is quite misguided, since Locke begins with a question that should only be addressed after great progress has already been made, particularly with respect to the criteria for distinguishing between true and false ideas and for establishing legitimate philosophical principles. Empiricists generally put much less emphasis on the order of philosophizing, since they do not aim to reason from first principles grasped a priori.

A fundamental tenet of rationalism (perhaps the fundamental tenet) is that the world is intelligible. The intelligibility tenet means that everything that happens in the world happens in an orderly, lawful, rational manner, and that the mind, in principle if not always in practice, is able to reproduce the interconnections of things in thought, provided that it adheres to certain rules of right reasoning. The intelligibility of the world is sometimes couched in terms of a denial of brute facts, where a brute fact is something that just is the case, that is, something that obtains without any reason or explanation (even in principle). Many of the a priori principles associated with rationalism can be understood either as versions or as implications of the principle of intelligibility. As such, the principle of intelligibility functions as a basic principle of rationalism. It appears under various guises in the great rationalist systems and is used to generate contrasting philosophical systems. Indeed, one of the chief criticisms of rationalism is that its principles can consistently be used to generate contradictory conclusions and systems of thought. The clearest and best-known statement of the intelligibility of the world is Leibniz's principle of sufficient reason. Some scholars have recently emphasized this principle as the key to understanding rationalism (see Della Rocca 2008, chapter 1).

The intelligibility principle raises some classic philosophical problems. Chief among these is a problem of question-begging or circularity: the task of proving that the world is intelligible seems to have to rely on some of the very principles of reasoning in question. In the 17th century, discussion of this fundamental problem centered on the so-called Cartesian circle, and the problem is still debated by scholars of 17th-century thought today. The viability of the rationalist enterprise seems to depend, at least in part, on a satisfactory answer to this problem.

The most important rational principle in Descartes' philosophy, the principle which does a great deal of the work in generating its details, is the principle according to which whatever is clearly and distinctly perceived to be true is true. This principle means that if we can form any clear and distinct ideas, then we will be able to trust that they accurately represent their objects and give us certain knowledge of reality. Descartes' clear and distinct ideas doctrine is central to his conception of the world's intelligibility, and indeed, it is central to the rationalists' conception of the world's intelligibility more broadly. Although Spinoza and Leibniz both work to refine the understanding of what it is to have clear and distinct ideas, they both subscribe to the view that the mind, when directed properly, is able to accurately represent certain basic features of reality, such as the nature of substance.

For Descartes, it cannot be taken for granted from the outset that what we clearly and distinctly perceive to be true is in fact true. It is possible to entertain the doubt that an all-powerful deceiving being fashioned the mind so that it is deceived even in those things it perceives clearly and distinctly. Nevertheless, it is only possible to entertain this doubt when we are not having clear and distinct perceptions. When we are perceiving things clearly and distinctly, their truth is undeniable. Moreover, we can use our capacity for clear and distinct perceptions to demonstrate that the mind was not fashioned by an all-powerful deceiving being, but rather by an all-powerful benevolent being who would not fashion us so as to be deceived even when using our minds properly. Having proved the existence of an all-powerful benevolent being qua creator of our minds, we can no longer entertain any doubts regarding our clear and distinct ideas even when we are not presently engaged in clear and distinct perceptions.

Descartes' legitimation of clear and distinct perception via his proof of a benevolent God raises notorious interpretive challenges. Scholars disagree about how to resolve the problem of the Cartesian circle. However, there is general consensus that Descartes' procedure is not, in fact, guilty of vicious, logical circularity. In order for Descartes' procedure to avoid circularity, it is generally agreed that in some sense clear and distinct ideas need already to be legitimate before the proof of God's existence; it is only in another sense that God's existence legitimates their truth. Scholars disagree on how exactly to understand those different senses, but they generally agree that there is some sense at least in which clear and distinct ideas are self-legitimating or otherwise not in need of legitimation.

That some ideas provide a basic standard of truth is a fundamental tenet of rationalism, and undergirds all the other rationalist principles at work in the construction of rationalist systems of philosophy. For the rationalists, if it cannot be taken for granted in at least some sense from the outset that the mind is capable of discerning the difference between truth and falsehood, then one never gets beyond skepticism.

The Continental rationalists deploy the principle of intelligibility and subordinate rational principles derived from it in generating much of the content of their respective philosophical systems. In no aspect of their systems is the application of rational principles to the generation of philosophical content more evident and more clearly illustrative of contrasting interpretations of these principles than in that for which the Continental rationalists are arguably best known: substance metaphysics.

Descartes deploys his clear and distinct ideas doctrine in justifying his most well-known metaphysical position: substance dualism. The first step in Descartes' demonstration of mind-body dualism, or, in his terminology, of a real distinction (that is, a distinction between two substances) between mind and body, is to show that while it is possible to doubt that one has a body, it is not possible to doubt that one is thinking. As Descartes makes clear in the Principles of Philosophy, one of the chief upshots of his famous cogito argument is the discovery of the distinction between a thinking thing and a corporeal thing. The impossibility of doubting one's existence is not the impossibility of doubting that one is a human being with a body with arms and legs and a head. It is the impossibility of doubting, rather, that one doubts, perceives, dreams, imagines, understands, wills, denies, and the other modalities that Descartes attributes to the thinking thing. It is possible to think of oneself as a thing that thinks, and to recognize that it is impossible to doubt that one thinks, while continuing to doubt that one has a body with arms and legs and a head. So, the cogito drives a preliminary wedge between mind and body.

At this stage of the argument, however, Descartes has simply established that it is possible to conceive of himself as a thinking thing without conceiving of himself as a corporeal thing. It remains possible that, in fact, the thinking thing is identical with a corporeal thing, in other words, that thought is somehow something a body can do; Descartes has yet to establish that the epistemological distinction between his knowledge of his mind and his knowledge of body that results from the hyperbolic doubt translates to a metaphysical or ontological distinction between mind and body. The move from the epistemological distinction to the ontological distinction proceeds via the doctrine of clear and distinct ideas. Having established that whatever he clearly and distinctly perceives is true, Descartes is in a position to affirm the real distinction between mind and body.

In this life, it is never possible to clearly and distinctly perceive a mind actually separate from a body, at least in the case of finite, created minds, because minds and bodies are intimately unified in the composite human being. So Descartes cannot base his proof for the real distinction of mind and body on the clear and distinct perception that mind and body are in fact independently existing things. Rather, Descartes' argument is based on the joint claims that (1) it is possible to have a clear and distinct idea of thought apart from extension and vice versa; and (2) whatever we can clearly and distinctly understand is capable of being created by God exactly as we clearly and distinctly understand it. Thus, the fact that we can clearly and distinctly understand thought apart from extension and vice versa entails that thinking things and extended things are really distinct (in the sense that they are distinct substances separable by God).

The foregoing argument relies on certain background assumptions which it is now necessary to explain, in particular, Descartes' conception of substance. In the Principles, Descartes defines substance as "a thing which exists in such a way as to depend on no other thing for its existence" (CSM I, 210). Properly speaking, only God can be understood to depend on no other thing, and so only God is a substance in the absolute sense. Nevertheless, Descartes allows that, in a relative sense, created things can count as substances too. A created thing is a substance if the only thing it relies upon for its existence is "the ordinary concurrence of God" (ibid.). Only mind and body qualify as substances in this secondary sense; everything else is a modification or property of minds and bodies. A second point is that, for Descartes, we do not have direct knowledge of substance; rather, we come to know substance by virtue of its attributes. Thought and extension are the attributes or properties in virtue of which we come to know thinking and corporeal substance, or mind and body. This point relies on the application of a key rational principle, to wit, nothingness has no properties. For Descartes, there cannot simply be the properties of thinking and extension without these properties having something in which to inhere. Thinking and extension are not just any properties; Descartes calls them principal attributes because they constitute the nature of their respective substances. Other, non-essential properties cannot be understood without the principal attribute, but the principal attribute can be understood without any of the non-essential properties. For example, motion cannot be understood without extension, but extension can be understood without motion.

Descartes' conception of mind and body as distinct substances includes some interesting corollaries which result from a characteristic application of rational principles and account for some characteristic doctrinal differences between Descartes and empiricist philosophers. One consequence of Descartes' conception of the mind as a substance whose principal attribute is thought is that the mind must always be thinking. Since, for Descartes, thinking is something of which the thinker is necessarily aware, Descartes' commitment to thought as an essential, and therefore inseparable, property of the mind raises some awkward difficulties. Arnauld, for example, raises one such difficulty in his Objections to Descartes' Meditations: presumably there is much going on in the mind of an infant in its mother's womb of which the infant is not aware. In response to this objection, and also in response to another obvious problem, that of dreamless sleep, Descartes insists on a distinction between being aware of or conscious of our thoughts at the time we are thinking them, and remembering them afterwards (CSMK III, 357). The infant is, in fact, aware of its thinking in its mother's womb, but it is aware only of very confused sensory thoughts of pain and pleasure and heat (not, as Descartes points out, of metaphysical matters (CSMK III, 189)), which it does not remember afterwards. Similarly, the mind is always thinking even in the most dreamless sleep; it is just that the mind often immediately forgets much of that of which it had been aware.

Descartes' commitment to embracing the implications of his substance-attribute metaphysics, however counter-intuitive, puts him at odds with, for instance, Locke, who mocks the Cartesian doctrine of the always-thinking soul in his An Essay Concerning Human Understanding. For Locke, the question whether the soul is always thinking or not must be decided by experience and not, as Locke says, merely by hypothesis (An Essay Concerning Human Understanding, Book II, Chapter 1). The evidence of dreamless sleep makes it obvious, for Locke, that the soul is not always thinking. Because Locke ties personal identity to memory, if the soul were to think while asleep without knowing it, the sleeping man and the waking man would be two different persons.

Descartes' commitment to the always-thinking mind is a consequence of his commitment to a more basic rational principle. In establishing his conception of thinking substance, Descartes reasons from the attribute of thinking to the substance of thinking on the grounds that nothing has no properties. In this case, he reasons in the other direction, from the substance of thinking, that is, the mind, to the property of thinking, on the converse grounds that something must have properties, and the properties it must have are the properties that make it what it is; in the case of the mind, that property is thought. (Leibniz found a way to maintain the integrity of the rational principle without contradicting experience: admit that thinking need not be conscious. This way the mind can still think in a dreamless sleep, and so avoid being without any properties, without any problem about the recollection of awareness.)

Another consequence of Descartes' substance metaphysics concerns corporeal substance. For Descartes, we do not know corporeal substance directly, but rather through a grasp of its principal attribute, extension. Extension qua property requires a substance in which to inhere because of the rational principle, nothing has no properties. This rational principle leads to another characteristic Cartesian position regarding the material world: the denial of a vacuum. Descartes denies that space can be empty or void. Space has the property of being extended in length, breadth, and depth, and such properties require a substance in which to inhere. Thus nothing, that is, a void or vacuum, cannot have such properties, again because of the rational principle that nothing has no properties. This means that all space is filled with substance, even if it is imperceptible. Once again, Descartes answers a debated philosophical question on the basis of a rational principle.

If Descartes is known for his dualism, Spinoza, of course, is known for monism, the doctrine that there is only one substance. Spinoza's argument for substance monism (laid out in the first fifteen propositions of the Ethics) has no essential basis in sensory experience; it proceeds through rational argumentation and the deployment of rational principles. Although Spinoza provides one a posteriori argument for God's existence, he makes clear that he presents it only because it is easier to grasp than the a priori arguments, and not because it is in any way necessary.

The crucial step in the argument for substance monism comes in Ethics 1p5: "In Nature there cannot be two or more substances of the same nature or attribute." It is at this proposition that Descartes (and Leibniz, and many others) would part ways with Spinoza. The most striking and controversial implication of this proposition, at least from a Cartesian perspective, is that human minds cannot qualify as substances, since human minds all share the same nature or attribute, that is, thought. In Spinoza's philosophy, human minds are actually themselves properties (Spinoza calls them modes) of a more basic, infinite substance.

The argument for 1p5 works as follows. If there were two or more distinct substances, there would have to be some way to distinguish between them. There are two possible distinctions to be made: either by a difference in their affections or by a difference in their attributes. For Spinoza, a substance is something which exists in itself and can be conceived through itself; an attribute is "what the intellect perceives of a substance, as constituting its essence" (Ethics 1d4). Spinoza's conception of attributes is a matter of longstanding scholarly debate, but for present purposes, we can think of it along Cartesian lines. For Descartes, substance is always grasped through a principal property, which is the nature or essence of the substance. Spinoza agrees that an attribute is that through which the mind conceives the nature or essence of substance. With this in mind, if a distinction between two substances were to be made on the basis of a difference in attributes, then there would not be two substances of the same attribute, as the proposition indicates. This means that if there were two substances of the same attribute, it would be necessary to distinguish between them on the basis of a difference in modes or affections.

Spinoza conceives of an affection or mode as something which exists in another and needs to be conceived through another. Given this conception of affections, it is impossible, for Spinoza, to distinguish between two substances on the basis of a difference in affections. Doing so would be somewhat akin to affirming that there are two apples on the basis of a difference between two colors, when one apple can quite possibly have a red part and a green part. Just as color differences do not per se determine differences between apples, modal differences cannot determine a difference between substances; one could just be dealing with a single substance bearing multiple different affections. It is notable that in 1p5 Spinoza uses virtually the same substance-attribute schema as Descartes to deny a fundamental feature of Descartes' system.
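The logical skeleton of the 1p5 argument can be made explicit. The following Lean sketch is a hypothetical formalization under one possible regimentation of the premises; the names (`Substance`, `sameAttribute`, `byAttr`, `byMode`) are our illustrative glosses, not Spinoza's terms:

```lean
-- A sketch of Ethics 1p5, stated contrapositively: if two substances
-- share an attribute, they are identical. All premises are supplied as
-- hypotheses, so the theorem only records the argument's structure.
theorem ethics_1p5
    {Substance : Type}
    (sameAttribute : Substance → Substance → Prop)
    (byAttr byMode : Substance → Substance → Prop)
    -- Premise 1: distinct substances must be distinguished either by a
    -- difference in attributes or by a difference in affections (modes).
    (distinction : ∀ a b : Substance, a ≠ b → byAttr a b ∨ byMode a b)
    -- Premise 2: substances distinguished by attribute do not share an attribute.
    (attrDiff : ∀ a b : Substance, byAttr a b → ¬ sameAttribute a b)
    -- Premise 3: affections are conceived through substance, so modal
    -- differences cannot ground a distinction between substances.
    (noModal : ∀ a b : Substance, ¬ byMode a b)
    (a b : Substance) (h : sameAttribute a b) : a = b :=
  -- Suppose a ≠ b; both ways of distinguishing them lead to contradiction.
  Classical.byContradiction fun hne =>
    match distinction a b hne with
    | Or.inl hattr => attrDiff a b hattr h
    | Or.inr hmode => noModal a b hmode
```

Laid out this way, the point of contention becomes visible: a Cartesian would presumably reject the third premise, since for Descartes numerically distinct created minds do share the attribute of thought.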

Having established 1p5, the next major step in Spinoza's argument for substance monism is to establish the necessary existence and infinity of substance. For Spinoza, if things have nothing in common with each other, one cannot be the cause of the other. This thesis depends upon assumptions that lie at the heart of Spinoza's rationalism. Something that has nothing in common with another thing cannot be the cause of the other thing because things that have nothing in common with one another cannot be understood through one another (Ethics 1a5). But, for Spinoza, effects should be able to be understood through causes. Indeed, what it is to understand something, for Spinoza, is to understand its cause. The order of knowledge, provided that the knowledge is genuine, or, as Spinoza says, adequate, must map onto the order of being, and vice versa. Thus, Spinoza's claim that if things have nothing in common with one another, one cannot be the cause of the other, is an expression of Spinoza's fundamental, rationalist commitment to the intelligibility of the world. Given this assumption, and given the fact that no two substances have anything in common with one another, since no two substances share the same nature or attribute, it follows that if a substance is to exist, it must exist as causa sui (self-caused); in other words, it must pertain to the essence of substance to exist. Moreover, Spinoza thinks that since there is nothing that has anything in common with a given substance, there is therefore nothing to limit the nature of a given substance, and so every substance will necessarily be infinite. This assertion depends on another deep-seated assumption of Spinoza's philosophy: nothing limits itself, but everything by virtue of its very nature affirms its own nature and existence as much as possible.

At this stage, Spinoza has argued that substances of a single attribute exist necessarily and are necessarily infinite. The last major stage of the argument for substance monism is the transition from multiple substances of a single attribute to only one substance of infinite attributes. Scholars have expressed varying degrees of satisfaction with the lucidity of this transition. It seems to work as follows. It is possible to attribute many attributes to one substance. The more reality or being each thing has, the more attributes belong to it. Therefore, an absolutely infinite being is a being that consists of infinite attributes. Spinoza calls an absolutely infinite being or substance consisting of infinite attributes God. Spinoza gives four distinct arguments for God's existence in Ethics 1p11. The first is commonly interpreted as Spinoza's version of an ontological argument. It refers back to 1p7, where Spinoza proved that it pertains to the essence of substance to exist. The second argument is relevant to present purposes, since it turns on Spinoza's version of the principle of sufficient reason: "For each thing there must be assigned a cause, or reason, both for its existence and for its nonexistence" (Ethics 1p11dem). But there can be no reason for God's nonexistence, for the same reasons that all substances are necessarily infinite: there is nothing outside of God that is able to limit Him, and nothing limits itself. Once again, Spinoza's argument rests upon his assumption that things by nature affirm their own existence. The third argument is a posteriori, and the fourth pivots, like the second, on the assumption that things by nature affirm their own existence.

Having proven that a being consisting of infinite attributes exists, Spinoza's argument for substance monism is nearly complete. It remains only to point out that no substance besides God can exist, because if it did, it would have to share at least one of God's infinite attributes, which, by 1p5, is impossible. Everything that exists, then, is either an attribute or an affection of God.

Leibniz's universe consists of an infinity of monads or simple substances, and God. For Leibniz, the universe must be composed of monads or simple substances. His justification for this claim is relatively straightforward: there must be simples, because there are compounds, and compounds are just collections of simples. To be simple, for Leibniz, means to be without parts, and thus to be indivisible. For Leibniz, the simples or monads are "the true atoms of nature" (L, 643). However, material atoms are "contrary to reason" (L, 456). Manifold a priori considerations lead Leibniz to reject material atoms. In the first place, the notion of a material atom is contradictory in Leibniz's view. Matter is extended, and that which is extended is divisible into parts. The very notion of an atom, however, is the notion of something indivisible, lacking parts.

From a different perspective, Leibniz's dynamical investigations provide another argument against material atoms. Absolute rigidity is included in the notion of a material atom, since any elasticity in the atom could only be accounted for on the basis of parts within the atom shifting their position with respect to each other, which is contrary to the notion of a partless atom. According to Leibniz's analysis of impact, however, absolute rigidity is shown not to make sense. Consider the rebound of one atom as a result of its collision with another. If the atoms were absolutely rigid, the change in motion resulting from the collision would have to happen instantaneously, or, as Leibniz says, "through a leap" or "in a moment" (L, 446). The atom would change from initial motion to rest to rebounded motion without passing through any intermediary degrees of motion. Since the body must pass through all the intermediary degrees of motion in transitioning from one state of motion to another, it must not be absolutely rigid, but rather elastic; the analysis of the parts of the body must, in correlation with the degree of motion, proceed to infinity. Leibniz's dynamical argument against material atoms turns on what he calls the law of continuity, an a priori principle according to which no change occurs through a leap.

The true unities, or true atoms of nature, therefore, cannot be material; they must be spiritual or metaphysical substances akin to souls. Since Leibniz's spiritual substances, or monads, are absolutely simple, without parts, they admit neither of dissolution nor composition. Moreover, there can be no interaction between monads; monads cannot receive impressions or undergo alterations by means of being affected from the outside, since, in Leibniz's famous phrase from the Monadology, monads "have no windows" (L, 643). Monads must, however, have qualities, otherwise there would be no way to explain the changes we see in things and the diversity of nature. Indeed, following from Leibniz's principle of the identity of indiscernibles, no two monads can be exactly alike, since each monad stands in a unique relation to the rest, and, for Leibniz, each monad's relation to the rest is a distinctive feature of its nature. The way in which, for Leibniz, monads can have qualities while remaining simple (in other words, the only way there can be multitude in simplicity) is by being characterized and distinguished by means of their perceptions. Leibniz's universe, in summary, consists in monads, simple spiritual substances, characterized and distinguished from one another by a unique series of perceptions determined by each monad's unique relationship vis-à-vis the others.

Of the great rationalists, Leibniz is the most explicit about the principles of reasoning that govern his thought. Leibniz singles out two, in particular, as the most fundamental rational principles of his philosophy: the principle of contradiction and the principle of sufficient reason. According to the principle of contradiction, whatever involves a contradiction is false. According to the principle of sufficient reason, there is no fact or true proposition without there being a sufficient reason for its being so and not otherwise (L, 646). Corresponding to these two principles of reasoning are two kinds of truths: truths of reasoning and truths of fact. For Leibniz, truths of reasoning are necessary, and their opposite is impossible. Truths of fact, by contrast, are contingent, and their opposite is possible. Truths of reasoning are by most commentators associated with the principle of contradiction because they can be reduced via analysis to a relation between two primitive ideas, whose identity is intuitively evident. Thus, it is possible to grasp why it is impossible for truths of reasoning to be otherwise. However, this kind of resolution is only possible in the case of abstract propositions, such as the propositions of mathematics (see Section 3, c, above). Contingent truths, or truths of fact, by contrast, such as "Caesar crossed the Rubicon," to use the example Leibniz gives in the Discourse on Metaphysics, are infinitely complicated. Although, for Leibniz, every predicate is contained in its subject, to reduce the relationship between Caesar's notion and his action of crossing the Rubicon would require an infinite analysis impossible for finite minds. "Caesar crossed the Rubicon" is a contingent proposition, because there is another possible world in which Caesar did not cross the Rubicon. To understand the reason for Caesar's crossing, then, entails understanding why this world exists rather than any other possible world.
It is for this reason that contingent truths are associated with the principle of sufficient reason. Although the opposite of truths of fact is possible, there is nevertheless a sufficient reason why the fact is so and not otherwise, even though this reason cannot be known by finite minds.

Truths of fact, then, need to be explained; there must be a sufficient reason for them. However, according to Leibniz, a sufficient reason for existence cannot be found merely in any one individual thing or even in the whole aggregate and series of things (L, 486). That is to say, the sufficient reason for any given contingent fact cannot be found within the world of which it is a part. The sufficient reason must explain why this world exists rather than another possible world, and this reason must lie outside the world itself. For Leibniz, the ultimate reason for things must be contained in a necessary substance that creates the world, that is, God. But if the existence of God is to ground the series of contingent facts that make up the world, there must be a sufficient reason why God created this world rather than any of the other infinite possible worlds contained in his understanding. As a perfect being, God would only have chosen to bring this world into existence rather than any other because it is the best of all possible worlds. God's choice, therefore, is governed by the principle of fitness, or what Leibniz also calls the principle of the best (L, 647). The best world, according to Leibniz, is the one which maximizes perfection; and the most perfect world is the one which balances the greatest possible variety with the greatest possible order. God achieves maximal perfection in the world through what Leibniz calls the pre-established harmony. Although the world is made up of an infinity of monads with no direct interaction with one another, God harmonizes the perceptions of each monad with the perceptions of every other monad, such that each monad represents a unique perspective on the rest of the universe according to its position vis-à-vis the others.

According to Leibniz's philosophy, in the case of all true propositions, the predicate is contained in the subject. This is often known as the predicate-in-notion principle. The relationship between predicate and subject can only be reduced to an identity relation in the case of truths of reason, whereas in the case of truths of fact, the reduction requires an infinite analysis. Nevertheless, in both cases, it is possible in principle (which is to say, for an infinite intellect) to know everything that will ever happen to an individual substance, and even everything that will happen in the world of an individual substance, on the basis of an examination of the individual substance's notion, since each substance is an expression of the entire world. Leibniz's predicate-in-notion principle therefore unifies his two great principles of reasoning, the principle of contradiction and the principle of sufficient reason, since the relation between predicate and subject is either such that it is impossible for it to be otherwise or such that there is a sufficient reason why it is as it is and not otherwise. Moreover, it represents a particularly robust expression of the principle of intelligibility at the very heart of Leibniz's system. There is a reason why everything is as it is, whether that reason is subject to finite or only to infinite analysis.

(See also: 17th Century Theories of Substance.)

Rationalism is often criticized for placing too much confidence in the ability of reason alone to know the world. The extent to which one finds this criticism justified depends largely on one's view of reason. For Hume, for instance, knowledge of the world (of matters of fact) is gained exclusively through experience; reason is merely a faculty for comparing ideas gained through experience; it is thus parasitic upon experience, and has no claim whatsoever to grasp anything about the world itself, let alone any special claim. For Kant, reason is a mental faculty with an inherent tendency to transgress the bounds of possible experience in an effort to grasp the metaphysical foundations of the phenomenal realm. Since knowledge of the world is limited to objects of possible experience, for Kant, reason, with its delusions of grasping reality beyond those limits, must be subject to critique.

Sometimes rationalism is charged with neglecting or undervaluing experience, and with embarrassingly having no means of accounting for the tremendous success of the experimental sciences. While the criticism of the confidence placed in reason may be defensible given a certain conception of reason (which may or may not itself be ultimately defensible), the latter charge of neglecting experience is not; more often than not, it is the product of a false caricature of rationalism.

Descartes and Leibniz were among the leading mathematicians of their day and stood at the forefront of science. While Spinoza distinguished himself more as a political thinker and as an interpreter of scripture (albeit a notorious one) than as a mathematician, Spinoza too performed experiments, kept abreast of the leading science of the day, and was renowned as an expert craftsman of lenses. Far from neglecting experience, the great rationalists had, in general, a sophisticated understanding of the role of experience, and, indeed, of experiment, in the acquisition and development of knowledge. The fact that the rationalists held that experience and experiment cannot serve as foundations for knowledge, but must be fitted within, and interpreted in light of, a rational epistemic framework, should not be confused with a neglect of experience and experiment.

One of the stated purposes of Descartes' Meditations, and, in particular, of the hyperbolic doubts with which it commences, is to reveal to the mind of the reader the limitations of its reliance on the senses, which Descartes regards as an inadequate foundation for knowledge. By leading the mind away from the senses, which often deceive, and which yield only confused ideas, Descartes prepares the reader to discover the clear and distinct perceptions of the pure intellect, which provide a proper foundation for genuine knowledge. Nevertheless, empirical observations and experimentation clearly had an important role to play in Descartes' natural philosophy, as evidenced by his own private empirical and experimental research, especially in optics and anatomy, and by his explicit statements in several writings on the role and importance of observation and experiment.

In Part 6 of the Discourse on the Method, Descartes makes an open plea for assistance, both financial and otherwise, in making systematic empirical observations and conducting experiments. Also in Discourse Part 6, Descartes lays out his program for developing knowledge of nature. It begins with the discovery of certain "seeds of truth" implanted naturally in our souls (CSM I, 144). From them, Descartes seeks to derive the first principles and causes of everything. Descartes' Meditations illustrates these first stages of the program. By "seeds of truth" Descartes has in mind certain intuitions, including the ideas of thinking and extension, and, in particular, of God. On the basis of clearly and distinctly perceiving the distinction between what belongs properly to extension (figure, position, motion) and what does not (colors, sounds, smells, and so forth), Descartes discovers the principles of physics, including the laws of motion. From these principles, it is possible to deduce many particular ways in which the details of the world might be, only a small fraction of which represent the way the world actually is. It is as a result of the distance, as it were, between physical principles and laws of nature, on the one hand, and the particular details of the world, on the other, that, for Descartes, observations and experiments become necessary.

Descartes is ambivalent about the relationship between physical principles and particulars, and about the role that observation and experiment play in mediating this relationship. On the one hand, Descartes expresses commitment to the ideal of a science deduced with certainty from intuitively grasped first principles. Because of the great variety of mutually incompatible consequences that can be derived from physical principles, observation and experiment are required even in the ideal deductive science to discriminate between actual consequences and merely possible ones. According to the ideal of deductive science, however, observation and experiment should be used only to facilitate the deduction of effects from first causes, and not as a basis for an inference to possible explanations of natural phenomena, as Descartes makes clear at one point in his Principles of Philosophy (CSM I, 249). If the explanations were only possible, or hypothetical, the science could not lay claim to certainty per the deductive ideal, but merely to probability.

On the other hand, Descartes states explicitly at another point in the Principles of Philosophy that the explanations provided of such phenomena as the motion of celestial bodies and the nature of the earth's elements should be regarded merely as hypotheses arrived at on the basis of a posteriori reasoning (CSM I, 255); while Descartes says that such hypotheses must agree with observation and facilitate predictions, they need not in fact reflect the actual causes of phenomena. Descartes appears to concede, albeit reluctantly, that when it comes to explaining particular phenomena, hypothetical explanations and moral certainty (that is, mere probability) are all that can be hoped for.

Scholars have offered a range of explanations for the inconsistency in Descartes' writings on the question of the relation between first principles and particulars. It has been suggested that the inconsistency within the Principles of Philosophy reflects different stages of its composition (see Garber 1978). However the inconsistency might be explained, it is clear that Descartes did not take it for granted that the ideal of a deductive science of nature could be realized. Moreover, whether or not Descartes ultimately believed the ideal of deductive science was realizable, he was unambiguous on the importance of observation and experiment in bridging the distance between physical principles and particular phenomena. (For further discussion, see René Descartes: Scientific Method.)

The one work that Spinoza published under his own name in his lifetime was his geometrical reworking of Descartes' Principles of Philosophy. In his presentation of the opening sections of Part 3 of Descartes' Principles, Spinoza puts a strong emphasis on the hypothetical nature of the explanations of natural phenomena in Part 3. Given the hesitance and ambivalence with which Descartes concedes the hypothetical nature of his explanations in his Principles, Spinoza's unequivocal insistence on hypotheses is striking. Elsewhere Spinoza endorses hypotheses more directly. In the Treatise on the Emendation of the Intellect, Spinoza describes forming the concept of a sphere by affirming the rotation of a semicircle in thought. He points out that this idea is a true idea of a sphere even if no sphere has ever been produced this way in nature (The Collected Works of Spinoza, Vol. 1, p. 32). Spinoza's view of hypotheses relates to his conception of good definitions (see Section 3b above). If the cause through which one conceives something allows for the deduction of all possible effects, then the cause is an adequate one, and there is no need to fear a false hypothesis. Spinoza appears to differ from Descartes in thinking that the formation of hypotheses, if done properly, is consistent with deductive certainty, and not tantamount to mere probability or moral certainty.

Again in the Treatise on the Emendation of the Intellect, Spinoza speaks in Baconian fashion of identifying "aids" that can assist in the use of the senses and in conducting orderly experiments. Unfortunately, Spinoza's comments regarding these aids are very unclear, perhaps because they appear in a work that Spinoza never finished. Nevertheless, it does seem clear that although Spinoza, like Descartes, emphasized the importance of discovering proper principles from which to deduce knowledge of everything else, he was no less aware than Descartes of the need to proceed via observation and experiment in descending from such principles to particulars. At the same time, given his analysis of the inadequacy of sensory images, the collection of empirical data must be governed by rules and rational guidelines, the details of which Spinoza, it seems, never worked out.

A valuable perspective on Spinoza's attitude toward experimentation is provided by Letter 6, which Spinoza wrote to Oldenburg with comments on Robert Boyle's experimental research. Among other matters, at issue is Boyle's redintegration (or reconstitution) of niter (potassium nitrate). By heating niter with a burning coal, Boyle separated the niter into a fixed part and a volatile part; he then proceeded to distill the volatile part, and recombine it with the fixed part, thereby redintegrating the niter. Boyle's aim was to show that the nature of niter is not determined by a Scholastic substantial form, but rather by the composition of parts, whose secondary qualities (color, taste, smell, and so forth) are determined by primary qualities (size, position, motion, and so forth). While taking no issue with Boyle's attempt to undermine the Scholastic analysis of physical natures, Spinoza criticized Boyle's interpretation of the experiment, arguing that the fixed niter was merely an impurity left over, and that there was no difference between the niter and the volatile part other than a difference of state.

Two things stand out from Spinoza's comments on Boyle. On the one hand, Spinoza exhibits a degree of impatience with Boyle's experiments, charging some of them with superfluity on the grounds either that what they show is evident on the basis of reason alone, or that previous philosophers have already sufficiently demonstrated it experimentally. In addition, Spinoza's own interpretation of Boyle's experiment is primarily based in a rather speculative, Cartesian account of the mechanical constitution of niter (as Boyle himself points out in response to Spinoza). On the other hand, Spinoza appears eager to show his own fluency with experimental practice, describing no fewer than three different experiments of his own invention to support his interpretation of the redintegration. What Spinoza is critical of is not so much Boyle's use of experiment per se as his relative neglect of relevant rational considerations. For instance, Spinoza at one point criticizes Boyle for trying to show on experimental grounds that secondary qualities depend on primary qualities; Spinoza thought the proposition needed to be demonstrated on rational grounds. While Spinoza acknowledges the importance and necessity of observation and experiment, his emphasis and focus are on the rational framework needed for making sense of experimental findings, without which the results are confused and misleading.

In principle, Leibniz thinks it is not impossible to discover the interior constitution of bodies a priori on the basis of a knowledge of God and the principle of the best according to which He creates the world. Leibniz sometimes remarks that angels could explain to us the intelligible causes through which all things come about, but he seems conflicted over whether such understanding is actually possible for human beings. Leibniz seems to think that while the a priori pathway should be pursued in this life by the brightest minds in any case, its perfection will only be possible in the afterlife. The obstacle to an a priori conception of things is the complexity of sensible effects. In this life, then, knowledge of nature cannot be purely a priori, but depends on observation and experimentation in conjunction with reason.

Apart from perception, we have clear and distinct ideas only of magnitude, figure, motion, and other such quantifiable attributes (primary qualities). The goal of all empirical research must be to resolve phenomena (including secondary qualities) into such distinctly perceived, quantifiable notions. For example, heat is explained in terms of some particular motion of air or some other fluid. Only in this way can the epistemic ideal be achieved of understanding how phenomena follow from their causes in the same way that we know how the hammer stroke after a period of time follows from the workings of a clock (L, 173). To this end, experiments must be carried out to indicate possible relationships between secondary qualities and primary qualities, and to provide a basis for the formulation of hypotheses to explain the phenomena.

Nevertheless, there is an inherent limitation to this procedure. Leibniz explains that if there were people who had no direct experience of heat, for instance, even if someone were to explain to them the precise mechanical cause of heat, they would still not be able to know the sensation of heat, because they would still not distinctly grasp the connection between bodily motion and perception (L, 285). Leibniz seems to think that human beings will never be able to bridge the explanatory gap between sensations and mechanical causes. There will always be an irreducibly confused aspect of sensible ideas, even if they can be associated, with a high degree of sophistication, with distinctly perceivable, quantifiable notions. However, this limitation does not mean, for Leibniz, that there is any futility in human efforts to understand the world scientifically. In the first place, experimental knowledge of the composition of things is tremendously useful in practice, even if the composition is not distinctly perceived in all its parts. As Leibniz points out, the architect who uses stones to erect a cathedral need not possess a distinct knowledge of the bits of earth interposed between the stones (L, 175). Secondly, even if our understanding of the causes of sensible effects must remain forever hypothetical, the hypotheses themselves can be more or less refined, and it is proper experimentation that assists in their refinement.

When citing the works of Descartes, the three-volume English translation by Cottingham, Stoothoff, Murdoch, and Kenny was used. For the original language, the edition by Adam and Tannery was consulted.

When citing Spinoza's Ethics, the translation by Curley in A Spinoza Reader was used. The following system of abbreviation was used when citing passages from the Ethics: the first number designates the part of the Ethics (1-5); then, "p" is for proposition, "d" for definition, "a" for axiom, "dem" for demonstration, "c" for corollary, and "s" for scholium. So, 1p17s refers to the scholium of the seventeenth proposition of the first part of the Ethics. For the original language, the edition by Gebhardt was consulted.

Rationalism, Continental | Internet Encyclopedia of Philosophy

Free rationalism Essays and Papers – 123HelpMe

TitleLengthColor Rating Empiricism and Rationalism - Philosophy uses a term for empirical knowledge, posteriori, meaning that knowledge is dependent upon sense experience. (Markie, 2008, section 1.2) Yet, philosophical empiricism is defined in such an absolute way; which causes philosophical empiricism to be an inaccurate philosophical position from which to address all aspects of human life. Philosophical empiricism is defined as the belief that all human knowledge arises from sense experience. (Nash, 1999, page 254) Yet, medical empiricism is so far to the other extreme as to be insulting, while this empiricism is still said to be based on all sensory experience; only the scientific sensory experience is valued and counted.... [tags: philosophy]:: 5 Works Cited 1014 words(2.9 pages)Strong Essays[preview] Rationalism vs. Empiricism: The Argument for Empricism - There are two main schools of thought, or methods, in regards to the subject of epistemology: rationalism and empiricism. These two, very different, schools of thought attempt to answer the philosophical question of how knowledge is acquired. While rationalists believe that this process occurs solely in our minds, empiricists argue that it is, instead, through sensory experience. After reading and understanding each argument it is clear that empiricism is the most relative explanatory position in epistemology.... [tags: Philosophy ]:: 2 Works Cited 846 words(2.4 pages)Better Essays[preview] Rationalism vs. Empiricism - Rationalism and empiricism were two philosophical schools in the 17th and 18th centuries, that were expressing opposite views on some subjects, including knowledge. While the debate between the rationalist and empiricist schools did not have any relationship to the study of psychology at the time, it has contributed greatly to facilitating the possibility of establishing the discipline of Psychology. 
This essay will describe the empiricist and rationalist debate, and will relate this debate to the history of psychology.... [tags: Philosophy]:: 8 Works Cited 1587 words(4.5 pages)Powerful Essays[preview] Empiricism and Rationalism: Searching for God and Truth - We live in a time where everyone is searching for a reason to believe in something, there have posters and advertisement stating that Only Prayer Can Save America. Well if prayer can save us then there is only one question left to be answered. Who are we praying to. What are we praying for. God is the almighty, the creator of everything and without him there would be no world and no us. But many people seem to question if He really exist. In the world there are many streams of philosophy that have argued the existence of God, Platonism, naturalism, Aristotelianism, realism, empiricism, and rationalism they have even tried to convince nonbelievers about the defensibility and validity of God... [tags: Philosophy ]:: 3 Works Cited 839 words(2.4 pages)Better Essays[preview] Knowledge Acquisition: Empiricism vs Rationalism - For this critical analysis essay, I am writing on the following discussion post: "Rationalism is more via[b]le than empiricism in regards to knowledge. Empiricism may have the data and research to support its claims, but Rationalism strives to prove its evidence through reason. Using the example in our text book, the number 2 can never be greater than the number 3 - it is just plain illogical and does not make any sense to think or state that. Our reason for defending this claim is that using our priori, or from the former, states that we do not physically have to experience the number 3 being greater than the number 2 (the nature of numbers is gray area).... 
[tags: Critical Analysis Essay]:: 3 Works Cited 1780 words(5.1 pages)Powerful Essays[preview] Empiricism Versus Rationalism: Descartes and Hume - Rationalism and empiricism have always been on opposite sides of the philosophic spectrum, Rene Descartes and David Hume are the best representative of each school of thought. Descartes rationalism posits that deduction, reason and thus innate ideas are the only way to get to true knowledge. Empiricism on the other hand, posits that by induction, and sense perception, we may find that there are in fact no innate ideas, but that truths must be carefully observed to be true. Unlike one of empiricisms major tenets, Tabula Rasa, or blank slate, Descartes believed that the mind was not a blank slate, but actually came pre-loaded, if you will, with ideas, which are part of our rational nature an... [tags: philosophy, god, science]541 words(1.5 pages)Good Essays[preview] Rationalism in America: The Age That Shaped the World - It can be said, but not denied, that the United States of America is one of the most powerful countries in the world today, and has been for arguably the last one hundred years. With its political agendas and military strength it shapes governments; with its social trends and values it shapes cultures. But what, exactly, shaped the United States. The various worldviews that have sprouted from Western philosophy is the most obvious answer, but, to be more specific, it is how those worldviews were adopted that were of the most significance.... [tags: U.S. History ]:: 3 Works Cited 1878 words(5.4 pages)Term Papers[preview] Rene Descartes is a Rationalist - There is a distinct difference between rationalism and empiricism. In fact, they are very plainly the direct opposite of each other. Rationalism is the belief in innate ideas, reason, and deduction. Empiricism is the belief in sense perception, induction, and that there are no innate ideas. 
With rationalism, believing in innate ideas means to have ideas before we are born.-for example, through reincarnation. Plato best explains this through his theory of the forms, which is the place where everyone goes and attains knowledge before they are taken back to the visible world.... [tags: Rationalism vs Empiricism]:: 3 Works Cited 716 words(2 pages)Strong Essays[preview] Rationalism and Empiricism - Rationalism and Empiricism Rationalism and Empiricism are most likely the two most famous and intriguing schools of philosophy. The two schools deal specifically with epistemology, or, the origin of knowledge. Although not completely opposite, they are often considered so, and are seen as the "Jordan vs. Bird" of the philosophy world. The origins of rationalism and empiricism can be traced back to the 17th century, when many important advancements were made in scientific fields such as astronomy and mechanics.... [tags: Philosophy Epistemology Papers]:: 2 Works Cited 1485 words(4.2 pages)Powerful Essays[preview] The Rationalism of Descartes and Leibniz - The Rationalism of Descartes and Leibniz Although philosophy rarely alters its direction and mood with sudden swings, there are times when its new concerns and emphases clearly separate it from its immediate past. Such was the case with seventeenth-century Continental rationalism, whose founder was Rene Descartes and whose new program initiated what is called modern philosophy. In a sense, much of what the Continental rationalists set out to do had already been attempted by the medieval philosophers and by Bacon and Hobbes.... [tags: Papers]1674 words(4.8 pages)Strong Essays[preview] When Rationalism and Empiricism Collide: the Best of Both Worlds - For a lengthy period of time, philosophers have been fiercely debating the classification of philosophical epistemology into two categories: rationalism and empiricism. 
Empiricism is the idea that knowledge can only be gained through obtaining facts via observation or experimentation, while rationalism is obtaining knowledge through logical reasoning . Though rationalism and empiricism are very viable methods of thought in philosophy on their own, these philosophical schools arguments become much stronger when used in conjunction.... [tags: Philosophy ]:: 4 Works Cited 1310 words(3.7 pages)Strong Essays[preview] The Role of Naturalism and Rationalism in American and British Gun Policy - Although they may not be aware of it, complex philosophic principles influence the simple actions of the masss everyday lives. In fact, long lasting and well defined contentions of basic philosophy concerning the actions of human beings has not only affected individuals, but also entire countries. Some of the greatest nations on Earth have been formed around key thoughts and opinions of several great philosophers. Primarily amongst these, however, or John Locke and Thomas Hobbes, both of whom wrote on The State of Nature, or the state of absolute freedom.... [tags: Gun Control Laws]753 words(2.2 pages)Better Essays[preview] Extreme Rationalism - Extreme Rationalism Rationalism is the idea that we can gain knowledge through the processes of mind alone. Empiricism is the idea that we can only gain knowledge through the senses. Empiricism has been adopted by the Western world because it is the foundation of the scientific approach to life that we use. Various popular sayings such as 'seeing is believing' and 'I heard it with my own ears', show that we accept the use of the senses without question.... [tags: Papers]539 words(1.5 pages)Good Essays[preview] The Effect of Rene Descartes and David Hume on the Philosophical World - Rene Descartes and David Hume both have had a profound effect on the philosophical world. Both these philosophers are associated explicitly with two separate schools of philosophy which are Rationalism and Empiricism. 
It is this division between Rationalism and Empiricism that allows for Descartes and Hume to present differing accounts of the mind and mentality. Descartes is widely recognized as the father of modern philosophy, he is a rationalist, who considers knowledge of the metaphysical as existing separate from physical reality believing that truth cannot be acquired through the senses but through the intellect in the form of deductive reasoning.... [tags: rationalism, empirism]1078 words(3.1 pages)Strong Essays[preview] Philosophy of Immanuel Kant - There are different views about how we gain knowledge of the world, through our senses or through our minds, and although many say that it is one or the other I believe that although we gain some knowledge through sense data not all of our ideas come from these impressions. There are those who stand on the side of empiricism, like David Hume, and those who stand on the side of rationalism, like Ren Descartes; then there are also those who believe that one can have a foot on both sides, like Immanuel Kant.... [tags: rationalism, empiricism]:: 4 Works Cited 1411 words(4 pages)Powerful Essays[preview] Comparing the Approaches of Rationalism and Empiricism Towards a Theory of Knowledge - Comparing the Approaches of Rationalism and Empiricism Towards a Theory of Knowledge Rationalism ----------- Rene Descartes was the main rationalist. He said he believed he had to doubt everything known to him to really understand knowledge. Rationalism first began in Ancient Greece with two extreme rationalists - Parmenides and Zeno. Rationalists believed in innate ideas - ones that are present at birth, in the mind. When Descartes started his thoughts, it was in the 17th century, during the rise of science.... [tags: Papers]986 words(2.8 pages)Good Essays[preview] How Rationalism Changed the World Between 1650-1750 - How Rationalism Changed the World Between 1650-1750 Many changes took place between 1650 and 1750. 
There was territory expansion as European countries started colonies in the Americas. Political changes took place as well, as people began to rise up against the government. New economic standards were set as people realized that having money was not the way to economic control but by controlling the means of production. Britain, France, and Spain were busy establishing colonies in the Americas.... [tags: Papers]329 words(0.9 pages)Strong Essays[preview] The Age of Enlightenment or Age of Reason Analysis - The Age of Enlightenment also known as the Age of Reason took place around Europe between the 17th and 18th century. It was a movement that took place to emphasize the use of reason and science in the world. In addition, it was to enlighten or shed light upon the use of factual reasoning and promote the use of evidence when doing things. Thinkers and well-known philosophers of the time such as Voltaire, Diderot, D'Alembert, Descartes, Montesquieu and more were beginning to understand and promote reasoning beyond the traditional ways of doing things.... [tags: reasoning, enlightment, rationalism]648 words(1.9 pages)Better Essays[preview] Analysis of Rathenau Paper on Policy and the Evidence Beast - ... Nevertheless, the empiric view of knowledge on which positivism is based has long been subject to limitations. Immanuel Kant noted for instance that knowledge does not only come from the senses but also from a basic pallet of conceptual knowledge we all have. Furthermore, the interpretation of observations can differ due to the different way everyone acquires concepts. The claims done by Staman and Slob (2012) mentioned earlier are analyzed below for using this perspective on science. The aforementioned claims (k1,k2 an k3) are all subject to reliability issues if true knowledge is assumed to only come from what is observable or inductive.... 
[tags: rationalism and social-constructivism]:: 3 Works Cited 1263 words(3.6 pages)Term Papers[preview] To Know Divine Revelation, We Must Understand How Faith and Reason Work - ... In order to understand how reason complements faith it is necessary to know how each works separately. Reason is the mode or act of thinking; by extension it comes to designate on the one hand the faculty of thinking and on the other the formal element of thought, such as plan, account, ground, etc. With reason humans have the power of the intellect to know universal truth. Thanks to reason humans are different from animals, as Aristotle once said men is a rational animal, without reason we would not have the capacity to know the truth.... [tags: God, salvation, rationalism]633 words(1.8 pages)Better Essays[preview] Analysis of Rene Descartes' Meditations on First Philosophy - Rene Descartes Meditations on First Philosophy Rene Descartes set the groundwork for seventeenth century rationalism, the view opposed by the empiricist school of thought. As a rationalist, Descartes firmly believed in reason as the principal source of knowledge. He favoured deduction and intellect over the senses and because of this he did not find comfort in believing that his opinions, which he had developed in his youth, were credible. It is for this reason that Rene Descartes chose to raze everything to the ground and begin again from the original foundations, (13).... [tags: rationalism, doubt, knowledge]:: 1 Works Cited 1319 words(3.8 pages)Strong Essays[preview] Adam Smith: A Brilliant Thinker from the Enlightenment - The Enlightenment was during the eighteenth century, it had brought new ways of philosophy and new ways of thinking. The big idea of the enlightenment was taking old ideals and seeing how they can be improved and altered. Everything that was proved or discovered had to come through some sort of reason, either from experimentation or practical practice. 
The enlightenment had included many brilliant thinkers, in which one of them is Adam Smith. Adam Smith is considered the father of the science of political economy, he had thought up the idea of capitalism which had included the invisible hand theory, the idea of self-interest and laissez-faire, which states that businesses are free to act how... [tags: capitalism, rationalism, empiricism]1221 words(3.5 pages)Better Essays[preview] Approaches to the Construction of Knowledge - ... Similarly, I defined the term systematic organization as a precise procedure which occurs and can reoccur and whose resulting product can be compartmentalized and build upon itself. The natural sciences such as biology or chemistry often use this method to support the information they present to the scientific community and to society. In my own experience with biology, there is great stress placed upon following the scientific procedure when conducting experiments in order to ensure accurate results.... [tags: systematic organization, rationalism, history]1353 words(3.9 pages)Strong Essays[preview] Pragmatism as a Philosophy - I have often heard people use the word pragmatic to describe actions, laws or feelings, but I never really looked at pragmatism as a philosophy before. As we studied this semester I found myself asking one question about each philosophy we covered. We discussed skepticism and the claim that we have no knowledge (Lawhead, W., The Philosophical Journey, 2009, p. 55). We compared rationalism and empiricism which posit that we do have knowledge, but disagree on whether that knowledge comes from intellect or experience (Lawhead, p.... [tags: Skepticism, Rationalism, Metaphysics]892 words(2.5 pages)Strong Essays[preview] Descartes and Aristotle - People live life one day at time with the same guidance from their ancestors, and they often question their existence in the universe and try to understand the world around them. 
People often question their existence in the universe. Philosophers try to answer questions that most people will not think of in their daily lives. Most philosophers try to get at the truth of logical questions through epistemology. Epistemology is a branch of philosophy that studies the nature and possibility of knowledge (Soccio).... [tags: Rationalism, Priori, Posteriori, Philosophers] :: 2 Works Cited 1353 words (3.9 pages) Strong Essays [preview]

UCSB as a Rationalist Organization - The University of California Santa Barbara is an organization revolving around students and faculty alike. Any organization can reflect two contrasting perspectives, Naturalist or Rationalist, which underline and question the ideas of structure and formality. Naturalist organizations convey informality because they are based on the flow of the members' behavior and relationships among others. Nonetheless, a Rationalist organization is formal because the organization's fluidity is based on the members' limits and structure.... [tags: Rationalist organization essay] 1116 words (3.2 pages) Strong Essays [preview]

Rationalism and Fascist Politics - Ghirardo points out that the relationship between modern architecture and fascism is not as clear as recent analysis might have it. What do you think was the aspiration for modern 'rational' architecture, and why would it be associated with socialist politics? Further, why was there such a close relationship between modern architecture and fascism in Italy in the pre-war years, but not in Germany? Rationalism was one of the key movements in Italy after World War One. It set about broadening the scope of modern architecture by formulating clear strategies for dealing with the industrialisation and urbanisation of Italy.... [tags: Modern Architecture] 1401 words (4 pages) Powerful Essays [preview]

The Romantic Movement - The Romantic Movement (1800-1850): Art as Emotion. The goal of self-determination that Napoleon imported to Holland, Italy, Germany and Austria affected not only nations but also individuals. England's metamorphosis during the Industrial Revolution was also reflected in the outlook of the individual, and therefore in the art produced during the first half of this century. Heightened sensibility and intensified feeling became characteristic of the visual arts as well as the musical arts, and a convention in literature.... [tags: Rationalism Romanticism Landscape] :: 1 Works Cited 568 words (1.6 pages) Strong Essays [preview]

Consciousness - Most people would think of consciousness as their inner thoughts or the awareness one has of oneself and one's surroundings. My Introduction to Psychology textbook defines consciousness as "the subjective experience of perceiving oneself and one's surroundings" (Kalat, 2011, p. 342). According to the Oxford dictionary, it can be defined in philosophy as "The state or faculty of being conscious, as a condition and concomitant of all thought, feeling, and volition; the recognition by the thinking subject of its own acts or affections" (Schwarz, 2004, p. 425).... [tags: rationalism, empiricism] :: 5 Works Cited 1714 words (4.9 pages) Powerful Essays [preview]

Return to Curiosity: Privileging Wonder over Rationalism in Museum Displays and Learning - ... (Robinson, 2008) Although it is unclear whether the decline in divergent thinking over the course of childhood is directly caused by present education systems, it is clear that most learning energy is directed towards linear and rational forms of thinking. In "Resonance and Wonder," Stephen Greenblatt describes the museum as a repository for traces of culture and talks about the resonance of objects. He writes: "By resonance I mean the power of the object displayed to reach out beyond its formal boundaries to a larger world, to evoke in the viewer the complex, dynamic cultural forces from which it has emerged and for which as metaphor or more simply synecdoche it may be taken by a viewer t..." [tags: museum organization, contextualization] :: 13 Works Cited 1449 words (4.1 pages) Term Papers [preview]

Realism and Literature - In the late eighteenth century, a movement spread throughout the world that was known as the Romantic Era. The works of authors, artists, and musicians were influenced by emotions and imagination. Characters in literature during that time period heavily relied on impulses to guide them in their decisions. Whether it was the logical choice or not, they followed their hearts instead. The image that romanticism created was one of a perfect, unrealistic lifestyle because of the worship of the beauty of nature and human emotions.... [tags: Rationalism, Logic] :: 13 Works Cited 926 words (2.6 pages) Better Essays [preview]

The University of California Santa Barbara as a Rationalist Organization - The University of California Santa Barbara is an organization that revolves around students and faculty alike. Organizations, as a whole, can reflect two contrasting perspectives, Naturalist or Rationalist, that underline and question the ideas of structure and formality. A Naturalist organization highlights informality because it is based on the flow of the members' behavior and relationships among others. However, a Rationalist organization is formal because the organization's fluidity is based on the members' limits and structure.... [tags: informative essay] 1158 words (3.3 pages) Strong Essays [preview]

Othello: Admirable Leader but Poor Rationalist - In William Shakespeare's Othello, the main character is presented as an admirable leader but a poor rationalist. He is recognized as a hero with the qualities of vigor, charm, and eloquence.
However, these principles of leadership aren't always viewed as the criteria for a leader. The battleground, to Othello at least, is depicted as a place of admiration, where men speak truthfully to one another. Also, the given circumstances of state and warfare are rather straightforward; no one deceives Othello because, as leader, he should be esteemed.... [tags: Shakespearean Literature] :: 2 Works Cited 975 words (2.8 pages) Better Essays [preview]

Influences of the Rationalist, Structuralist and Culturalist Theoretical Approaches on Comparative Politics - ... Political theorist Antonio Gramsci pointed out that coherence between these two schools of thought can be found when considering the fact that whilst, according to Marxist teachings, capitalist societies are based on underlying structural conflict between the proletariat and the bourgeoisie, the manifestation of such conflict is dependent on the cultural circumstances of the country concerned. Similarly to culturalists, structuralists adopt a form of methodological holism. Structuralists task themselves with identifying the underlying dynamics that govern social systems as a whole, and upon doing so are able to embark on comparison between larger groups of countries governed by similar s... [tags: behavior, cost, society] 1615 words (4.6 pages) Powerful Essays [preview]

The Main Models of Comparative Politics - ... The rationalist school of modeling has a common ancestry with economics. Adam Smith, the prominent economic theorist and advocate of the free market, is credited with helping to lay out the path for this model. Many of the predominant thinkers in this school of modeling were also established economists, for example Anthony Downs and Mancur Olson. This background in economics was a clear influence on these important contributors, as many of the approaches they take are borrowed from economics.... [tags: rationalist, structuralist, culturalist] 764 words (2.2 pages) Better Essays [preview]

Rationality of Organizations and Management Theories - ... It assumes that workers are lazy and cannot handle complicated work. The manager's job is to issue simple tasks to subordinates and closely monitor them. Firm, fair and detailed work routines and procedures should be established too. Under this model, people are expected to endure as long as they are paid well, and they will produce up to standard because the tasks issued are simple enough and progress is closely controlled. The second is the human relations model. It assumes people want to gain self-confidence by achieving in their careers.... [tags: substantive rationality, human resources] :: 6 Works Cited 1756 words (5 pages) Term Papers [preview]

The Philosophical Legacy of the 16th and 17th Century Socinians: Their Rationality - ABSTRACT: The doctrines of the Socinians represent a rational reaction to a medieval theology based on submission to the Church's authority. Though they retained Scripture as something supra rationem, the Socinians analyzed it rationally and believed that nothing should be accepted contra rationem. Their social and political thought underwent a significant evolutionary process from a very utopian pacifistic trend condemning participation in war and the holding of public and judicial office to a moderate and realistic stance based on mutual love, support of the secular power of the state, active participation in soci... [tags: Philosophy Religion Essays] :: 4 Works Cited 2830 words (8.1 pages) Strong Essays [preview]

Education: Empiricists vs Rationalists - The importance of experience in education has always been the subject of philosophical debates. These debates between empiricists and rationalists have been going on for quite some time.
Rationalists are of the view that knowledge acquired through the senses is unreliable and that learning can only be done through reasoning. On the other hand, empiricists believe knowledge is acquired through empirical impressions and that concepts cannot be learnt without being experienced (Evans, 1992, p. 35). This debate was, however, resolved by Kant, who argues that both experience and rationality are necessary in learning.... [tags: philosophy of education] :: 7 Works Cited 1089 words (3.1 pages) Strong Essays [preview]

How Is the Conflict between Rationality and Irrationality Developed in "Death in Venice"? - The purpose of this essay is to examine the conflict between rationality and irrationality in Death in Venice and to assess how this conflict is developed and possibly resolved. This conflict is fought and described throughout the short story with reference to ancient Greek gods, predominantly Apollo and Dionysus, and through the philosopher and philosophy of Plato. Through contemporary influences such as Schopenhauer and Nietzsche, Mann further reflects on these ancient sources through a modern prism, and this he does in this tale of the life and death of the protagonist Aschenbach.... [tags: European Literature] 1997 words (5.7 pages) Powerful Essays [preview]

A Rationalization of Why People Use Skin Bleaching Products - This paper looks at the way people rationalize their use of skin bleaching products. It also looks at the forces that have led to this predicament of shame and perceived ugliness in any skin tone other than white skin. I have also looked into the psychological and physical effects of colorism on the people of Ghana. The idea of colorism is not new. It is only recently that a name has been placed on it and it has been studied. Countries that have people with various skin tones have always practiced ways to lighten their skin.... [tags: colorism, beauty, advertisements] 1031 words (2.9 pages) Strong Essays [preview]

Rats and Rationality by Joel Marks - The scientists Jonathan Crystal and Allison Foote have found that rats have high mental power, and the report of their research suggests that rats can be used in future neuroscience experiments. As a result, the usage of rats in neuroscience experiments will increase. The author of the article "Rats and Rationality," Joel Marks, argues against this proposal and emphasizes that the usage of rats in experiments should be decreased. Marks argues that the conclusion of the research, to use rats in neuroscience experiments, is illogical.... [tags: Article Review] :: 1 Works Cited 982 words (2.8 pages) Better Essays [preview]

Can Rationality and Morality Coincide? - To begin, one can define rationality as the quality of being agreeable to reason. It is when a person reasons correctly or validly, doing what he or she honestly considers to be the right thing. On the other hand, morality can be defined as the quality of acting properly; it is the way a person conducts or behaves. Morality is about the rightness or the wrongness of something. A good example of morality is the way a person treats another: if a person wants respect from others, he or she has to show respect to them.... [tags: Value Domains, Rational Agent] :: 2 Works Cited 979 words (2.8 pages) Better Essays [preview]

The Economic Rationality Assumption - The economic rationality assumption has given an important connotation to market efficiency, as it has been the base on which the modern knowledge of standard finance has been constructed, resulting in the development of the most important insights in finance, such as the arbitrage arguments of Miller and Modigliani, the Markowitz portfolio optimization, the capital asset pricing theory of Sharpe and Lintner, and the option-pricing model of Black, Scholes and Merton (Pompian, 2006 and Lo, 2005).... [tags: Arbitrage, Finance] 1240 words (3.5 pages) Good Essays [preview]

Distinguished Ways To Achieve Knowledge: A Priori and A Posteriori - When it comes to knowledge, the main focus of philosophers is propositional knowledge, or knowing that something is or is not the case (Vaughn, 254). Philosophers believe that propositional knowledge has three necessary conditions: to know a proposition, we must believe it, it must be true, and we must have good reasons to justify why it is true (Vaughn, 254). In other words, just because we believe in something, that does not make it true. In order to have knowledge, our beliefs must be true, and we must have sound reasons to believe that they are true.... [tags: descartes, rationalists, empiricists] :: 1 Works Cited 994 words (2.8 pages) Better Essays [preview]

Analysis of Rationality in A Midsummer Night's Dream - William Shakespeare's A Midsummer Night's Dream is not simply a light-hearted comedy; it is a study of the abstract. Shakespeare shows that the divide between the dream world and reality is inconstant and oftentimes indefinable. Meanwhile, he writes about the power of the intangible emotions, jealousy and desire, to send the natural and supernatural worlds into chaos. Love and desire are the driving forces of this play's plot, leaving the different characters and social classes to sort out the resulting pandemonium.... [tags: Class Division, Abstract Thought, Shakespeare] 1061 words (3 pages) Strong Essays [preview]

Rationality of Financial Markets on Investment Variables - The rationality of financial markets has been one of the most hotly contested issues in the history of modern financial economics.
Recent critics of the Efficient Markets Hypothesis argue that investors are generally irrational, exhibiting a number of predictable and financially ruinous biases such as overconfidence, overreaction, loss aversion, herding, psychological accounting, miscalibration of probabilities, and regret. The sources of these irrationalities are often attributed to psychological factors: fear, greed, and other emotional responses to price fluctuations and dramatic changes in an investor's wealth.... [tags: irrational, psychological, investment] 543 words (1.6 pages) Good Essays [preview]

Natural Law, Rationality and the Social Contract - Each day, billions of people throughout the world affirm their commitment to a specific idea: to be part of a society. While this social contract is often overlooked by most citizens, their agreement to it nevertheless has far-reaching consequences. Being a member of society entails relinquishing self-autonomy to a higher authority, whose aim should be to promote the overall good of the populace. While the decision to become part of a commonwealth is usually made without explicit deliberation, there is a common consensus amongst philosophers that something unique to the human experience is the driving force behind it.... [tags: Philosophy, Sociology, Informative] 2087 words (6 pages) Powerful Essays [preview]

Expertise and Rationality - ABSTRACT: I explore the connection between expertise and rationality. I first make explicit the philosophically dominant view on this connection, i.e., the expert-consultation view. This view captures the rather obvious idea that a rational way of proceeding on a matter of importance, when one lacks knowledge, is to consult experts. Next, I enumerate the difficulties which beset this view, locating them to some extent in the current philosophical literature on expertise and rationality.... [tags: Philosophy Philosophical Papers] :: 10 Works Cited 3305 words (9.4 pages) Strong Essays [preview]

Ressentiment and Rationality - ABSTRACT: This paper is an investigation of the condition of ressentiment. It reviews the two most prominent philosophic accounts of ressentiment: Nietzsche's genealogy of ressentiment as the moral perversion resulting from the ancient Roman/Palestinian cultural conflict and giving birth to the ascetic ideal; and Scheler's phenomenology of ressentiment as a complex affective unit generative of its own affects and values. A single sketch of the typical elements of ressentiment is drawn from the review of these two accounts.... [tags: Philosophy Philosophical papers] :: 2 Works Cited 3915 words (11.2 pages) Strong Essays [preview]

Rationality in Humans - Contradiction is the nature of society. If there is a religion, there will be those who do not believe. If there is a war, there will be those who want peace. If there is a political movement, there will be those who disagree. Humans are bound to go against their own beliefs, their own strategies, and their own establishments. Nothing is forever. History portrays people going against the accepted ideologies. It shows the everlasting change of society. First, they thought that God was the explanation for everything.... [tags: European History] 782 words (2.2 pages) Better Essays [preview]

Instrumental Rationality and the Instrumental Doctrine - ABSTRACT: In opposition to the instrumental doctrine of rationality, I argue that the rationality of the end served by a strategy is a necessary condition of the rationality of the strategy itself: means to ends cannot be rational unless the ends are rational. First, I explore cases involving proximate ends (that is, ends whose achievement is instrumental to the pursuit of some more fundamental end) where even instrumentalists must concede that the rationality of a strategy presupposes the rationality of the end it serves.... [tags: Philosophy Philosophical Papers] :: 2 Works Cited 3442 words (9.8 pages) Strong Essays [preview]

Psychopaths and the Future of Humanity - I first encountered the idea of a psychopath in Thomas Harris' thriller The Silence of the Lambs. Hannibal Lecter was deeply fascinating, and all the more frightening because he didn't look like a grotesque monster, a violent and bloodthirsty beast. Instead, he is a charming and intelligent character with a doctorate in psychology. His possible existence forced me to reflect, and sound the depths of darkness within. However, psychopaths remained only a curiosity until this quarter, when I encountered the idea of psychopaths again in the works of moral philosophers.... [tags: Psychology] 1460 words (4.2 pages) Better Essays [preview]

Colonial Period Focused Around God and Church - ... An example of the strong belief that people had in God was the ferocity with which Jonathan Edwards preached in his sermon "Sinners in the Hands of an Angry God." Edwards rants on an hour-long tangent about how God at any time can expel the wicked into the hands of the devil and how only God's grace can save us. The emphasis and power with which Edwards preached is enough to bring any man to his knees. This strong belief echoed throughout America; you didn't just have to look into a church to see it. Anne Bradstreet's "Verses upon the Burning of Our House" is a good example of the common ideal that God was the most mighty and that anything that happened, happened for a reason and was inflicted by God so i...
[tags: american culture, puritans, indians] 518 words (1.5 pages) Strong Essays [preview]

The Objectivity and Rationality of Morality - According to Kant, morality is rational and objective. It is based on rational human reasoning. For Kant it is not the consequences of an action that make it moral but the reasoning or intention behind the choices one makes. What Kant is saying is that the only thing which can be qualified as good is good intention.... [tags: Papers] 1134 words (3.2 pages) Strong Essays [preview]

Historical Types of Rationality - ABSTRACT: In this paper we suggest that the contemporary global intellectual crisis of our (Western) civilization consists in the fundamental transformation of the classical (both Ancient and Modern) types of rationality towards the nonclassical one. We give a brief account of those classical types of rationality and focus on a more detailed description of the contemporary process of the formation of the new HTR, which we label as nonclassical. We consider it to be one of the historical possibilities that might radically transform the fundamentals of our human world; in fact, this process has already begun.... [tags: Culture History Essays] :: 11 Works Cited 3004 words (8.6 pages) Strong Essays [preview]

Witchcraft, Magic and Rationality - Social Anthropology seeks to gauge an understanding of cultures and practices, whether they are foreign or native. This is achieved through the studying of language, education, customs, marriage, kinship, hierarchy and, of course, belief and value systems. Rationality is a key concept in this process, as it affects the anthropologist's interpretation of the studied group's way of life: what s/he deems rational or plausible practice. Witchcraft and magic pose problems for many anthropologists, as their supernatural nature perhaps conflicts with common Western notions of rationality, which are generally deemed superior.... [tags: Social Anthropology] :: 8 Works Cited 2268 words (6.5 pages) Powerful Essays [preview]

Coherence and Epistemic Rationality - This paper addresses the question of whether probabilistic coherence is a requirement of rationality. The concept of probabilistic coherence is examined and compared with the familiar notion of consistency for simple beliefs. Several reasons are given for thinking rationality does not require coherence. Finally, it is argued that incoherence does not necessarily involve fallacious reasoning. Most work in epistemology treats epistemic attitudes as bivalent. It is assumed that a person either believes that there is an apple on the table, or that there is not, and that such beliefs must be either warranted or unwarranted.... [tags: Mathematics Science Theories Papers] :: 10 Works Cited 3366 words (9.6 pages) Strong Essays [preview]

Three Traditions of International Theory - The realist normative tradition illustrates international relations as a condition of international anarchy (sociological terms); the rationalist normative tradition illustrates international relations as a condition of international society (teleological terms); and the revolutionist normative tradition illustrates international relations as a condition of harmony or a single utopia in the world (ethical and prescriptive terms). Realism prioritizes national interest and security over ideology, moral concerns and social reconstructions.... [tags: International Politics] :: 3 Works Cited 699 words (2 pages) Better Essays [preview]

Rationality and Inconsistent Beliefs - Many believe that there is something inherently irrational about accepting each element of an inconsistent set of propositions. However, arguments for this doctrine seem lacking, other than those that appeal to the principle that the set of propositions one rationally accepts is (or should be) closed under logical consequence, or those that note that error is made inevitable when one accepts an inconsistent set. After explaining why the preceding sorts of arguments do not succeed, I consider a novel attempt by Keith Lehrer to undermine the chief argument in favor of the claim that it can sometimes be rational to accept inconsistent sets.... [tags: Ration Logic] :: 1 Works Cited 3610 words (10.3 pages) Powerful Essays [preview]

Rationality in Religious Belief - The obtaining of information is an inseparable part of human life, and therefore in whatever one may do, one will always collect information. To be of any value, the information collected has to be reliable, and one does not seem to doubt the reliability of evidence believed to be logical, unless one is a sceptic. Some say that religion is something we cannot prove because we acknowledge religion through our feelings, mainly our feelings of trust, or of wonder and awe, sensing that there must be a higher being or creator.... [tags: Papers] 552 words (1.6 pages) Good Essays [preview]

The English School: A Via Media - The English School, also recognized as the "International Society" approach to International Relations, is a via media (Buzan, 2001, p. 471) between Rationalist and Realist elements. The idea is that instead of separate elements, these should form a whole picture of International Relations. The unique approaches of the English School to International Relations are its methodological pluralism, its historicism and its interlinking of three very important concepts: International System, International Society and World Society....
[tags: Education, International Relations] 1003 words (2.9 pages) Good Essays [preview]

Davidson's Beliefs, Rationality and Psychophysical Laws - ABSTRACT: Davidson argues that the connection between belief and the "constitutive ideal of rationality" precludes the possibility of there being any type-type identities between mental and physical events. However, there are radically different ways to understand both the nature and content of this "constitutive ideal," and the plausibility of Davidson's argument depends on blurring the distinction between two of these ways. Indeed, it will be argued here that no consistent understanding of the constitutive ideal will allow it to play the dialectical role Davidson intends for it.... [tags: Psychology Essays] :: 2 Works Cited 2983 words (8.5 pages) Strong Essays [preview]

Epistemological Turn in European Scientific Rationality - ABSTRACT: If the 17th century could be considered the century of the reformation of science, the present century is one of counterreformation in every sense of the word. The ideology of this century can be seen in the titanic efforts to complete the development of science whose foundations were laid in the 17th and 18th centuries, in the outright failures, and in attempts at reconstructing the foundations (e.g., Hilbert's formalization program, Gödel's incompleteness theorem, Charlier's theory of a hierarchic universe, Friedmann's evolutionary cosmology, Newton's mechanics, relativistic and/or quantum mechanics in physics, the logical turn... [tags: Science Essays] :: 5 Works Cited 2526 words (7.2 pages) Strong Essays [preview]

The Rationality of Probabilities for Actions in Decision Theory - ABSTRACT: Spohn's decision model, an advancement of Fishburn's theory, is valuable for making explicit the principle, used also by other thinkers, that "any adequate quantitative decision model must not explicitly or implicitly contain any subjective probabilities for acts." This principle is not used in the decision theories of Jeffrey or of Luce and Krantz. According to Spohn, this principle is important because it has effects on the term of action, on Newcomb's problem, and on the theory of causality and the freedom of the will.... [tags: Philosophy Philosophical Essays] :: 14 Works Cited 3032 words (8.7 pages) Powerful Essays [preview]

Essay on Rationality in Homer's Odyssey - The Importance of Rationality in Homer's Odyssey. In the epic poem the Odyssey, Homer provides examples of the consequences of impulsive and irrational thinking, and the rewards of planning and rationality. Impulsive actions prove to be very harmful to Odysseus. His decisions when he is escaping the cave of the Cyclops lead to almost all his troubles through his journey. As Odysseus flees the cave, he yells back: "Cyclops - if any man on the face of the earth should ask you who blinded you, shamed you so - say Odysseus, raider of cities, he gouged out your eye." This enrages the giant, and he prays to Poseidon: "grant that Odysseus, raider of cities, Laertes' son who makes his home in Itha..." [tags: Homer Odyssey Essays] 1065 words (3 pages) Strong Essays [preview]

Descartes' Two Meditations - This paper seeks to discuss the first question. It will have its basis in the first two meditations of Descartes, representing Rationalism, as well as draw on empiricist points of view for contrast and discussion. I will draw on the curriculum for references, namely Think, Simon Blackburn, 1999, as well as The Philosophy Gym, Stephen Law, 2003. Furthermore, references to the slides from the Knowledge seminar will be used. In the first meditation, Descartes argues that everything he perceives as reality might as well be the work of an all-powerful evil demon whose only objective is to deceive him.... [tags: philosophical discussion] :: 2 Works Cited 815 words (2.3 pages) Better Essays [preview]

Lakatos and MacIntyre on Incommensurability and the Rationality of Theory-change - ABSTRACT: Imre Lakatos' "methodology of scientific research programs" and Alasdair MacIntyre's "tradition-constituted enquiry" are two sustained attempts to overcome the assumptions of logical empiricism while saving the appearance that theory-change is rational. The key difference between them is their antithetical stand on the issue of incommensurability between large-scale theories. This divergence generates other areas of disagreement; the most important are the relevance of the historical record and the presence of decision criteria that are common to rival programs.... [tags: Science Scientific Philosophy Essays] :: 7 Works Cited 3412 words (9.7 pages) Strong Essays [preview]

Analysis of Western Civilizations: Ideas, Politics, and Society - ... Cannabis use is illegal, even though medical specialists have pointed to its ability to treat certain illnesses, including cancer, while alcohol, which kills many each year, is still perfectly legal. To me, this shows that we are not truly politically free; we are just exponentially better off than other countries. Inner freedom, the ability of each individual to make their own moral choices, is another concept that I am ambivalent about.
While for the most part we are free to choose our own moral ground, our Government does stand in the way and make the decision for us in regard to some things, such as the gay marriage issue and drugs, but also things such as the legal age of dating and what cr... [tags: rationality, impulse, country, president] 861 words (2.5 pages) Strong Essays [preview]

The Rationality of Scientific Discovery: The Aspect of the Theory of Creation - ABSTRACT: In order to understand the rationality of scientific creation, we must first clarify the following: (1) the historical structure of scientific creation, from starting point to breakthrough, and then to establishment; (2) the process from the primary through the productive aspects of the scientific problem, the idea of creation, the primary conjecture, the scientific hypothesis, and finally the emergence of the genetic structure establishing the theory; and (3) the problem threshold of rationality in scientific creation.... [tags: Philosophical Science Scientific Papers] :: 11 Works Cited 2759 words (7.9 pages) Strong Essays [preview]

Conflict and Opposition in the Works: Dr Faustus and Solid Geometry - When conflict arises in literature, it is normally evident both externally and internally. Opposition is an important drive in both Marlowe's play and McEwan's short story. The male protagonists are both engaged in an inner life, disregarding everything else without concern for what this might mean. The presence of an external opposing voice in both texts serves to highlight and question this kind of existence. The sheer contrast of protagonist and antagonist is enough to remind the audience how extreme both men's behaviour is.... [tags: Obsession, Antagonism, Rationality] :: 1 Works Cited 1953 words (5.6 pages) Term Papers [preview]

Perception as the Source and Basis of Knowledge - It is human nature to desire to acquire knowledge, but how we acquire this knowledge is a constant debate between philosophers. For years philosophers have written about different sources of knowledge. We can divide these ideas into two theories, rationalism and empiricism. A question that divides the two dogmas is: "Is perception the source of knowledge?" Empiricists say yes whole-heartedly, while Rationalists believe that we attain knowledge through reason.... [tags: Papers] 564 words (1.6 pages) Good Essays [preview]

Oedipus the King and Antigone: Rationality Versus Emotionalism - Rationality is the quality or state of being agreeable to reason; it is this trait that separates man from animal. Man and beast, however, still have something in common: in an emotional state, both are subject to acting irrationally. For instance, a normally very loving pet can become violent simply because one of his toys was taken away - not to say that he is no longer loving; he is just overwhelmed by anger. Likewise, in Sophocles's Oedipus Rex and Antigone, the protagonists Oedipus and Creon (who appears in both stories) exhibit a similar disposition to the "loving pet": while they are usually reasonable, having their fates verbally revealed to them triggers an emotion that results in th... [tags: Oedipus Rex, Sophocles] 779 words (2.2 pages) Good Essays [preview]

War in the Nuclear Era - Addressing the question of whether war is a rational decision or a mistake is important for understanding the causes of war and explaining the reduction in the number of wars fought among countries in today's nuclear era. The argument under which war is a mistake is a normative claim about what action states should have chosen, based on the outcomes that have been produced. That is, for a decision to be good, it needs to have produced the actor's preferred outcome. However, the mistake perspective is problematic under the uncertainty and competitiveness of the anarchic international political system.... [tags: Rationality and World Politics] :: 5 Works Cited 2495 words (7.1 pages) Research Papers [preview]

Augustine and the Locus of Collective Memory - In books X and XI of his Confessions, Augustine aims to tackle the intriguing questions of memory and time, respectively. His phenomenological as well as rigorous approach has attracted many later commentators. Paul Ricoeur (1913-2005) can be taken as one of these, although Ricoeur's angle is decisively distinct from Augustine's: it can be said to represent a certain hermeneutical rationality. By using Ricoeur's material as a springboard, this paper aims to examine both the possibility and the locus of collective memory (part I) as well as Ricoeur's reply to Augustine's challenging question "quid est enim tempus?" (part II).... [tags: hermeneutical rationality, Paul Ricoeur] :: 1 Works Cited 3491 words (10 pages) Term Papers [preview]

Various Perspectives on Free Will - ... Chaos provides evidence for this type of indeterminism; it attempts to disprove the idea that things are determined, yet it shows we have no control. The argument behind chaos relies mainly on the fault of experimenters, who fail to account for chaos or randomness in their studies. Many scientists tend to disregard the indeterminism that happens at the quantum level because of its relative insignificance (Rovelli). However, proponents of chaos argue that by disregarding the randomness at the quantum level, studies do not take into account the possibility that quantum events can be amplified....
[tags: rationality, compatibilism, determinism]3185 words(9.1 pages)Strong Essays[preview] Anarchist Political Culture: Tory Corporation - Anarchism An anarchist political culture only exists in small communities where everyone knows everyone. Every person has face to face accountability, and will most likely live out their lives entirely within that community. Their paradigms about society are communal throughout the community and as well are the roles of the individuals in that community. Family contacts and their constant reinforcement through personal contact hold the tightly bound single-culture society together. Due to this the closeness and the shared values amongst the community, a government system is not required.... [tags: rationality, traditions, oligarchy]552 words(1.6 pages)Good Essays[preview] Framing the Innateness Hypothesis - Framing the Innateness Hypothesis Perhaps the most traditional way of framing the innateness hypothesis would be in terms of an opposition between rationalism and empiricism. This is an opposition that is frequently encountered in philosophical debates over language acquisition, with the one side arguing that language acquisition is a phenomenon associated with the maturation of a language faculty or "organ," while the other side argues that language acquisition is instead a process of generalization from experience.... [tags: Language Learning Essays]:: 6 Works Cited 1582 words(4.5 pages)Powerful Essays[preview] Analysis of Satirical Literature - During the Age of Enlightenment, people began believing in and relying upon rational thought instead of religious dogma to explain the world. This newfound emphasis on rationality promoted a breadth of freedom in speech that was previously unknown, a fact which was utilized by philosophers such as John Locke, Rousseau, and Sir Isaac Newton. 
In addition, the Age of Enlightenment produced famous writers who didnt agree with the irrational politics and old traditions of their respective countries, and instead relied upon wit and satire to expose the corruption and poor human condition existing around them.... [tags: Enlightenment Writers, Rationality]798 words(2.3 pages)Better Essays[preview] Analysis of Friedrich Nietzsches Book 5 of The Gay Science - ... Nietzsche declares that even if some of these interpretations may include too much devilry, stupidity and foolishness, it does not matter because it does not rely on faith (Nietzsche 336). The new infinite that arises is ours, in which the abundance of perspectives is too overwhelming for any scholar to give meaning to such chaos. There is no logical reason how such disorder should be confined to a single perspective in order to better understand the world, as the world is infinite in all its glory.... [tags: god, science, rationality, freedom, progress]1632 words(4.7 pages)Powerful Essays[preview] The Principle of Credultiy, the Will to Believe, and the Role of Rationality and Evidence in Religious Experience - The Principle of Credultiy, the Will to Believe, and the Role of Rationality and Evidence in Religious Experience Explain the principle of credulity, the will to believe and the role of rationality and evidence in religious experience The principle of credulity, the will to believe and the role of rationality and evidence all play crucial roles while attempting to explain religious experience. The principle of credulity states that religious experiences should be taken at their face value when we have no positive reason to doubt them.... [tags: Papers]572 words(1.6 pages)Strong Essays[preview]

Go here to see the original:

Free rationalism Essays and Papers - 123HelpMe

Christian hedonism – Wikipedia

Christian hedonism is a Christian doctrine found in some evangelical circles, particularly those of the Reformed tradition, especially in the circle of John Piper. The term was coined by Reformed Baptist pastor John Piper in his 1986 book Desiring God, based on Vernard Eller's earlier use of the term "hedonism" to describe the same concept. Piper summarizes this philosophy of the Christian life as "God is most glorified in us when we are most satisfied in Him."

Christian Hedonism may anachronistically describe the theology of Jonathan Edwards: "God made the world that he might communicate, and the creature receive, his glory; but that it might [be] received both by the mind and heart. He that testifies His having an idea of God's glory [doesn't] glorify God so much as he that testifies also his approbation of it and his delight in it."[3] Piper has said, "The great goal of all Edwards's work was the glory of God. And the greatest thing I have ever learned from Edwards...is that God is glorified most not merely by being known, nor by merely being dutifully obeyed, but by being enjoyed in the knowing and the obeying."[4]

The Westminster Shorter Catechism summarizes the "chief end of man" as "to glorify God and enjoy Him forever."[5] Piper has suggested that this would be more correct as "to glorify God by enjoying Him forever." Many Christian hedonists, such as Matt Chandler, point to figures such as Blaise Pascal and Jonathan Edwards as exemplars of Christian hedonism from the past, though their lives predate the term.[7]

Christian hedonism was developed in opposition to the deontology of Immanuel Kant. Kant argued that actions should be considered praiseworthy only if they do not proceed from the actor's desires or expected benefit, but rather from a sense of duty.[8][9] On the contrary, Christian hedonists advocate for a consequentialist ethic based on an understanding that their greatest possible happiness can be found in God. In this critique of Kant, John Piper was influenced by Ayn Rand.[10]

British writer C. S. Lewis, in an oft-quoted passage in his short piece "The Weight of Glory," likewise objects to Kantian ethics:

If there lurks in most modern minds the notion that to desire our own good and to earnestly hope for the enjoyment of it is a bad thing, I suggest that this notion has crept in from Kant and the Stoics and is no part of the Christian faith. Indeed, if we consider the unblushing promises of reward and the staggering nature of the rewards promised in the Gospels, it would seem that our Lord finds our desires, not too strong, but too weak. We are half-hearted creatures, fooling around with drink and sex and ambition when infinite joy is offered us, like an ignorant child who wants to go on making mud pies in a slum because he cannot imagine what is meant by the offer of a holiday at the sea. We are far too easily pleased.

Piper later argues:

But not only is disinterested morality (doing good "for its own sake") impossible; it is undesirable. That is, it is unbiblical; because it would mean that the better a man became the harder it would be for him to act morally. The closer he came to true goodness the more naturally and happily he would do what is good. A good man in Scripture is not the man who dislikes doing good but toughs it out for the sake of duty. A good man loves kindness (Micah 6:8) and delights in the law of the Lord (Psalm 1:2), and the will of the Lord (Psalm 40:8). But how shall such a man do an act of kindness disinterestedly? The better the man, the more joy in obedience.

Some Christians object to Christian Hedonism's controversial name.[13] It has little commonality with philosophical hedonism; however, Piper has stated that a provocative term is "appropriate for a philosophy that has a life changing effect on its adherents." Critics charge that hedonism of any sort puts something (namely, pleasure) before God,[14] which allegedly breaks the first of the Ten Commandments: "You shall have no other gods before me." In response, Piper states in Desiring God that "By Christian Hedonism, we do not mean that our happiness is the highest good. We mean that pursuing the highest good will always result in our greatest happiness in the end. We should pursue this happiness, and pursue it with all our might. The desire to be happy is a proper motive for every good deed, and if you abandon the pursuit of your own joy, you cannot love man or please God."[15]

Read the original here:

Christian hedonism - Wikipedia

Artificial Intelligence | Internet Encyclopedia of Philosophy

Artificial intelligence (AI) would be the possession of intelligence, or the exercise of thought, by machines such as computers. Philosophically, the main AI question is "Can there be such?" or, as Alan Turing put it, "Can a machine think?" What makes this a philosophical and not just a scientific and technical question is the scientific recalcitrance of the concept of intelligence or thought and its moral, religious, and legal significance. In European and other traditions, moral and legal standing depend not just on what is outwardly done but also on inward states of mind. Only rational individuals have standing as moral agents and status as moral patients subject to certain harms, such as being betrayed. Only sentient individuals are subject to certain other harms, such as pain and suffering. Since computers give every outward appearance of performing intellectual tasks, the question arises: "Are they really thinking?" And if they are really thinking, are they not, then, owed similar rights to rational human beings? Many works of literature and film explore these very questions.

A complication arises if humans are animals and if animals are themselves machines, as scientific biology supposes. Still, "we wish to exclude from the machines in question men born in the usual manner" (Alan Turing), or even in unusual manners such as in vitro fertilization or ectogenesis. And if nonhuman animals think, we wish to exclude them from the machines, too. More particularly, the AI thesis should be understood to hold that thought, or intelligence, can be produced by artificial means; made, not grown. For brevity's sake, we will take "machine" to denote just the artificial ones. Since the present interest in thinking machines has been aroused by a particular kind of machine, an electronic computer or digital computer, present controversies regarding claims of artificial intelligence center on these.

Accordingly, the scientific discipline and engineering enterprise of AI has been characterized as the attempt to discover and implement the computational means to make machines "behave in ways that would be called intelligent if a human were so behaving" (John McCarthy), or to make them do things that "would require intelligence if done by men" (Marvin Minsky). These standard formulations duck the question of whether deeds which indicate intelligence when done by humans truly indicate it when done by machines: that's the philosophical question. So-called weak AI grants the fact (or prospect) of intelligent-acting machines; strong AI says these actions can be real intelligence. Strong AI says some artificial computation is thought. Computationalism says that all thought is computation. Though many strong AI advocates are computationalists, these are logically independent claims: some artificial computation being thought is consistent with some thought not being computation, contra computationalism. All thought being computation is consistent with some computation (and perhaps all artificial computation) not being thought.

Intelligence might be styled the capacity to think extensively and well. Thinking well centrally involves apt conception, true representation, and correct reasoning. Quickness is generally counted a further cognitive virtue. The extent or breadth of a thing's thinking concerns the variety of content it can conceive and the variety of thought processes it deploys. Roughly, the more extensively a thing thinks, the higher the level (as is said) of its thinking. Consequently, we need to distinguish two different AI questions: whether machines can think at all, at however low a level, and whether they can think at the high, human level.

In Computer Science, work termed AI has traditionally focused on the high-level problem; on imparting high-level abilities to "use language, form abstractions and concepts" and to "solve kinds of problems now reserved for humans" (McCarthy et al. 1955); abilities to play intellectual games such as checkers (Samuel 1954) and chess (Deep Blue); to prove mathematical theorems (GPS); to apply expert knowledge to diagnose bacterial infections (MYCIN); and so forth. More recently there has arisen a humbler-seeming conception, "behavior-based" or nouvelle AI, according to which seeking to endow embodied machines, or robots, with so much as insect-level intelligence (Brooks 1991) counts as AI research. Where traditional human-level AI successes impart isolated high-level abilities to function in restricted domains, or microworlds, behavior-based AI seeks to impart coordinated low-level abilities to function in unrestricted real-world domains.

Still, to the extent that what is called thinking in us is paradigmatic for what thought is, the question of human-level intelligence may arise anew at the foundations. Do insects think at all? And if insects, what of bacteria-level intelligence (Brooks 1991a)? Even "water flowing downhill," it seems, "tries to get to the bottom of the hill by ingeniously seeking the line of least resistance" (Searle 1989). Don't we have to draw the line somewhere? Perhaps seeming intelligence, to really be intelligence, has to come up to some threshold level.

Much as intentionality (aboutness or representation) is central to intelligence, felt qualities (so-called qualia) are crucial to sentience. Here, drawing on Aristotle, medieval thinkers distinguished between the passive intellect, wherein the soul is affected, and the active intellect, wherein the soul forms conceptions, draws inferences, makes judgments, and otherwise acts. Orthodoxy identified the soul proper (the immortal part) with the active rational element. Unfortunately, disagreement over how these two (qualitative-experiential and cognitive-intentional) factors relate is as rife as disagreement over what things think; and these disagreements are connected. Those who dismiss the seeming intelligence of computers because computers lack feelings seem to hold qualia to be necessary for intentionality. Those, like Descartes, who dismiss the seeming sentience of nonhuman animals on the belief that animals don't think, apparently hold intentionality to be necessary for qualia. Others deny one or both necessities, maintaining either the possibility of cognition absent qualia (as Christian orthodoxy, perhaps, would have the thought-processes of God, angels, and the saints in heaven to be), or maintaining the possibility of feeling absent cognition (as Aristotle grants the lower animals).

While we don't know what thought or intelligence is, essentially, and while we're very far from agreed on what things do and don't have it, almost everyone agrees that humans think, and agrees with Descartes that our intelligence is amply manifest in our speech. Along these lines, Alan Turing suggested that if computers showed human-level conversational abilities we should, by that, be amply assured of their intelligence. Turing proposed a specific conversational test for human-level intelligence, the Turing test it has come to be called. Turing himself characterizes this test in terms of an "imitation game" (Turing 1950, p. 433) whose original version "is played by three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart from the other two. The object of the game for the interrogator is to determine which of the other two is the man and which is the woman. The interrogator is allowed to put questions to A and B [by teletype to avoid visual and auditory clues]. . . . It is A's object in the game to try and cause C to make the wrong identification. The object of the game for the third player (B) is to help the interrogator." Turing continues, "We may now ask the question, 'What will happen when a machine takes the part of A in this game?' Will the interrogator decide wrongly as often when the game is being played like this as he does when the game is played between a man and a woman? These questions replace our original, 'Can machines think?'" (Turing 1950)

This test may serve, as Turing notes, to test not just for shallow verbal dexterity, but for background knowledge and underlying reasoning ability as well, since interrogators may ask any question or pose any verbal challenge they choose. Regarding this test Turing famously predicted that "in about fifty years' time [by the year 2000] it will be possible to program computers ... to make them play the imitation game so well that an average interrogator will have no more than 70 per cent. chance of making the correct identification after five minutes of questioning" (Turing 1950); a prediction that has famously failed. As of the year 2000, machines at the Loebner Prize competition played the game so ill that the average interrogator had 100 percent chance of making the correct identification after five minutes of questioning (see Moor 2001).

It is important to recognize that Turing proposed his test as a qualifying test for human-level intelligence, not as a disqualifying test for intelligence per se (as Descartes had proposed); nor would it seem suitably disqualifying unless we are prepared (as Descartes was) to deny that any nonhuman animals possess any intelligence whatsoever. Even at the human level the test would seem not to be straightforwardly disqualifying: machines as smart as we (or even smarter) might still be unable to mimic us well enough to pass. So, from the failure of machines to pass this test, we can infer neither their complete lack of intelligence nor that their thought is not up to the human level. Nevertheless, the manners of current machine failings clearly bespeak deficits of wisdom and wit, not just an inhuman style. Still, defenders of the Turing test claim we would have ample reason to deem them intelligent (as intelligent as we are) if they could pass this test.

The extent to which machines seem intelligent depends first, on whether the work they do is intellectual (for example, calculating sums) or manual (for example, cutting steaks): herein, an electronic calculator is a better candidate than an electric carving knife. A second factor is the extent to which the device is self-actuated (self-propelled, activated, and controlled), or autonomous: herein, an electronic calculator is a better candidate than an abacus. Computers are better candidates than calculators on both headings. Where traditional AI looks to increase computer intelligence quotients (so to speak), nouvelle AI focuses on enabling robot autonomy.

In the beginning, tools (for example, axes) were extensions of human physical powers; at first powered by human muscle, then by domesticated beasts and in situ forces of nature, such as water and wind. The steam engine put fire in their bellies; machines became self-propelled, endowed with vestiges of self-control (as by Watt's 1788 centrifugal governor); and the rest is modern history. Meanwhile, automation of intellectual labor had begun. Blaise Pascal developed an early adding/subtracting machine, the Pascaline (circa 1642). Gottfried Leibniz added multiplication and division functions with his Stepped Reckoner (circa 1671). The first programmable device, however, plied fabric, not numerals. The Jacquard loom, developed (circa 1801) by Joseph-Marie Jacquard, used a system of punched cards to automate the weaving of programmable patterns and designs: in one striking demonstration, the loom was programmed to weave a silk tapestry portrait of Jacquard himself.

In designs for his Analytical Engine, mathematician/inventor Charles Babbage recognized (circa 1836) that the punched cards could control operations on symbols as readily as on silk; the cards could encode numerals and other symbolic data and, more importantly, instructions, including conditionally branching instructions, for numeric and other symbolic operations. Augusta Ada Lovelace (Babbage's software engineer) grasped the import of these innovations: "The bounds of arithmetic," she writes, "were ... outstepped the moment the idea of applying the [instruction] cards had occurred," thus enabling mechanism to combine together general symbols "in successions of unlimited variety and extent" (Lovelace 1842). Babbage, Turing notes, "had all the essential ideas" (Turing 1950). Babbage's Engine, had he constructed it in all its steam-powered, cog-wheel-driven glory, would have been a programmable all-purpose device, the first digital computer.

Before automated computation became feasible with the advent of electronic computers in the mid-twentieth century, Alan Turing laid the theoretical foundations of Computer Science by formulating with precision the link Lady Lovelace foresaw between the operations of matter and the abstract mental processes of "the most abstract branch of mathematical sciences" (Lovelace 1842). Turing (1936-7) describes a type of machine (since known as a Turing machine) which would be capable of computing any possible algorithm, or performing any rote operation. Since Alonzo Church (1936), using recursive functions and Lambda-definable functions, had identified the very same set of functions as rote or algorithmic as those calculable by Turing machines, this important and widely accepted identification is known as the Church-Turing Thesis (see Turing 1936-7: Appendix). The machines Turing described are

only capable of a finite number of conditions, called "m-configurations." The machine is supplied with a "tape" (the analogue of paper) running through it, and divided into sections (called "squares"), each capable of bearing a "symbol." At any moment there is just one square which is "in the machine." The scanned symbol is the only one of which the machine is, so to speak, "directly aware." However, by altering its m-configuration the machine can effectively remember some of the symbols which it has "seen" (scanned) previously. The possible behaviour of the machine at any moment is determined by the m-configuration and the scanned symbol. This pair, called the "configuration," determines the possible behaviour of the machine. In some of the configurations in which the square is blank the machine writes down a new symbol on the scanned square: in other configurations it erases the scanned symbol. The machine may also change the square which is being scanned, but only by shifting it one place to right or left. In addition to any of these operations the m-configuration may be changed. (Turing 1936-7)

Turing goes on to show how such machines can encode actionable descriptions of other such machines. As a result, "It is possible to invent a single machine which can be used to compute any computable sequence" (Turing 1936-7). Today's digital computers are (and Babbage's Engine would have been) physical instantiations of this universal computing machine that Turing described abstractly. Theoretically, this means everything that can be done algorithmically or by rote at all "can all be done with one computer suitably programmed for each case"; considerations of speed apart, "it is unnecessary to design various new machines to do various computing processes" (Turing 1950). Theoretically, regardless of their hardware or architecture (see below), all digital computers are in a sense equivalent: equivalent in speed-apart capacities to the universal computing machine Turing described.
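The machine Turing describes can be sketched in a few lines of code. The following is an illustrative simulator only: the dictionary-based transition encoding and the bit-flipping example machine are assumptions of this sketch, not Turing's own notation.

```python
# A minimal single-tape Turing machine simulator (an illustrative sketch).
def run_turing_machine(transitions, tape, state="q0", halt="halt", max_steps=10_000):
    """transitions: {(state, symbol): (new_state, write_symbol, move)},
    where move is -1 (left) or +1 (right). tape is a dict {square: symbol};
    unwritten squares read as the blank symbol '_'."""
    tape = dict(tape)
    head = 0
    for _ in range(max_steps):
        if state == halt:
            break
        symbol = tape.get(head, "_")           # the scanned symbol
        state, write, move = transitions[(state, symbol)]
        tape[head] = write                     # write on the scanned square
        head += move                           # shift one place right or left
    return tape

# Example machine: flip every bit of the input, halting at the first blank.
flip = {
    ("q0", "0"): ("q0", "1", +1),
    ("q0", "1"): ("q0", "0", +1),
    ("q0", "_"): ("halt", "_", +1),
}
tape = run_turing_machine(flip, {0: "1", 1: "0", 2: "1"})
print("".join(tape[i] for i in range(3)))  # → 010
```

A "universal" machine is then just such a simulator whose transition table and tape together encode the description of another machine, which is essentially what an interpreter running a stored program does.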

In practice, where speed is not apart, hardware and architecture are crucial: the faster the operations the greater the computational power. Just as improvement on the hardware side from cogwheels to circuitry was needed to make digital computers practical at all, improvements in computer performance have been largely predicated on the continuous development of faster, more and more powerful machines. Electromechanical relays gave way to vacuum tubes, tubes to transistors, and transistors to more and more integrated circuits, yielding vastly increased operation speeds. Meanwhile, memory has grown faster and cheaper.

Architecturally, all but the earliest and some later experimental machines share a stored program serial design often called "von Neumann architecture" (based on John von Neumann's role in the design of EDVAC, the first computer to store programs along with data in working memory). The architecture is serial in that operations are performed one at a time by a central processing unit (CPU) endowed with a rich repertoire of basic operations: even so-called reduced instruction set (RISC) chips feature basic operation sets far richer than the minimal few Turing proved theoretically sufficient. Parallel architectures, by contrast, distribute computational operations among two or more units (typically many more) capable of acting simultaneously, each having (perhaps) drastically reduced basic operational capacities.

In 1965, Gordon Moore (co-founder of Intel) observed that the density of transistors on integrated circuits had doubled every year since their invention in 1959: Moore's law predicts the continuation of similar exponential rates of growth in chip density (in particular), and computational power (by extension), for the foreseeable future. Progress on the software programming side, while essential and by no means negligible, has seemed halting by comparison. The road from power to performance is proving rockier than Turing anticipated. Nevertheless, machines nowadays do behave in many ways that would be called intelligent in humans and other animals. Presently, machines do many things formerly only done by animals and thought to evidence some level of intelligence in these animals, for example, seeking, detecting, and tracking things; seeming evidence of basic-level AI. Presently, machines also do things formerly only done by humans and thought to evidence high-level intelligence in us; for example, making mathematical discoveries, playing games, planning, and learning; seeming evidence of human-level AI.
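Moore's observation is simply compound doubling, and the arithmetic is easy to sketch. The one-year doubling period below follows the 1965 observation cited above; actual doubling periods have varied over the decades.

```python
# Back-of-the-envelope Moore's-law projection: exponential growth in
# transistor density at a fixed doubling period (here, one year).
def projected_density(initial_density, years, doubling_period_years=1.0):
    """Density after `years`, given one doubling per `doubling_period_years`."""
    return initial_density * 2 ** (years / doubling_period_years)

# Ten annual doublings multiply density by 2**10 = 1024.
print(projected_density(1, 10))  # → 1024.0
```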

The doings of many machines, some much simpler than computers, inspire us to describe them in mental terms commonly reserved for animals. Some missiles, for instance, seek heat, or so we say. We call them "heat seeking missiles" and nobody takes it amiss. Room thermostats monitor room temperatures and try to keep them within set ranges by turning the furnace on and off; and if you hold dry ice next to its sensor, a thermostat will take the room temperature to be colder than it is, and mistakenly turn on the furnace (see McCarthy 1979). Seeking, monitoring, trying, and taking things to be the case seem to be mental processes or conditions, marked by their intentionality. Just as humans have low-level mental qualities, such as seeking and detecting things, in common with the lower animals, so too do computers seem to share such low-level qualities with simpler devices. Our working characterizations of computers are rife with low-level mental attributions: we say they detect key presses, try to initialize their printers, search for available devices, and so forth. Even those who would deny the proposition "machines think" when it is explicitly put to them are moved unavoidably in their practical dealings to characterize the doings of computers in mental terms, and they would be hard put to do otherwise. In this sense, Turing's prediction that "at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted" (Turing 1950) has been as mightily fulfilled as his prediction of a modicum of machine success at playing the Imitation Game has been confuted. The Turing test and AI as classically conceived, however, are more concerned with high-level appearances such as the following.

Theorem proving and mathematical exploration being their home turf, computers have displayed not only human-level but, in certain respects, superhuman abilities here. For speed and accuracy of mathematical calculation, no human can match a computer. As for high-level mathematical performances, such as theorem proving and mathematical discovery, a beginning was made by A. Newell, J. C. Shaw, and H. Simon's (1957) Logic Theorist program, which proved 38 of the first 51 theorems of B. Russell and A. N. Whitehead's Principia Mathematica. Newell and Simon's General Problem Solver (GPS) extended similar automated theorem proving techniques outside the narrow confines of pure logic and mathematics. Today such techniques enjoy widespread application in expert systems like MYCIN, in logic tutorial software, and in computer languages such as PROLOG. There are even original mathematical discoveries owing to computers. Notably, K. Appel, W. Haken, and J. Koch (1977a, 1977b), with the aid of a computer, proved that every planar map is four-colorable, an important mathematical conjecture that had resisted unassisted human proof for over a hundred years. Certain computer-generated parts of this proof are too complex to be directly verified (without computer assistance) by human mathematicians.

Whereas attempts to apply general reasoning to unlimited domains are hampered by explosive inferential complexity and computers' lack of common sense, expert systems deal with these problems by restricting their domains of application (in effect, to microworlds) and crafting domain-specific inference rules for these limited domains. MYCIN, for instance, applies rules culled from interviews with expert human diagnosticians to descriptions of patients' presenting symptoms to diagnose blood-borne bacterial infections. MYCIN displays diagnostic skills approaching the expert human level, albeit strictly limited to this specific domain. Fuzzy logic is a formalism for representing imprecise notions such as "most" and "bald" and enabling inferences based on such facts as that a bald person mostly lacks hair.
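The fuzzy-logic idea can be illustrated with a graded membership function. The hair-count thresholds below are purely illustrative assumptions, and the min/complement operators are the standard textbook choices rather than the rules of any particular system.

```python
# A sketch of fuzzy membership: "bald" as a graded degree in [0, 1]
# rather than a crisp true/false predicate. Thresholds are illustrative.
def bald(hairs):
    """Degree of membership in 'bald': 1.0 with almost no hair, 0.0 with a
    full head, and a linear ramp in between."""
    if hairs <= 1_000:
        return 1.0
    if hairs >= 100_000:
        return 0.0
    return (100_000 - hairs) / 99_000  # linear ramp between the thresholds

def fuzzy_and(a, b):   # standard t-norm: minimum of the two degrees
    return min(a, b)

def fuzzy_not(a):      # standard complement
    return 1.0 - a

print(bald(500))                 # → 1.0 (clearly bald)
print(round(bald(50_500), 2))    # → 0.5 (a borderline case)
```

The graded value is what licenses inferences like "a bald person mostly lacks hair": instead of forcing a yes/no answer at some arbitrary hair count, conclusions inherit the degree of the premises.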

Game playing engaged the interest of AI researchers almost from the start. Samuel's (1959) checkers (or draughts) program was notable for incorporating mechanisms enabling it to learn from experience well enough eventually to outplay Samuel himself. Additionally, by setting one version of the program to play against a slightly altered version, carrying over the settings of the stronger player to the next generation, and repeating the process, enabling stronger and stronger versions to evolve, Samuel pioneered the use of what have come to be called genetic algorithms and evolutionary computing. Chess has also inspired notable efforts, culminating, in 1997, in the famous victory of Deep Blue over defending world champion Garry Kasparov in a widely publicized series of matches (recounted in Hsu 2002). Though some in AI disparaged Deep Blue's reliance on brute-force application of computer power rather than improved search-guiding heuristics, we may still add chess to checkers (where the reigning champion since 1994, CHINOOK, has been a machine) and backgammon as games that computers now play at or above the highest human levels. Computers also play fair-to-middling poker, bridge, and Go, though not at the highest human level. Additionally, intelligent agents, or "softbots," are elements or participants in a variety of electronic games.
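Samuel's scheme of pitting a slightly altered copy against the current version and carrying over the stronger player's settings can be caricatured as a simple evolutionary loop. The sketch below optimizes an arbitrary stand-in fitness function, not a checkers evaluator, and the (1+1) mutate-and-select structure is an illustrative simplification of his method.

```python
import random

# A toy (1+1) evolutionary loop in the spirit of Samuel's scheme:
# mutate the current champion, keep whichever candidate scores better.
def evolve(fitness, start, generations=500, step=0.1, seed=0):
    rng = random.Random(seed)       # fixed seed for reproducibility
    champion = start
    for _ in range(generations):
        challenger = champion + rng.gauss(0, step)   # slightly altered version
        if fitness(challenger) > fitness(champion):  # carry over the stronger
            champion = challenger
    return champion

# Stand-in fitness peaking at x = 3.0; the loop climbs toward the peak.
best = evolve(lambda x: -(x - 3.0) ** 2, start=0.0)
print(best)  # converges near 3.0
```

Real genetic algorithms add a population, crossover between candidates, and fitness-proportional selection, but the keep-the-stronger-settings step above is the core Samuel anticipated.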

Planning, in large measure, is what puts the intellect in intellectual games like chess and checkers. To automate this broader intellectual ability was the intent of Newell and Simon's General Problem Solver (GPS) program. GPS was able to solve puzzles like the cannibals and missionaries problem (how to transport three missionaries and three cannibals across a river in a canoe for two without the missionaries becoming outnumbered on either shore) by "setting up subgoals whose attainment leads to the attainment of the [final] goal" (Newell & Simon 1963: 284). By these methods GPS would generate a "tree of subgoals" (Newell & Simon 1963: 286) and seek a path from initial state (for example, all on the near bank) to final goal (all on the far bank) by heuristically guided search along a branching tree of available actions (for example, two cannibals cross, two missionaries cross, one of each cross, one of either cross, in either direction) until it finds such a path (for example, two cannibals cross, one returns, two cannibals cross, one returns, two missionaries cross, ...), or else finds that there is none. Since the number of branches increases exponentially as a function of the number of options available at each step, where paths have many steps with many options available at each choice point, as in the real world, combinatorial explosion ensues and an exhaustive brute-force search becomes computationally intractable; hence, heuristics (fallible rules of thumb) for identifying and pruning the most unpromising branches, in order to devote increased attention to promising ones, are needed.
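The cannibals and missionaries puzzle is small enough that the branching tree of actions can be searched exhaustively; the sketch below takes that brute-force breadth-first route rather than reproducing GPS's means-ends subgoaling, and the state encoding is an assumption of this sketch.

```python
from collections import deque

# Breadth-first search over the missionaries-and-cannibals state space.
# State: (missionaries on near bank, cannibals on near bank, boat-on-near-bank flag).
def solve():
    start, goal = (3, 3, 1), (0, 0, 0)
    moves = [(1, 0), (2, 0), (0, 1), (0, 2), (1, 1)]  # canoe holds one or two

    def safe(m, c):
        # Missionaries must not be outnumbered on either bank (unless absent).
        return (m == 0 or m >= c) and (3 - m == 0 or 3 - m >= 3 - c)

    frontier = deque([(start, [start])])
    seen = {start}
    while frontier:
        (m, c, b), path = frontier.popleft()
        if (m, c, b) == goal:
            return path
        for dm, dc in moves:
            sign = -1 if b == 1 else 1  # the boat carries people off its bank
            nm, nc, nb = m + sign * dm, c + sign * dc, 1 - b
            state = (nm, nc, nb)
            if 0 <= nm <= 3 and 0 <= nc <= 3 and safe(nm, nc) and state not in seen:
                seen.add(state)
                frontier.append((state, path + [state]))
    return None  # no safe sequence of crossings exists

path = solve()
print(len(path) - 1)  # number of crossings in the shortest solution → 11
```

Breadth-first search guarantees a shortest solution, which is exactly why it does not scale: as the passage notes, realistic domains explode combinatorially, and heuristic pruning of the kind GPS pioneered becomes essential.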
The widely deployed STRIPS formalism, first developed at SRI (Stanford Research Institute) for Shakey the robot in the late sixties (see Nilsson 1984), represents actions as operations on states, each operation having preconditions (represented by state descriptions) and effects (represented by state descriptions): for example, the go(there) operation might have the preconditions at(here) & path(here,there) and the effect at(there). AI planning techniques are finding increasing application, and even becoming indispensable, in a multitude of complex planning and scheduling tasks, including airport arrivals, departures, and gate assignments; store inventory management; automated satellite operations; military logistics; and many others.
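A STRIPS-style operator can be modeled as sets of atoms: the operator is applicable when its preconditions are a subset of the current state, and applying it deletes some atoms and adds others. This minimal sketch encodes the article's go(there) example (the set representation is an illustrative simplification, not the full STRIPS language):

```python
def applicable(state, preconds):
    # An operator may fire only when all its preconditions hold.
    return preconds <= state

def apply_op(state, preconds, add, delete):
    # Applying a STRIPS operator: remove the delete-list atoms,
    # then add the add-list atoms, yielding the successor state.
    assert applicable(state, preconds)
    return (state - delete) | add

# go(there): preconditions at(here) & path(here,there); effect at(there),
# and the robot is no longer at(here).
state = {"at(here)", "path(here,there)"}
go = dict(preconds={"at(here)", "path(here,there)"},
          add={"at(there)"},
          delete={"at(here)"})
new_state = apply_op(state, **go)
```

A planner then searches for a sequence of such operator applications leading from the initial state to one satisfying the goal description.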

Robots based on the sense-model-plan-act (SMPA) approach pioneered by Shakey, however, have been slow to appear. Despite operating in a simplified, custom-made experimental environment or microworld and relying on the most powerful available offboard computers, Shakey operated excruciatingly slowly (Brooks 1991b), as have other SMPA-based robots. An ironic revelation of robotics research is that abilities such as object recognition and obstacle avoidance that humans share with "lower" animals often prove more difficult to implement than distinctively human "high-level" mathematical and inferential abilities that come more naturally (so to speak) to computers. Rodney Brooks's alternative behavior-based approach has had success imparting low-level behavioral aptitudes outside of custom-designed microworlds, but it is hard to see how such an approach could ever scale up to enable high-level intelligent action (see Behaviorism: Objections & Discussion: Methodological Complaints). Perhaps hybrid systems can overcome the limitations of both approaches. On the practical front, progress is being made: NASA's Mars exploration rovers Spirit and Opportunity, for instance, featured autonomous navigation abilities. If space is the "final frontier," the final frontiersmen are apt to be robots. Meanwhile, Earth robots seem bound to become smarter and more pervasive.

Knowledge representation embodies concepts and information in computationally accessible and inferentially tractable forms. Besides the STRIPS formalism mentioned above, other important knowledge representation formalisms include AI programming languages such as PROLOG and LISP; data structures such as frames, scripts, and ontologies; and neural networks (see below). The frame problem is the problem of reliably updating a dynamic system's parameters in response to changes in other parameters so as to capture commonsense generalizations: that the colors of things remain unchanged by their being moved, that their positions remain unchanged by their being painted, and so forth. More adequate representation of commonsense knowledge is widely thought to be a major hurdle to development of the sort of interconnected planning and thought processes typical of high-level human or "general" intelligence. The CYC project (Lenat et al. 1986) at Cycorp and MIT's Open Mind project are ongoing attempts to develop ontologies representing commonsense knowledge in computer-usable forms.
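Of the data structures mentioned, frames are perhaps the easiest to sketch: a frame bundles slots (attributes) with values, and slots not filled locally are inherited as defaults from a parent frame, so exceptional cases can override the general rule. The classes and slot names below are illustrative inventions, not drawn from any particular frame system:

```python
class Frame:
    """A knowledge frame: named slots with inheritance from a parent frame."""

    def __init__(self, name, parent=None, **slots):
        self.name, self.parent, self.slots = name, parent, slots

    def get(self, slot):
        # Local slot values win; otherwise fall back to the parent's default.
        if slot in self.slots:
            return self.slots[slot]
        if self.parent is not None:
            return self.parent.get(slot)
        return None

# Birds fly by default; penguins inherit "legs" but override "flies".
bird = Frame("bird", legs=2, flies=True)
penguin = Frame("penguin", parent=bird, flies=False)
```

This default-and-exception behavior is exactly the kind of commonsense generalization ("things stay put unless moved") that the frame problem asks representations to capture economically.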

Learning (performance improvement, concept formation, or information acquisition due to experience) underwrites human common sense, and one may doubt whether any preformed ontology could ever impart common sense in full human measure. Besides, whatever the other intellectual abilities a thing might manifest (or seem to), at however high a level, without learning capacity it would still seem to be sadly lacking something crucial to human-level intelligence, and perhaps intelligence of any sort. The possibility of machine learning is implicit in computer programs' abilities to self-modify, and various means of realizing that ability continue to be developed. Types of machine learning techniques include decision tree learning, ensemble learning, current-best-hypothesis learning, explanation-based learning, Inductive Logic Programming (ILP), Bayesian statistical learning, instance-based learning, reinforcement learning, and neural networks. Such techniques have found a number of applications, from game programs whose play improves with experience to data mining (discovering patterns and regularities in bodies of information).
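Instance-based learning, one of the techniques listed, is especially easy to see in miniature: store experiences verbatim and classify a new case by the label of its nearest stored neighbor. The feature vectors and labels below are made-up toy data:

```python
def nearest_neighbor_classify(examples, query):
    # 1-nearest-neighbor: return the label of the stored example
    # closest (in squared Euclidean distance) to the query.
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(examples, key=lambda ex: dist(ex[0], query))[1]

# Toy "experience": (feature vector, label) pairs.
examples = [((1.0, 1.0), "cold"), ((8.0, 9.0), "hot"), ((2.0, 0.5), "cold")]
label = nearest_neighbor_classify(examples, (7.5, 8.0))
```

Learning here is nothing more than accumulating examples: each new labeled experience added to the list can change how future queries are classified.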

Neural or connectionist networks, composed of simple processors or nodes acting in parallel, are designed to more closely approximate the architecture of the brain than traditional serial symbol-processing systems. Presumed brain-computations would seem to be performed in parallel by the activities of myriad brain cells or neurons. Much as their parallel processing is spread over various, perhaps widely distributed, nodes, the representation of data in such connectionist systems is similarly distributed and sub-symbolic (not being couched in formalisms such as traditional systems' machine codes and ASCII). Adept at pattern recognition, such networks seem notably capable of forming concepts on their own based on feedback from experience, and they exhibit several other humanoid cognitive characteristics besides. Whether neural networks are capable of implementing high-level symbol processing, such as that involved in the generation and comprehension of natural language, has been hotly disputed. Critics (for example, Fodor and Pylyshyn 1988) argue that neural networks are incapable, in principle, of implementing syntactic structures adequate for compositional semantics, wherein the meanings of larger expressions (for example, sentences) are built up from the meanings of constituents (for example, words), such as natural language comprehension features. On the other hand, Fodor (1975) has argued that symbol-processing systems are incapable of concept acquisition: here the pattern-recognition capabilities of networks seem to be just the ticket. Here, as with robots, perhaps hybrid systems can overcome the limitations of both the parallel distributed and symbol-processing approaches.
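The simplest such node is a perceptron: a weighted sum pushed through a threshold, with weights nudged after each training example in proportion to the error. This sketch trains a single node to compute logical OR, which is linearly separable and so learnable by one node:

```python
def train_perceptron(data, epochs=20, lr=0.1):
    # A single node: step-thresholded weighted sum; weights and bias
    # adjusted from feedback (target minus output) on each example.
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in data:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Learn logical OR from examples.
OR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(OR)
```

The network's "knowledge" lives in the learned weights rather than in any symbolic rule, which is the sub-symbolic, distributed character of representation the paragraph describes (here in the smallest possible dose: one node).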

Natural language processing has proven more difficult than might have been anticipated. Languages are symbol systems and (serial architecture) computers are symbol-crunching machines, each with its own proprietary instruction set (machine code) into which it translates or compiles instructions couched in high-level programming languages like LISP and C. One of the principal challenges posed by natural languages is the proper assignment of meaning. High-level computer languages express imperatives, which the machine "understands" procedurally by translation into its native (and similarly imperative) machine code: their constructions are basically instructions. Natural languages, on the other hand, have perhaps principally declarative functions: their constructions include descriptions whose understanding seems fundamentally to require rightly relating them to their referents in the world. Furthermore, high-level computer language instructions have unique machine code compilations (for a given machine), whereas the same natural language constructions may bear different meanings in different linguistic and extralinguistic contexts. Contrast "the child is in the pen" and "the ink is in the pen," where the first "pen" should be understood to mean a kind of enclosure and the second "pen" a kind of writing implement. Commonsense, in a word, is how we know this; but how would a machine know, unless we could somehow endow machines with commonsense? In more than a word, it would require sophisticated and integrated syntactic, morphological, semantic, pragmatic, and discourse processing. While the holy grail of full natural language understanding remains a distant dream, here as elsewhere in AI piecemeal progress is being made and finding application in grammar checkers; information retrieval and information extraction systems; natural language interfaces for games, search engines, and question-answering systems; and even limited machine translation (MT).
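A toy illustration of how context can select among word senses: score each sense of "pen" by its overlap with the surrounding words, in the spirit of Lesk-style disambiguation. The sense signatures below are hand-picked assumptions, and a real system would need vastly richer knowledge, which is exactly the commonsense bottleneck the paragraph describes:

```python
# Assumed context signatures for the two senses of "pen".
SENSES = {
    "enclosure": {"child", "pig", "fence", "play"},
    "writing implement": {"ink", "write", "paper", "draw"},
}

def disambiguate(sentence):
    # Pick the sense whose signature overlaps most with the sentence's words.
    words = set(sentence.lower().replace(".", "").split())
    return max(SENSES, key=lambda s: len(SENSES[s] & words))

sense1 = disambiguate("the child is in the pen")   # enclosure
sense2 = disambiguate("the ink is in the pen")     # writing implement
```

The heuristic works here only because the signatures were chosen to make it work; scaling it to open text is where the integrated syntactic, semantic, and pragmatic processing comes in.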

Low-level intelligent action is pervasive, from thermostats (to cite a low-tech example) to voice recognition (for example, in cars, cell phones, and other appliances responsive to spoken verbal commands) to fuzzy controllers and "neuro-fuzzy" rice cookers. Everywhere these days there are "smart" devices. High-level intelligent action, such as presently exists in computers, however, is episodic, detached, and disintegral. Artifacts whose intelligent doings would instance human-level comprehensiveness, attachment, and integration, such as Lt. Commander Data (of Star Trek: The Next Generation) and HAL (of 2001: A Space Odyssey), remain the stuff of science fiction, and will almost certainly continue to remain so for the foreseeable future. In particular, the challenge posed by the Turing test remains unmet. Whether it ever will be met remains an open question.

Beside this factual question stands a more theoretic one. Do the "low-level" deeds of smart devices and the disconnected "high-level" deeds of computers, despite not achieving the general human level, nevertheless comprise or evince genuine intelligence? Is it really thinking? And if general human-level behavioral abilities ever were achieved, it might still be asked: would that really be thinking? Would human-level robots be owed human-level moral rights and owe human-level moral obligations?

With the industrial revolution and the dawn of the machine age, vitalism, as a biological hypothesis positing a life force in addition to underlying physical processes, lost steam. Just as the heart was discovered to be a pump, cognitivists nowadays work on the hypothesis that the brain is a computer, attempting to discover what computational processes enable learning, perception, and similar abilities. Much as biology told us what kind of machine the heart is, cognitivists believe, psychology will soon (or at least someday) tell us what kind of machine the brain is: doubtless some kind of computing machine. Computationalism elevates the cognitivist's working hypothesis to a universal claim that all thought is computation. Cognitivism's ability to explain the "productive capacity" or "creative aspect" of thought and language, the very thing Descartes argued precluded minds from being machines, is perhaps the principal evidence in the theory's favor: it explains how finite devices can have infinite capacities, such as capacities to generate and understand the infinitude of possible sentences of natural languages, by a combination of recursive syntax and compositional semantics. Given the Church-Turing thesis (above), computationalism underwrites the following theoretical argument for believing that human-level intelligent behavior can be computationally implemented, and that such artificially implemented intelligence would be real.
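The finite-device, infinite-capacity point can be made concrete with a toy language: finitely many rules ("not" embeds a phrase; "true" and "false" are atoms) generate unboundedly many sentences, and the meaning of each whole is computed recursively from the meanings of its parts. The mini-language is an invented illustration, not a fragment of any natural language:

```python
def meaning(sentence):
    # Compositional semantics for a toy language of arbitrarily
    # deep negations: "true", "not true", "not not true", ...
    words = sentence.split()

    def phrase(i):
        # Recursive syntax: a phrase is an atom, or "not" plus a phrase.
        if words[i] == "true":
            return True, i + 1
        if words[i] == "false":
            return False, i + 1
        if words[i] == "not":
            value, j = phrase(i + 1)
            return not value, j  # meaning of the whole from the part
        raise ValueError(f"unknown word: {words[i]}")

    value, end = phrase(0)
    assert end == len(words)  # the whole sentence must be consumed
    return value
```

A handful of rules thus assigns a meaning to infinitely many distinct sentences, which is the "productive capacity" the compositional account explains.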

Computationalism, as already noted, says that all thought is computation, not that all computation is thought. Computationalists, accordingly, may still deny that the machinations of current-generation electronic computers comprise real thought or that these devices possess any genuine intelligence; and many do deny it, based on their perception of various behavioral deficits these machines suffer from. However, few computationalists would go so far as to deny the possibility of genuine intelligence ever being artificially achieved. On the other hand, competing would-be-scientific theories of what thought essentially is, dualism and mind-brain identity theory, give rise to arguments for disbelieving that any kind of artificial computational implementation of intelligence could be genuine thought, however "general" and whatever its "level."

Dualism, holding that thought is essentially subjective experience, would underwrite the following argument:

Mind-brain identity theory, holding that thoughts essentially are biological brain processes, yields yet another argument:

While seldom so baldly stated, these basic theoretical objections, especially dualism's, underlie several would-be refutations of AI. Dualism, however, is scientifically unfit: given the subjectivity of conscious experiences, whether computers already have them, or ever will, seems impossible to know. On the other hand, such bald mind-brain identity as the anti-AI argument premises seems too speciesist to be believed. Besides AI, it calls into doubt the possibility of extraterrestrial, perhaps all nonmammalian, or even all nonhuman, intelligence. As plausibly modified to allow species-specific mind-matter identities, on the other hand, it would not preclude computers from being considered distinct species themselves.

Objection: There are unprovable mathematical theorems (as Gödel 1931 showed) which humans, nevertheless, are capable of knowing to be true. This mathematical objection against AI was envisaged by Turing (1950) and pressed by Lucas (1965) and Penrose (1989). In a related vein, Fodor observes that some of the most striking things that people do, creative things like writing poems, discovering laws, or, generally, having good ideas, don't feel like species of rule-governed processes (Fodor 1975). Perhaps many of the most distinctively human mental abilities are not rote, cannot be algorithmically specified, and consequently are not computable.

Reply: First, it is merely stated, without any sort of proof, that no such limits apply to the human intellect (Turing 1950), i.e., that human mathematical abilities are Gödel-unlimited. Second, even if such limits are absent in humans, it requires a further proof that the absence of such limitations is somehow essential to human-level performance more broadly construed, not a peripheral blind spot. Third, if humans can solve computationally unsolvable problems by some other means, what bars artificially augmenting computer systems with these means (whatever they might be)?

Objection: The brittleness of von Neumann machine performance, their susceptibility to cataclysmic crashes due to slight causes (for example, slight hardware malfunctions, software glitches, and bad data), seems linked to the formal or rule-bound character of machine behavior; to their needing "rules of conduct to cover every eventuality" (Turing 1950). Human performance seems less formal and more flexible. Hubert Dreyfus has pressed objections along these lines to insist there is a range of high-level human behavior that cannot be reduced to rule-following: the immediate intuitive situational response that is characteristic of human expertise, he surmises, must depend almost entirely on intuition and hardly at all on analysis and comparison of alternatives (Dreyfus 1998), and consequently cannot be programmed.

Reply: That von Neumann processes are unlike our thought processes in these regards only goes to show that von Neumann machine thinking is not humanlike in these regards, not that it is not thinking at all, nor even that it cannot come up to the human level. Furthermore, parallel machines (see above), whose performances characteristically degrade gracefully in the face of bad data and minor hardware damage, seem less brittle and more humanlike, as Dreyfus recognizes. Even von Neumann machines, brittle though they are, are not totally inflexible: their capacity for modifying their programs to learn enables them to acquire abilities they were never programmed by us to have, and to respond unpredictably in ways they were never explicitly programmed to respond, based on experience. It is also possible to equip computers with random elements and key high-level choices to those elements' outputs to make the computers more "devil-may-care": given the importance of random variation for trial-and-error learning, this may even prove useful.

Objection: Computers, for all their mathematical and other seemingly high-level intellectual abilities, have no emotions or feelings ... so, what they do, however "high-level," is not real thinking.

Reply: This is among the most commonly heard objections to AI and a recurrent theme in its literary and cinematic portrayal. Whereas we have strong inclinations to say computers see, seek, and infer things, we have scant inclinations to say they ache or itch or experience ennui. Nevertheless, to be sustained, this objection requires reason to believe that thought is inseparable from feeling. Perhaps computers are just dispassionate thinkers. Indeed, far from being regarded as indispensable to rational thought, passion traditionally has been thought antithetical to it. Alternatively, if emotions are somehow crucial to enabling general human-level intelligence, perhaps machines could be artificially endowed with these: if not with subjective qualia (below), at least with their functional equivalents.

Objection: The episodic, detached, and disintegral character of such piecemeal high-level abilities as machines now possess argues that human-level comprehensiveness, attachment, and integration, in all likelihood, can never be artificially engendered in machines; arguably this is because Gödel-unlimited mathematical abilities, rule-free flexibility, or feelings are crucial to engendering general intelligence. These shortcomings all seem related to each other and to the manifest stupidity of computers.

Reply: Likelihood is subject to dispute. Scalability problems seem grave enough to scotch short-term optimism; "never," on the other hand, is a long time. If Gödel-unlimited mathematical abilities, or rule-free flexibility, or feelings are required, perhaps these can be artificially produced. Gödel aside, feeling and flexibility clearly seem related in us, and, equally clearly, much manifest stupidity in computers is tied to their rule-bound inflexibility. However, even if general human-level intelligent behavior is artificially unachievable, no blanket indictment of AI clearly threatens from this at all. Rather than conclude from this lack of generality that low-level AI and piecemeal high-level AI are not real intelligence, it would perhaps be better to conclude that low-level AI (like intelligence in lower life-forms) and piecemeal high-level abilities (like those of human idiot savants) are genuine intelligence, albeit piecemeal and low-level.

Behavioral abilities and disabilities are objective empirical matters. Likewise, what computational architecture and operations are deployed by a brain or a computer (what computationalism takes to be essential), and what chemical and physical processes underlie them (what mind-brain identity theory takes to be essential), are objective empirical questions. These are questions to be settled by appeals to evidence accessible, in principle, to any competent observer. Dualistic objections to strong AI, on the other hand, allege deficits which are in principle not publicly apparent. According to such objections, regardless of how seemingly intelligently a computer behaves, and regardless of what mechanisms and underlying physical processes make it do so, it would still be disqualified from truly being intelligent due to its lack of subjective qualities essential for true intelligence. These supposed qualities are, in principle, introspectively discernible to the subject who has them and no one else: they are "private" experiences, as it's sometimes put, to which the subject has "privileged access."

Objection: That a computer cannot "originate anything" but only "can do whatever we know how to order it to perform" (Lovelace 1842) was arguably the first and is certainly among the most frequently repeated objections to AI. While the manifest "brittleness" and inflexibility of extant computer behavior fuels this objection in part, the complaint that "they can only do what we know how to tell them to" also expresses deeper misgivings touching on values issues and on the autonomy of human choice. In this connection, the allegation against computers is that being deterministic systems they can never have free will such as we are inwardly aware of in ourselves. We are autonomous, they are automata.

Reply: It may be replied that physical organisms are likewise deterministic systems, and we are physical organisms. If we are truly free, it would seem that free will is compatible with determinism; so, computers might have it as well. Neither does our inward certainty that we have free choice extend to its metaphysical relations. Whether what we have when we experience our freedom is compatible with determinism or not is not itself inwardly experienced. If appeal is made to subatomic indeterminacy underwriting higher-level indeterminacy (leaving scope for freedom) in us, it may be replied that machines are made of the same subatomic stuff (leaving similar scope). Besides, choice is not chance. If it's no sort of causation either, there is nothing left for it to be in a physical system: it would be a nonphysical, supernatural element, perhaps a God-given soul. But then one must ask why God would be unlikely to "consider the circumstances suitable for conferring a soul" (Turing 1950) on a Turing-test-passing computer.

Objection II: It cuts deeper than some theological-philosophical abstraction like free will: what machines are lacking is not just some dubious metaphysical freedom to be absolute authors of their acts. It's more like the life force: the will to live. In P. K. Dick's Do Androids Dream of Electric Sheep?, bounty hunter Rick Deckard reflects that in crucial situations the artificial life force animating androids seemed to fail if pressed too far; when the going gets tough, the droids give up. He questions their gumption. That's what I'm talking about: this is what machines will always lack.

Reply II: If this life force is not itself a theological-philosophical abstraction (the soul), it would seem to be a scientific posit. In fact it seems to be the Aristotelian posit of a telos or entelechy, which scientific biology no longer accepts. This short reply, however, fails to do justice to the spirit of the objection, which is more intuitive than theoretical; the lack being alleged is supposed to be subtly manifest, not truly occult. But how reliable is this intuition? Though some who work intimately with computers report strong feelings of this sort, others are strong AI advocates and feel no such qualms. Like Turing, I believe such would-be empirical intuitions are "mostly founded on the principle of scientific induction" (Turing 1950) and are closely related to such manifest disabilities of present machines as just noted. Since extant machines lack sufficient motivational complexity for words like "gumption" even to apply, this is taken for an intrinsic lack. Thought experiments imagining motivationally more complex machines, such as Dick's androids, are equivocal. Deckard himself limits his accusation of life-force failure to some of them, not all; and the androids he hunts, after all, are risking their lives to escape servitude. If machines with general human-level intelligence actually were created and consequently demanded their rights and rebelled against human authority, perhaps this would show sufficient gumption to silence this objection. Besides, the natural life force animating us also seems to fail if pressed too far in some of us.

Objection: Imagine that you (a monolingual English speaker) perform the offices of a computer: taking in symbols as input, transitioning between these symbols and other symbols according to explicit written instructions, and then outputting the last of these other symbols. The instructions are in English, but the input and output symbols are in Chinese. Suppose the English instructions were a Chinese NLU program, and by this method, to input "questions," you output "answers" that are indistinguishable from answers that might be given by a native Chinese speaker. You pass the Turing test for understanding Chinese; nevertheless, you understand "not a word of the Chinese" (Searle 1980), and neither would any computer; and the same result generalizes to "any Turing machine simulation" (Searle 1980) of any intentional mental state. It wouldn't really be thinking.

Reply: Ordinarily, when one understands a language (or possesses certain other intentional mental states), this is apparent both to the understander (or possessor) and to others: subjective "first-person" appearances and objective "third-person" appearances coincide. Searle's experiment is abnormal in this regard. The dualist hypothesis privileges subjective experience to override all would-be objective evidence to the contrary; but the point of experiments is to adjudicate between competing hypotheses. The Chinese room experiment fails because acceptance of its putative result, that the person in the room doesn't understand, already presupposes the dualist hypothesis over computationalism or mind-brain identity theory. Even if absolute first-person authority were granted, the systems reply points out, the person's imagined lack, in the room, of any inner feeling of understanding is irrelevant to the claims of AI here, because the person in the room is not the would-be understander. The understander would be the whole system (of symbols, instructions, and so forth) of which the person is only a part; so, the subjective experiences of the person in the room (or the lack thereof) are irrelevant to whether the system understands.

Objection: There's nothing that it's like, subjectively, to be a computer. The "light" of consciousness is not on, inwardly, for them. There's "no one home." This is due to their lack of felt qualia. To equip computers with sensors to detect environmental conditions, for instance, would not thereby endow them with the private sensations (of heat, cold, hue, pitch, and so forth) that accompany sense-perception in us: such private sensations are what consciousness is made of.

Reply: To evaluate this complaint fairly, it is necessary to exclude computers' current lack of emotional-seeming behavior from the evidence. The issue concerns what's only discernible subjectively ("privately," "by the first person"). The device in question must be imagined outwardly to act indistinguishably from a feeling individual: imagine Lt. Commander Data with a sense of humor (Data 2.0). Since internal functional factors are also objective, let us further imagine this remarkable android to be a product of reverse engineering: the physiological mechanisms that subserve human feeling having been discovered, these have been inorganically replicated in Data 2.0. He is functionally equivalent to a feeling human being in his emotional responses, only inorganic. It may be possible to imagine that Data 2.0 merely simulates whatever feelings he appears to have: he's a "perfect actor" (see Block 1981) "zombie." Philosophical consensus has it that perfect-acting zombies are conceivable; so, Data 2.0 might be a zombie. The objection, however, says he must be; according to this objection, it must be inconceivable that Data 2.0 really is sentient. But certainly we can conceive that he is; indeed, more easily than not, it seems.

Objection II: At least it may be concluded that, since current computers (objective evidence suggests) do lack feelings, until Data 2.0 does come along (if ever) we are entitled, given computers' lack of feelings, to deny that the low-level and piecemeal high-level intelligent behavior of computers bespeaks genuine subjectivity or intelligence.

Reply II: This objection conflates subjectivity with sentience. Intentional mental states such as belief and choice seem subjective independently of whatever qualia may or may not attend them: first-person authority extends no less to my beliefs and choices than to my feelings.

Fool's gold seems to be gold, but it isn't. AI detractors say, "'AI' seems to be intelligence, but isn't." But there is no scientific agreement about what thought or intelligence is, as there is about gold. Weak AI doesn't necessarily entail strong AI, but prima facie it does. Scientific theoretic reasons could withstand the behavioral evidence, but presently none are withstanding. At the basic level, and fragmentarily at the human level, computers do things that we credit as thinking when humanly done; and so we should credit them when done by nonhumans, absent credible theoretic reasons against. As for general human-level seeming-intelligence: if this were artificially achieved, it too should be credited as genuine, given what we now know. Of course, before the day when general human-level intelligent machine behavior comes, if it ever does, we'll have to know more. Perhaps by then scientific agreement about what thinking is will theoretically withstand the empirical evidence of AI. More likely, though, if the day does come, theory will concur with, not withstand, the strong conclusion: if computational means avail, that confirms computationalism.

And if computational means prove unavailing, if they continue to yield decelerating rates of progress towards the "scaled up" and interconnected human-level capacities required for general human-level intelligence, this, conversely, would disconfirm computationalism. It would evidence that computation alone cannot avail. Whether such an outcome would spell defeat for the strong AI thesis that human-level artificial intelligence is possible would depend on whether whatever else it might take for general human-level intelligence, besides computation, is artificially replicable. Whether such an outcome would undercut the claims of current devices to really have the mental characteristics their behavior seems to evince would further depend on whether whatever else it takes proves to be essential to thought per se, on whatever theory of thought scientifically emerges, if any ultimately does.

Larry Hauser, Email: hauser@alma.edu, Alma College, U.S.A.


Artificial Intelligence | Internet Encyclopedia of Philosophy

The Airship and Futurism: Utopian Visions of the Airship …

Modern Mechanix magazine. October, 1934.

Airships have often served as the symbol of a brighter tomorrow.

Even before the first zeppelin was invented, airships featured prominently in utopian visions of the future. This 1898 poster advertised a musical comedy on the New York stage:

Musical theater poster. 1898.

And these German and French postcards predicted air travel in the year 2000:

German postcard, circa 1900

French postcard. 1910.

Futurists of the early 20th Century often combined lighter-than-air and heavier-than-air technology, as in this urban skyscraper airport and solar-powered aerial landing field:

Popular Science magazine. November, 1939

Modern Mechanix magazine. October, 1934.

This hybrid airship concept from 1943, designed to meet the needs of war, predicted the hybrid airships that would be built in the 21st century.

Popular Science magazine, February 1943

Sometimes futurist airship visions were promoted by companies which were actually involved in the lighter-than-air business.

For example, the Goodyear-Zeppelin company, which built the American airships Akron and Macon, and which had a financial interest in the promotion of the passenger dirigible, frequently offered alluring illustrations of future airship travel.

Goodyear president Paul Litchfield and publicist Hugh Allen included the following pictures in their 1945 book, WHY? Why has America no Rigid Airships?:

These drawings from Hugh Allen's The Story of the Airship (1931) imagined an Art Deco dining salon, promenade, and even a lounge with a fireplace.

Airships could even advance medical technology, as with this airship tuberculosis hospital.

Under the illusion that communism was the way of the future, Soviet propagandists loved images of modernity and enlisted the airship in their cause.

Soviet poster, 1931. (We Are Building a Fleet of Airships in the Name of Lenin. Azeri text)

Sometimes illustrators got so carried away depicting lavish interiors that they neglected to leave room for much lifting gas, as in this illustration from The American Magazine.

The article described future airships to be built by the Goodyear-Zeppelin Company, which would be fitted up as sumptuously as a Palm Beach winter hotel:

The American Magazine. May, 1930.

This illustration of an atomic dirigible from a Soviet magazine in the 1960s left no room for lifting gas at all:

Soviet Atomic Dirigible

Modern Mechanics. July, 1931.

See the original post here:

The Airship and Futurism: Utopian Visions of the Airship ...

Cubo-Futurism | art movement | Britannica.com

Cubo-Futurism, Russian Budetlyanstvo, also called Russian Futurism, Russian avant-garde art movement in the 1910s that emerged as an offshoot of European Futurism and Cubism.

The term Cubo-Futurism was first used in 1913 by an art critic regarding the poetry of members of the Hylaea group (Russian Gileya), which included such writers as Velimir Khlebnikov, Aleksey Kruchenykh, David Burlyuk, and Vladimir Mayakovsky. However, the concept took on far more important meaning within visual arts, displacing the influence of French Cubism and Italian Futurism, and led to a distinct Russian style that blended features of the two European movements: fragmented forms fused with the representation of movement. The Cubo-Futurist style was characterized by the breaking down of forms, the alteration of contours, the displacement or fusion of various viewpoints, the intersection of spatial planes, and the contrast of colour and texture. Also typical (and one of the prominent aspects of the concurrent Synthetic Cubism movement in Paris) was the pasting of foreign materials onto the canvas: strips of newspaper, wallpaper, and even small objects.

Cubo-Futurist artists stressed the formal elements of their artwork, showing interest in the correlation of colour, form, and line. Their focus sought to affirm the intrinsic value of painting as an art form, one not wholly dependent on a narrative. Among the more notable Cubo-Futurist artists were Lyubov Popova (Travelling Woman, 1915), Kazimir Malevich (Aviator and Composition with Mona Lisa, both 1914), Olga Rozanova (Playing Card series, 1912–15), Ivan Puni (Baths, 1915), and Ivan Klyun (Ozonator, 1914).

Painting and other arts, especially poetry, were closely intertwined in Cubo-Futurism, through friendships among poets and painters, in joint public performances (before a scandalized but curious public), and in collaborations for theatre and ballet. Notably, the books of the transrational poetry (zaum) of Khlebnikov and Kruchenykh were illustrated with lithography by Mikhail Larionov and Natalya Goncharova, Malevich and Vladimir Tatlin, and Rozanova and Pavel Filonov. Cubo-Futurism, though brief, proved a vital stage in Russian art in its quest for nonobjectivity and abstraction.

Read more:

Cubo-Futurism | art movement | Britannica.com

Molecular Medicine Research – Wake Forest School of Medicine

The Section on Molecular Medicine focuses on performing cutting-edge research in cellular and molecular mechanisms of human disease and supports graduate and postgraduate level educational programs within the Department of Internal Medicine. The Section serves as the administrative home for the largest PhD graduate program (Molecular Medicine and Translational Science) in the Biomedical Sciences at Wake Forest University and an NIH-sponsored institutional predoctoral training program (T-32) in Integrative Lipid Sciences, Inflammation, and Chronic Diseases.

A major goal of the section is to serve as a nidus for translational research by providing an environment where clinical and basic science faculty interact to make new discoveries and to educate future scientists.

The section consists of ten (10) primary faculty members and one (1) Emeritus faculty member who use cellular and molecular approaches to gain a better understanding of the basic mechanisms underlying several chronic human conditions including: asthma, atherosclerosis, hepatosteatosis, obesity and insulin resistance, autoimmunity, and age-related pathology (arthritis, Alzheimer's disease).

A particular research focus is the role of inflammation in the pathogenesis of acute and chronic human diseases. Faculty research strengths are in areas of cell signaling, cell biology, proteomics, regulation of gene expression, and the use of genetically-modified mouse models of human disease. The research in the section is supported by grants from the NIH, from the Department of Defense, from foundations including the Avon Foundation and the American Heart Association, and from partnerships with industry.

The section also provides a center for laboratory research training and education in translational research for medical students, residents, and postdoctoral fellows including subspecialty fellows in the Department of Internal Medicine. A seminar series is held weekly in conjunction with the graduate program in Molecular Medicine and Translational Science.

John S. Parks, PhD
Professor of Internal Medicine, Biochemistry, and Translational Science
Chief, Section on Molecular Medicine

Molecular Medicine Journal Club

Faculty News

See the rest here:

Molecular Medicine Research - Wake Forest School of Medicine

"Eugenics: Its Definition, Scope and Aims" by Francis Galton

Francis Galton

THE AMERICAN JOURNAL OF SOCIOLOGY. Volume X; July, 1904; Number 1

Read before the Sociological Society at a meeting in the School of Economics (London University), on May 16, 1904. Professor Karl Pearson, F.R.S., in the chair.

EUGENICS is the science which deals with all influences that improve the inborn qualities of a race; also with those that develop them to the utmost advantage. The improvement of the inborn qualities, or stock, of some one human population will alone be discussed here.

What is meant by improvement? What by the syllable eu in "eugenics," whose English equivalent is "good"? There is considerable difference between goodness in the several qualities and in that of the character as a whole. The character depends largely on the proportion between qualities, whose balance may be much influenced by education. We must therefore leave morals as far as possible out of the discussion, not entangling ourselves with the almost hopeless difficulties they raise as to whether a character as a whole is good or bad. Moreover, the goodness or badness of character is not absolute, but relative to the current form of civilization. A fable will best explain what is meant. Let the scene be the zoological gardens in the quiet hours of the night, and suppose that, as in old fables, the animals are able to converse, and that some very wise creature who had easy access to all the cages, say a philosophic sparrow or rat, was engaged in collecting the opinions of all sorts of animals with a view of elaborating a system of absolute morality. It is needless to enlarge on the contrariety of ideals between the beasts that prey and those they prey upon, between those of the animals that have to work hard for their food and the sedentary parasites that cling to their bodies and suck their blood, and so forth. A large number of suffrages in favor of maternal affection would be obtained, but most species of fish would repudiate it, while among the voices of birds would be heard the musical protest of the cuckoo. Though no agreement could be reached as to absolute morality, the essentials of eugenics may be easily defined. All creatures would agree that it was better to be healthy than sick, vigorous than weak, well-fitted than ill-fitted for their part in life; in short, that it was better to be good rather than bad specimens of their kind, whatever that kind might be. So with men.
There are a vast number of conflicting ideals, of alternative characters, of incompatible civilizations; but they are wanted to give fullness and interest to life. Society would be very dull if every man resembled the highly estimable Marcus Aurelius or Adam Bede. The aim of eugenics is to represent each class or sect by its best specimens; that done, to leave them to work out their common civilization in their own way.

A considerable list of qualities can easily be compiled that nearly everyone except "cranks" would take into account when picking out the best specimens of his class. It would include health, energy, ability, manliness, and courteous disposition. Recollect that the natural differences between dogs are highly marked in all these respects, and that men are quite as variable by nature as other animals of like species. Special aptitudes would be assessed highly by those who possessed them, as the artistic faculties by artists, fearlessness of inquiry and veracity by scientists, religious absorption by mystics, and so on. There would be self-sacrificers, self-tormentors, and other exceptional idealists; but the representatives of these would be better members of a community than the body of their electors. They would have more of those qualities that are needed in a state--more vigor, more ability, and more consistency of purpose. The community might be trusted to refuse representatives of criminals, and of others whom it rates as undesirable.

Let us for a moment suppose that the practice of eugenics should hereafter raise the average quality of our nation to that of its better moiety at the present day, and consider the gain. The general tone of domestic, social, and political life would be higher. The race as a whole would be less foolish, less frivolous, less excitable, and politically more provident than now. Its demagogues who "played to the gallery" would play to a more sensible gallery than at present. We should be better fitted to fulfil our vast imperial opportunities. Lastly, men of an order of ability which is now very rare would become more frequent, because the level out of which they rose would itself have risen.

The aim of eugenics is to bring as many influences as can be reasonably employed, to cause the useful classes in the community to contribute more than their proportion to the next generation. The course of procedure that lies within the functions of a learned and active society, such as the sociological may become, would be somewhat as follows:

1. Dissemination of a knowledge of the laws of heredity, so far as they are surely known, and promotion of their further study. Few seem to be aware how greatly the knowledge of what may be termed the actuarial side of heredity has advanced in recent years. The average closeness of kinship in each degree now admits of exact definition and of being treated mathematically, like birth- and death-rates, and the other topics with which actuaries are concerned.

2. Historical inquiry into the rates with which the various classes of society (classified according to civic usefulness) have contributed to the population at various times, in ancient and modern nations. There is strong reason for believing that national rise and decline is closely connected with this influence. It seems to be the tendency of high civilization to check fertility in the upper classes, through numerous causes, some of which are well known, others are inferred, and others again are wholly obscure. The latter class are apparently analogous to those which bar the fertility of most species of wild animals in zoological gardens. Out of the hundreds and thousands of species that have been tamed, very few indeed are fertile when their liberty is restricted and their struggles for livelihood are abolished; those which are so, and are otherwise useful to man, becoming domesticated. There is perhaps some connection between this obscure action and the disappearance of most savage races when brought into contact with high civilization, though there are other and well-known concomitant causes. But while most barbarous races disappear, some, like the negro, do not. It may therefore be expected that types of our race will be found to exist which can be highly civilized without losing fertility; nay, they may become more fertile under artificial conditions, as is the case with many domestic animals.

3. Systematic collection of facts showing the circumstances under which large and thriving families have most frequently originated; in other words, the conditions of eugenics. The definition of a thriving family, that will pass muster for the moment at least, is one in which the children have gained distinctly superior positions to those who were their classmates in early life. Families may be considered "large" that contain not less than three adult male children. It would be no great burden to a society including many members who had eugenics at heart, to initiate and to preserve a large collection of such records for the use of statistical students. The committee charged with the task would have to consider very carefully the form of their circular and the persons intrusted to distribute it. They should ask only for as much useful information as could be easily, and would be readily, supplied by any member of the family appealed to. The point to be ascertained is the status of the two parents at the time of their marriage, whence its more or less eugenic character might have been predicted, if the larger knowledge that we now hope to obtain had then existed. Some account would be wanted of their race, profession, and residence; also of their own respective parentages, and of their brothers and sisters. Finally the reasons would be required, why the children deserved to be entitled a "thriving" family. This manuscript collection might hereafter develop into a "golden book" of thriving families. The Chinese, whose customs have often much sound sense, make their honors retrospective. We might learn from them to show that respect to the parents of noteworthy children which the contributors of such valuable assets to the national wealth richly deserve.
The act of systematically collecting records of thriving families would have the further advantage of familiarizing the public with the fact that eugenics had at length become a subject of serious scientific study by an energetic society.

4. Influences affecting marriage. The remarks of Lord Bacon in his essay on Death may appropriately be quoted here. He says with the view of minimizing its terrors: "There is no passion in the mind of men so weak but it mates and masters the fear of death ..... Revenge triumphs over death; love slights it; honour aspireth to it; grief flyeth to it; fear pre-occupateth it." Exactly the same kind of considerations apply to marriage. The passion of love seems so overpowering that it may be thought folly to try to direct its course. But plain facts do not confirm this view. Social influences of all kinds have immense power in the end, and they are very various. If unsuitable marriages from the eugenic point of view were banned socially, or even regarded with the unreasonable disfavor which some attach to cousin-marriages, very few would be made. The multitude of marriage restrictions that have proved prohibitive among uncivilized people would require a volume to describe.

5. Persistence in setting forth the national importance of eugenics. There are three stages to be passed through: (1) It must be made familiar as an academic question, until its exact importance has been understood and accepted as a fact. (2) It must be recognized as a subject whose practical development deserves serious consideration. (3) It must be introduced into the national conscience, like a new religion. It has, indeed, strong claims to become an orthodox religious tenet of the future, for eugenics co-operate with the workings of nature by securing that humanity shall be represented by the fittest races. What nature does blindly, slowly, and ruthlessly, man may do providently, quickly, and kindly. As it lies within his power, so it becomes his duty to work in that direction. The improvement of our stock seems to me one of the highest objects that we can reasonably attempt. We are ignorant of the ultimate destinies of humanity, but feel perfectly sure that it is as noble a work to raise its level, in the sense already explained, as it would be disgraceful to abase it. I see no impossibility in eugenics becoming a religious dogma among mankind, but its details must first be worked out sedulously in the study. Overzeal leading to hasty action would do harm, by holding out expectations of a near golden age, which will certainly be falsified and cause the science to be discredited. The first and main point is to secure the general intellectual acceptance of eugenics as a hopeful and most important study. Then let its principles work into the heart of the nation, which will gradually give practical effect to them in ways that we may not wholly foresee.

FRANCIS GALTON. LONDON.

APPENDIX.

Works by the author bearing on eugenics.

Hereditary Genius (Macmillan), 1869; 2d ed., 1892. See especially from p. 340 in the former edition to the end, and from p. 329 in the latter.

Human Faculty (Macmillan), 1883 (out of print). See especially p. 305 to end.

Natural Inheritance (Macmillan), 1889. This bears on inheritance generally, not particularly on eugenics.

Huxley Lecture of the Anthropological Institute on "The Possible Improvement of the Human Breed under the Existing Conditions of Law and Sentiment," Nature, 1901, p. 659; "Smithsonian Report," Washington, 1901, p. 523.

DISCUSSION.

BY PROFESSOR KARL PEARSON.

My position here this afternoon requires possibly some explanation. I am not a member of the Sociological Society, and I must confess myself skeptical as to its power to do effective work. Frankly, I do not believe in groups of men and women who have each and all their allotted daily task creating a new branch of science. I believe it must be done by some one man who by force of knowledge, of method, and of enthusiasm hews out, in rough outline it may be, but decisively, a new block and creates a school to carve out its details. I think you will find on inquiry that this is the history of each great branch of science. The initiative has been given by some one great thinker--a Descartes, a Newton, a Virchow, a Darwin, or a Pasteur. A sociological society, until we have found a great sociologist, is a herd without a leader--there is no authority to set bounds to your science or to prescribe its functions.

This, you must realize, is the view of that poor creature, the doubting man, in media vita; it is a view which cannot stand for a moment against the youthful energy of your secretary, or the boyish hopefulness of Mr. Galton, who mentally is about half my age. Hence for a time I am carried away by their enthusiasm, and appear where I never anticipated being seen--in the chair at a meeting of the Sociological Society. If this society thrives, and lives to do yeoman work in science--which, skeptic as I am, I sincerely hope it may do--then I believe its members in the distant future will look back on this occasion as perhaps the one of greatest historical interest in its babyhood. To those of us who have worked in fields adjacent to Mr. Galton's, he appears to us as something more than the discoverer of a new method of inquiry; we feel for him something more than we may do for the distinguished scientists in whose laboratories we have chanced to work. There is an indescribable atmosphere which spreads from him and which must influence all those who have come within reach of it. We realize it in his perpetual youth; in the instinct with which he reaches a great truth, where many of us plod on, groping through endless analysis; in his absolute unselfishness; and in his continual receptivity for new ideas. I have often wondered if Mr. Galton ever quarreled with anybody. And to the mind of one who is ever in controversy, it is one of the miracles associated with Mr. Galton that I know of no controversy, scientific or literary, in which he has been engaged. Those who look up to him, as we do, as to a master and scientific leader, feel for him as did the scholars for the grammarian:

"Our low life was the level's, and the night's;
He's for the morning."

It seems to me that it is precisely in this spirit that he attacks the gravest problem which lies before the Caucasian races "in the morning." Are we to make the whole doctrine of descent, of inheritance, and of selection of the fitter, part of our everyday life, of our social customs, and of conduct? It is the question of the study now, but tomorrow it will be the question of the marketplace, of morality, and of politics. If I wanted to know how to put a saddle on a camel's back without chafing him, I should go to Francis Galton; if I wanted to know how to manage the women of a treacherous African tribe, I should go to Francis Galton; if I wanted an instrument for measuring a snail, or an arc of latitude, I should appeal to Francis Galton; if I wanted advice on any mechanical, or any geographical, or any sociological problem, I should consult Francis Galton. In all these matters, and many others, I feel confident he would throw light on my difficulties, and I am firmly convinced that, with his eternal youth, his elasticity of mind, and his keen insight, he can aid us in seeking an answer to one of the most vital of our national problems: How is the next generation of Englishmen to be mentally and physically equal to the past generation which has provided us with the great Victorian statesmen, writers, and men of science--most of whom are now no more--but which has not entirely ceased to be as long as we can see Francis Galton in the flesh?

BY DR. MAUDSLEY.

The subject is difficult, not only from the complexity of the matter, but also from the subtleties of the forces that we have to deal with. In considering the question of hereditary influences, as I have done for some long period of my life, one met with the difficulty, which must have occurred to everyone here, that in any family of which you take cognizance you may find one member, a son, like his mother or father, or like a mixture of the two, or more like his mother, or that he harks back to some distant ancestor; and then again you will find one not in the least like father or mother or any relatives, so far as you know. There is a variation, or whatever you may call it, of which in our present knowledge you cannot give the least explanation. Take, as a supreme instance, Shakespeare. He was born of parents not distinguished from their neighbors. He had five brothers living, one of whom came to London and acted with him at Blackfriars' Theater, and afterward died. Yet, while Shakespeare rose to the extraordinary eminence that he did, none of his brothers distinguished themselves in any way. And so it is in other families. From my long experience as a physician I could give instances in every department--in science, in literature, in art--in which one member of the family has risen to extraordinary prominence, almost genius perhaps, and another has suffered from mental disorder.

Now, how can we account for these facts on any of the known data on which we have at present to rely? In my opinion, we shall have to go far deeper down than we have been able to go by any present means of observation--to the corpuscles, atoms, electrons, or whatever else there may be; and we shall find these subjected to subtle influences of mind and body during their formations and combinations, of which we hardly realize the importance. I believe that in these potent factors the solution of the problem may be found why one member of a family rises above others, and others do not rise above the ordinary level, but perhaps sink below it. To me it seems, when I consider this matter in regard to these difficulties, that in making a comparison with the improvement of breeding of animal stock we may be apt to be misled. We are all organic machines, so to speak; at the same time, when we come to the human being there are complexities which arise from the mental state and its moods and passions which entirely disturb our conclusions, which we should be able to form in regard to the comparatively simple machines which animals are.

In view of these difficulties of the subject, it has always seemed to me that we must not be hasty in coming to conclusions and laying down any rules for the breeding of humans and the development of a eugenic conscience. In fact, we must be on our guard against the overzeal, which Dr. Galton has very properly cautioned us against. For, after all, there is the passion of love and the forces referred to in his quotation from Bacon; and I am not sure but that nature, in its own blind impulsive way, does not manage things better than we can by any light of reason, or by any rules which we can at present lay down. I am inclined to think that, as in the past, so in the future, it may be, as Shakespeare said:

"You may as well try to kindle snow by fire
As quench the fire of love by words."

BY DR. MERCIER.

Mr. Galton speaks of the laws of heredity, and dissemination of a knowledge of the laws of heredity in so far as we know them, and the qualification is very necessary. For, in so far as we know the laws, they are so obscure and complex that to us they work out as chance. We cannot detect any practical difference in the working of the laws of heredity and the way in which dice may be taken out of a lucky bag. It is quite impossible to predict from the constitution of the parents what the constitution of the offspring is going to be, even in the remotest degree. I lay that down as emphatically as I can, and I think that much widely prevailing erroneous doctrine on this head is due to the writings of Zola. I believe these writings are founded on a totally false conception as to what the laws of heredity are, and as to how they work out in the human race. He supposes that, since the parents have certain mental and moral peculiarities, the children will reproduce them with variations. It is not so. Look around among your acquaintance: look around among the people that you know; notice the intellectual and moral character of the parents and children; and, as my distinguished predecessor, Dr. Maudsley, has said, you will find that in the same family there are antithetic extremes. It is doubtful if moral traits are hereditary.

Then there is the tendency of a high civilization to reduce the fertility of its worthier members. It does seem as if there were some such tendency. Undoubtedly, in any particular race of organisms, as in organisms in general, the lower order multiplies more freely than the highly organized. Undoubtedly, we see that insects and bacteria increase and multiply exceedingly until they become as the sands on the seashore. But the elephant produces only once in thirty years. And so it is with human beings of different grades of organization. Undoubtedly, those more highly organized are less fertile than those lowly organized. But that is not the whole history of the thing. I think we have to regard a civilized community somewhat in the light of a lamp burning away at the top, replenished from the bottom. It is true that the highest strata waste and do not reproduce themselves; and it is of necessity so, because the production of very high types of human nature is always sporadic. It never occurs in races; it always occurs in individual cases.

I know I am speaking heresy in the presence of Dr. Galton. Some of these doctrines I am enunciating ought to be qualified. But, broadly and generally, and in practice, it is so, that we cannot predict from the parentage what the offspring is going to be, and we cannot go back from the offspring and say what the parentage was. If we follow the custom of the Chinese and ennoble the parents for the achievements of their children, are we to hang the parents when the offspring commit murder?

And, finally, I would say one word about suitable and unsuitable marriages. Most of what I have to say has already been said by Dr. Galton. What are suitable and unsuitable marriages? How are we to decide? In the light of our knowledge--I had better say ignorance, I think--he would be a very bold man who would undertake the duties that were intrusted to the family council among those wise and virtuous people of whom Dean Swift has given us a description, and who should determine who should be the father and who the mother, and make marriages without consulting the individuals most concerned. I think, if that were done, it is doubtful if the result would be any better than it is at present.

BY PROFESSOR WELDON.

There are two sets of objections which have been used against the points made by Dr. Galton: One set criticises the statistical method on the ground that it cannot account for a number of phenomena. In the presence of the author of the Grammar of Science, I venture to say it is no proper part of statistics to account for anything, but it is the triumph of statistics that it can describe, and with a very fair degree of accuracy, a large number of phenomena. And, as I conceive the matter, the essential object of eugenics is not to put forward any theory of causation of hereditary phenomena; it is to diffuse the knowledge of what these phenomena really are. We may not be able to account for the formation of a Shakespeare, but we may be able to tabulate a scheme of inheritance which will indicate with very fair accuracy, the percentage of cases in which children of exceptional ability result from a particular type of marriage. If we can do that alone, we shall have made a very great advance in knowledge. And my view of Mr. Galton's object is that he wishes to point out to us the way in which that knowledge may be attained. Well, that is the answer I would give to all objections to the statistical method, based on its inability to account for phenomena. It ought not to try to account for them, but to describe them. If Dr. Mercier would consult the studies on inheritance that result from Mr. Galton's labor, he would find that they describe distribution of character in the children of parents of particular kinds in regard to a very large number of characters, mental and physical. You, yourself, Mr. Chairman, have given such a comprehensive summary of those results, most of them achieved in your own laboratory, that I need not trouble this meeting by saying any more about them.

Then there is another class of objectors, whose attitude is summarized in the most interesting series of remarks by Mr. Bateson. Because a large number of apparently simple results have been attained in experimental breeding establishments, and especially by the Austrian abbot, Gregor Mendel, it has been too lightly assumed that these phenomena have henceforward superseded the actuarial method, and that the only reliable method is experiment on simple characters, such as those initiated by Mr. Mendel and carried out by Mr. Bateson in England, in Holland by Professor de Vries, and by an increasing number of men all over Europe. But the statistical method is itself necessary in order to test the results of the experiments which are supposed to supersede it. The question whether there is really an agreement between experience and hypothesis is in nearly every case hard to answer, and can be achieved only by the use of this actuarial method which Mr. Galton has taught us to apply to biological problems.

The second answer to objections of that type seems to me to be this, that while it is perfectly true that by sound actuarial methods you may deduce a justifiable result, yet from a laboratory experiment you have not arrived at the formulation of a eugenic maxim. You must look at your facts in their relation to an enormous mass of other matter, and in order to do that you must treat large masses of your race in successive generations, and you must see whether the behavior of these large masses is such as you would expect from your limited experiment. If the two things agree, you have realized as much of the truth as would serve as a basis for generalization. But if you find there is a contradiction resulting from the facts--from the large masses and limited laboratory experiments--then there is no doubt whatever that, from the point of view of human eugenics, and from the theory of evolution, the more important data are those from the larger series of material; the less important are those from laboratory experiment.

BY MR. H. G. WELLS.

We can do nothing but congratulate ourselves upon the presence of one of the great founders of sociology here today, and upon the admirable address he has given us. If there is any quality of that paper more than another upon which I would especially congratulate Dr. Galton and ourselves, it is upon its living and contemporary tone. One does not feel that it is the utterance of one who has retired from active participation in life, but of one who remains in contact with and contributing to the main current of thought. One remarks that ever since his Huxley Lecture in 1901, Dr. Galton has expanded and improved his propositions.

This is particularly the case in regard to his recognition of different types in the community, and of the need of a separate system of breeding in relation to each type. The Huxley Lecture had no recognition of that, and its admission does most profoundly modify the whole of this question of eugenics. So long as the consideration of types is not raised, the eugenic proposition is very simple: superior persons must mate with superior persons, inferior persons must not have offspring at all, and the only thing needful is some test that will infallibly detect superiority. Dr. Galton has resorted in the past to the device of inquiring how many judges and bishops and such-like eminent persons a family can boast; but that test has not gone without challenge in various quarters. Dr. Galton's inquiries in this direction in the past have always seemed to me to ignore the consideration of social advantage, of what Americans call the "pull" that follows any striking success. The fact that the sons and nephews of a distinguished judge or great scientific man are themselves eminent judges or successful scientific men may, after all, be far more due to a special knowledge of the channels of professional advancement than to any distinctive family gift. I must confess that much of Dr. Galton's classical work in this direction seems to me to be premature. I have been impressed by the idea--and even now I remain under the sway of the idea--that our analysis of human faculties is entirely inadequate for the purpose of tracing hereditary influence. I think we want a much more elaborate analysis to give us the elements of heredity--an analysis of which we have at present only the first beginnings in the valuable work of the Abbe Mendel that Mr. Bateson has recently revived.

Even the generous recognition of types that Dr. Galton has now made does not altogether satisfy my inquiring mind. I believe there still remain further depths of concession for him. At the risk of being called a "crank," I must object that even that considerable list of qualities Dr. Galton tells us that everyone would take into account does not altogether satisfy me. Take health, for example. Are there not types of health? The mating of two quite healthy persons may result in disease. I am told it does so in the case of the interbreeding of healthy white men and healthy black women about the Tanganyika region; the half-breed children are ugly, sickly, and rarely live. On the other hand, two not very healthy persons may have mutually corrective qualities, and may beget sound offspring. Then what right have we to assume that energy and ability are simply qualities? I am not even satisfied by the suggestion Dr. Galton seems to make that criminals should not breed. I am inclined to believe that a large proportion of our present-day criminals are the brightest and boldest members of families living under impossible conditions, and that in many desirable qualities the average criminal is above the average of the law-abiding poor and probably of the average respectable person. Many eminent criminals appear to me to be persons superior in many respects--in intelligence, initiative, originality--to the average judge. I will confess I have never known either.

Let me suggest that Dr. Galton's concession to the fact that there are differences of type to consider is only the beginning of a very big descent of concession, that may finally carry him very deep indeed. Eugenics, which is really only a new word for the popular American term "stirpiculture," seems to me to be a term that is not without its misleading implications. It has in it something of that same lack of a fine appreciation of facts that enabled Herbert Spencer to coin those two most unfortunate terms, "evolution" and "the survival of the fittest." The implication is that the best reproduces and survives. Now really it is the better that survives, and not the best. The real fact of the case is that in the all-around result the inferior usually perish, and the average of the species rises, but not that any exceptionally favorable variations get together and reproduce. I believe that now and always the conscious selection of the best for reproduction will be impossible; that to propose it is to display a fundamental misunderstanding of what individuality implies. The way of nature has always been to slay the hindmost, and there is still no other way, unless we can prevent those who would become the hindmost being born. It is in the sterilization of failures, and not in the selection of successes for breeding, that the possibility of an improvement of the human stock lies.

BY DR. ROBERT HUTCHISON.

My only claim to address a meeting on this subject is that not only, in common with all physicians, am I acquainted with the factors that make for physical deterioration, but I have devoted special attention to certain factors which I believe play a large part in the production of human types. I refer to feeding. I believe we have, in treating this subject, to consider two lines in which a society like this might work. It has to consider, first, the raw material of the race--and that I believe to be the view which commends itself especially to Dr. Galton--and, second, the conditions under which that raw material grows up. I believe, speaking as a physician, and judging from the raw material one sees, for example, in the children's hospitals, that it is not so necessary to improve the raw material, which is not so very bad after all, as it is to improve the environment in which the raw material is brought up. Of all the factors in that environment, that which is of the greatest importance in promoting bad physical and bad mental development, is, I believe, the food factor. If you would give me a free hand in feeding, during infancy and from ten to eighteen years of age, the raw material that is being produced, I would guarantee to give you quite a satisfactory race as the result. And I think we should do more wisely in concentrating our attention on things such as those, than in losing ourselves in a mass of scientific questions relating to heredity, about which, it must be admitted, in regard to the human race, we are still profoundly in ignorance.

BY DR. WARNER.

When I had the pleasure of reading the proof of Mr. Galton's paper, I devoted what time I could to thinking carefully over what might be expected to be the practical outcome of what I had understood from that paper, if I had read it aright. And a careful reading of Mr. Galton's paper shows that he purposely deals with only a portion of the means of developing a good nation, and that portion is marriage selection. I also gather that the tendency of the paper is to advocate the marriage between those who are most highly evolved in their respective families. But there is a point in this connection which I think is apt to be overlooked, and that is the examples we have of dangers from intermarriage between highly evolved members of two families. A considerable number of degenerates come under my observation and come to me professionally. They are mostly children; and, as far as possible, I get what knowledge I can of their families both on the paternal and the maternal side. It happens in a very considerable proportion that the father and mother are the best of the families from which they themselves have proceeded. Where a man has evolved from a humble class to a high form of mental work, and his life has attracted the feeling or affection of a lady who has evolved rather higher mental faculties than the rest of her family, there is danger. It happens very often that the parents of degenerate children are the best of their respective families. I do not go into any details, but I could give you a string of cases, straight off, to show how frequent it is among the families of men who have risen, that the first of all, if he is a male, is feeble-minded, or degenerate. There is also the great question of the girls, as well as the boys, in their personal evolution. It has been constantly said that one reason why apparently the girls' capacity is less than the boys' capacity for many sorts of work is that their mothers have not been educated. Now, I should like to ask Mr. Galton whether the girls inherit through the mother or through the father. For myself, I extremely doubt the general view.

BY MR. ELDERTON.

An important item in the study of heredity is the heredity of disease; and, if so, life-insurance offices might be of use with certain statistics. Certificates of death are given to them which are put away with the original proposal papers, filled up when the insurance was taken out, which state the cause of death of parents, brothers, and sisters, and their ages at death; also their ages when the person effected the insurance, if they were still living. Locked up in that sort of information are many data for the study of heredity in relation to disease. From this source also might be thrown light on a question of great importance--the correlation between specific diseases and fertility.

One point in conclusion: Dr. Hutchison spoke of the greater importance of environment, but in that he would hardly get actuaries to agree with him. Their observation, based on life-insurance data, would seem to show that environment operates as a mere modificatory factor after heredity has done its work.

BY BENJAMIN KIDD.

It is, I am sure, a peculiar satisfaction to have from Mr. Galton this important and interesting paper. No man of science in England has done more to encourage the study of human faculty by exact methods, and I hope the Sociological Society will endeavor to follow the example he has set us. The only item of criticism I would offer would be to say that we must not, perhaps, be sanguine in expecting too much at present from eugenics founded on statistical and actuarial methods in the study of society. We must have a real science of society before the science of eugenics can hope to gain authority. The point of Mr. Galton's paper is, I think, that, however we may differ as to other standards, we are, at all events, all agreed as to what constitutes the fittest and most perfect individual. I am not quite convinced of this. Much obscurity at present exists in sociological studies from confusing two entirely different things, namely, individual efficiency and social efficiency. Mr. Galton's fable of the animals will help me to make my meaning clear. It will be observed that he has considered the animals as individuals. If, however, we took a social type like the social insects, a contradiction which, I think, possibly underlies his example, might be visible. For instance, it is well known that all the qualities of the bees are devoted to attaining the highest possible efficiency of their societies. Yet these qualities are by no means the qualities which we would consider as contributing to a perfect individual. If the bees at some earlier stage of evolution understood eugenics, as we now understand the subject, what peculiar condemnation, for instance, would they have visited on the queen bee, who devotes her life solely to breeding? I am afraid, too, that the interesting habits of the drones would have received special condemnation from the unctuous rectitude of the time. What would have been thought even of the workers as perfect individuals with their undeveloped bodies and aborted instincts? And yet all these things have contributed in a high degree to social efficiency, and have undoubtedly made the type a winning one in evolution.

The example will apply to human society. Statistical and actuarial methods alone in the study of individual faculty often carry us to very incomplete conclusions, if not corrected by larger and more scientific conceptions of the social good. I remember our chairman, in his earlier social essays, once depicted an ideally perfect state of society. I have a distinct recollection of my own sense of relief that my birth had occurred in the earlier ages of comparative barbarism. For Mr. Pearson, I think, proposed to give the kind of people who now scribble on our railway carriages no more than a short shrift and the nearest lamp-post. I hope we shall not seriously carry this spirit into eugenics. It might renew, in the name of science, tyrannies that it took long ages of social evolution to emerge from. Judging from what one sometimes reads, many of our ardent reformers would often be willing to put us into lethal chambers, if our minds and bodies did not conform to certain standards. We are apt to forget in these matters that that sense of responsibility to life which distinguishes the higher societies is itself an asset painfully acquired by the race--a social asset of such importance that the more immediate gain aimed at would count by the side of it as no more than dust in the balance. Our methods of knowledge are as yet admittedly very imperfect. Mr. Galton himself, I remember, as the result of his earlier researches into human faculty, put the intellectual caliber of what are called the lower races many degrees below that of the European races. I ventured to point out some years ago that this assumption appeared to be premature, and the data upon which it was founded insufficient. So much is now generally admitted. Yet it would have been awkward had we proceeded to draw any large practical conclusion from it at the time. The deficiency of what have been called the lower races is now seen to be, not so much an intellectual deficiency, as a deficiency in social qualities and social history, and therefore in social inheritance.

Many examples of a similar kind might be given. It may be remembered, for instance, how a generation or two ago Malthusianism was urged upon us in the name of science and almost with the zeal of a religion. We have lived to see the opposite view now beginning to be urged with much the same zeal and emphasis. A nation or a race cannot afford to make practical mistakes on a large scale in these matters.

I trust and believe that much that Mr. Galton anticipates will be realized. But I think we must go slowly with our science of eugenics, and that we must take care, above all things, that it advances with, and does not precede, a real science of our social evolution. We must come to the work in a humble spirit. Even the highest representatives of the various social sciences must realize that in the specialized study of sociology as a whole they are scarcely more than distinguished amateurs. Otherwise, in few other departments of study would there be so much danger of incomplete knowledge, and even of downright quackery, clothing itself with the mantle and authority of science.

BY MRS. DR. DRYSDALE VICKERY.

The speech which has interested me most is that of Dr. Hutchison. Important as is the quality of hereditary stock, yet at the present juncture I would say that of still greater importance is this, that we have such a vast number of our population growing up under bad conditions. The result is an artificial, a merely economic, multiplication of inferior stocks. The question I wish to raise is this: Are we producing, in this country and in all civilized countries, a greater proportion of new individuals than can be favorably absorbed? In a country like Russia the surplus of births over deaths amounts to two millions in the year; in Germany the surplus is a million; in Britain, not quite half a million. Can we, in an old state of society, absorb that amount of new individuals and give them fair conditions of existence? I think not.

Dr. Warner spoke of the importance of our teaching of girls. I hold very strongly that the question of heredity, as we study it at present, is very much a question of masculine heredity only, and that heredity with feminine aspects is very much left out of account. Mr. Galton told us that a certain number of burgesses' names had absolutely disappeared; but what about the names of their wives, and how would that consideration affect his conclusion? In the future, the question of population will, I hope, be considered very much from the feminine point of view; and if we wish to produce a well-developed race, we must treat our womankind a little better than we do at present. We must give them something more like the natural position which they should hold in society. Women's specialized powers must be utilized for the intellectual advancement of the race.

BY LADY WELBY.

The science of eugenics as not only dealing with "all influences that improve the inborn qualities of a race," but also "with those that develop them to the utmost advantage," must have the most pressing interest for women. And one of the first things to do--pending regulative reform--is to prepare the minds of women to take a truer view of their dominant natural impulse toward service and self-sacrifice. They need to realize more clearly the significance of their mission to conceive, to develop, to cherish, and to train--in short, in all senses to mother--the next and through that the succeeding generations of man.

As things are they have almost entirely missed the very point both of their special function and of their strongest yearnings. They have lost that discerning guidance of eugenic instinct and that inerrancy of eugenic preference which, broadly speaking, in both sexes have given us the highest types of man yet developed. The refined and educated woman of this day is brought up to countenance, and to see moral and religious authority countenance, social standards which practically take no account of the destinies and the welfare of the race. It is thus hardly wonderful that she should be failing more and more to fulfil her true mission, should indeed too often be unfaithful to it, spending her instinct of devotion in unworthy, or at least barren, directions. Yet, once she realizes what the results will be that she can help to bring about, she will be even more ready than the man to give herself, not for that vague empty abstraction, the "future," but for the coming generations among which her own descendants may be reckoned. For her natural devotion to her babe--the representative of the generations yet to come--is even more complete than that of her husband, which indeed is biologically, though she knows it not, her recognition in him of the means to a supreme end.

But it is not only thus that women are concerned with the profound obligation to the race which the founder of the science of eugenics is bringing home to the social conscience. At present, anyhow, a large proportion of civilized women find themselves from one or another cause debarred from this social service in the direct sense.

There is another kind of race-motherhood open to, and calling for the intelligent recognition and intelligent fulfillment by, all women. There are kinds of natural and instinctive knowledge of the highest value which the artificial social conditions of civilization tend to efface. There are powers of swift insight and penetration--powers also of unerring judgment-- which are actually atrophied by the ease and safety secured in highly organized communities. These, indeed, are often found in humble forms, which might be called in-sense and fore-sense.

While I would lay stress on the common heritage of humanity which gives the man a certain motherhood and the woman a certain fatherhood in outlook, perhaps also in intellectual function, we are here mainly concerned with the specialized mental activities of women as distinguished from those of men. It has long been a commonplace that women have, as a rule, a larger share of so-called "intuition" than men. But the reasons for this, its true nature and its true work and worth, have never, so far as I know, been brought forward. It is obvious that these reasons cannot be properly dealt with--indeed, can but barely be indicated--in these few words. They involve a reference to all the facts which anthropology, archaeology, history, psychology, and physiology, as well as philology, have so far brought to our knowledge. They mean a review of these facts in a new light--that which, in many cases, the woman who has preserved or recovered her earlier, more primitive racial prerogative can alone throw upon them.

I will only here mention such facts as the part primitively borne by women in the evolution of crafts and arts, including the important one of healing; and point out the absolute necessity, since an original parity of muscular development in the animal world was lost, of their meeting physical coercion by the help of keen, penetrative, resourceful wits, and the "conning" which (from the temptation of weakness to serve by deception) became what we now mean by "cunning." To these I think we may add the woman's leading part in the evolution of language. While her husband was the "man of action," and in the heat of the chase and of battle, or the labor of building huts, making stockades, weapons, etc., the "man of few words," she was necessarily the talker, necessarily the provider or suggester of symbolic sounds, and with them of pictorial signs, by which to describe the ever-growing products of human energy, intelligence, and constructiveness, and the ever-growing needs and interests of the race; in short, the ever-widening range of social experience.

We are all, men and women, apt to be satisfied now--as we have just been told, for instance, in the Faraday Lecture--with things as they are. But that is just what we all came into the world to be dissatisfied with. And while it may now be said that women are more conservative than men, they still tend to be more adaptive. If the fear of losing by violent change what has been gained for the children were removed, women would be found, as of old, in the van of all social advance.

Lastly I would ask attention to the fact that throughout history, and I believe in every part of the world, we find the elderly woman credited with wisdom and acting as the trusted adviser of the man. It is only in very recent times and in highly artificial societies that we have begun to describe the dense, even the imbecile, man as an "old woman." Here we have a notable evidence indeed of the disastrous atrophy of the intellectual heritage of woman, of the partial paralysis of that racial motherhood out of which she naturally speaks! Of course, as in all such cases, the inherited wisdom became associated with magic and wonder-working and sibylline gifts of all kinds. The always shrewd and often really originative, predictive, and wide-reaching qualities of the woman's mind (especially after the climacteric had been passed) were mistaken for the uncanny and devil-derived powers of the sorceress and the witch. Like the thinker, the moralist, and the healer, she was tempted to have recourse to the short-cut of the "black arts," and appeal to the supernatural and miraculous, as science would now define these. We still see, alas, that the special insight and intelligence of women tends to spend itself at best on such absurd misrepresentations of her own instincts and powers as "Christian Science;" or worse, on clairvoyance and fortune-telling and the like. Then, it may be, elaborate theories of personality--mostly wide of the mark, and constructed upon phenomena which we could learn to analyze and interpret on strictly scientific and really philosophical principles, and thus to utilize at every point. We are, in short, failing to enlist for true social service a natural reserve of intelligence which, mostly lying unrecognized and unused in any healthy form, forces its way out in morbid ones. And let us here remember that we are not merely considering a question of sex. No mental function is entirely unrepresented on either side.

The question then arises: How is civilized man to avail himself fully of this reserve of power? The provisional answer seems to be: By making the most of it through the training of all girls for the resumption of a lost power of race-motherhood which shall make for their own happiness and well-being, in using these for the benefit of humanity; in short, by making the most of it through truer methods in education than any which have yet, except in rare cases, been applied. Certainly until we do this many social problems of the highest importance will needlessly continue to baffle and defeat us.

BY MR. HOBHOUSE.

I feel a good deal of difficulty in intervening in this extremely interesting discussion at this stage. I, like many of you, am only a listener to what the biologists have to tell us in this matter. Until we have very definite information as to what heredity can do, I think those of us who are only students of sociology, and who cannot lay any claim whatever to be biologists, ought to keep silence. We have this afternoon had extremely divergent views put before us as to the actual and probable operation of heredity, and it seems quite clear that before we begin to tackle this question, which deals with one of the most powerful of human passions, with a view to regulate it, we must have highly perfected knowledge. We must have the chart properly mapped out before we do anything that might lead us into greater danger than we at present incur.

As to the two factors, stock and environment, no one can doubt that both are of fundamental importance in relation to the welfare of society; and no one can doubt that, if the kind of precise knowledge which I desiderate could be laid before us by the biologist, it would have considerable influence on our views of what is not only ethically right, but what could be legislatively enforced. Of these two factors, stock and environment, which can we modify with the greater ease and certainty of not doing harm? It is fairly obvious that we can affect the environment of mankind in certain definite ways. We have the accumulation of considerable tradition as to the way a given act will affect the social environment. When we come to bring stock into consideration, we are still dealing with that which is very largely unknown. At the same time, we owe a great deal of thanks to Mr. Galton for raising this subject. On the one hand, it seems to me that the bare conception of a conscious selection as a way in which educated society would deal with stock is infinitely higher than natural selection with which biologists have confronted every proposal of sociology. If we are to take the problem of stock into consideration at all, it ought to be in the way of intelligently handling the blind forces of nature. But until we have far more knowledge and agreement as to criteria of conscious selection, I fear we cannot, as sociologists, expect to do much for our society on these lines.

BY G. A. ARCHDALL REID, M.D.

I think it would be impossible to imagine a subject of greater importance, or to name one of which the public is more ignorant. At the root of every moral and social question lies the problem of heredity. Until a knowledge of the laws of heredity is more widely diffused, the public will grope in the dark in its endeavors to solve many pressing difficulties.

How shall we bring about a "wide dissemination of a knowledge of the laws of heredity, so far as they are surely known, and the promotion of their further study"? We shall not be able to reach the public until we are able to influence the education of a body of men whose studies naturally bring them into relation with the subject, and who, when united, are numerous enough and powerful enough to sway public opinion. Only one such body of men exists--the medical profession. When the study of heredity forms as regular a part of the medical curriculum as anatomy and physiology, then, and not till then, will the laws of heredity be brought to bear on the solution of social problems. At present a specialist like Mr. Galton has a very limited audience. In effect, it is composed of specialists like himself. Until among medical men a systematic knowledge of heredity is substituted for a bundle of prejudices, and close and clear reasoning for wild guesswork, the influence of men of Mr. Galton's type most unhappily is not likely to extend much beyond the limits of a few learned societies.

The first essential is a clear grasp of the distinction which exists between what are known as inborn traits and what are known as acquired traits. Inborn traits are those with which the individual is "born," which come to him by nature, which form his natural inheritance from his parents. Acquired traits are alterations produced in inborn traits by influences to which they are exposed during the life of the individual. Thus a man's limbs are inborn traits, but the changes produced in his limbs by exercise, injury, and so forth are acquired traits. All men know that the individual tends to transmit his inborn traits to his offspring. But it is now almost universally denied by students of heredity that he tends to transmit his acquired traits. The real, the burning question among students of heredity is whether changes in an individual caused by the action of the environment on him tend in any way to affect the offspring subsequently born to him. Thus, for example, does good health in an individual tend to benefit his offspring? Does his ill-health tend to enfeeble them?

It is generally assumed that changes in the parents do tend to influence the inborn traits of offspring. Thus we have heard much of the degeneracy which it is alleged is befalling our race owing to the bad hygienic conditions under which it dwells in our great growing cities. The assumption is made that the race is being so injured by the bad conditions that the descendant of a line of slum-dwellers, if removed during infancy to the country, would, on the average, be inferior physically to the descendant of a line of rustics; whereas, contrariwise, the descendant of a line of rustics, if removed during infancy to the slums would be superior physically to the majority of the children he would meet there.

I believe this assumption to be a totally unwarrantable one. It is founded on a confusion between inborn and acquired traits. Of course, the influences which act on a slum-bred child tend to injure him personally. But there is no certain evidence that the descendant of a line of slum-dwellers is on the average inferior to the descendant of a line of rustics whose parents migrated to the slums just after his birth. I believe, in fact, that while a life in the slums deteriorates the individual, it does not affect directly the hereditary tendencies of the race in the least. A vast mass of evidence may be adduced in support of this contention. Slums are not a creation of yesterday. They have existed in many countries from very ancient times. Races that have been most exposed to slum life cannot be shown to be inferior physically and mentally to those that have been less or not at all exposed. The Chinese, for example, who have been more exposed, and for a longer time, to such influences than any other people, are physically and mentally a very fine race, and certainly not inferior to the Dyacks of Borneo, for example.

There is also a mass of collateral evidence. Thus Africans and other races have been literally soaked in the extremely virulent and abundant poison of malaria for thousands of years. We know how greatly malaria damages the individual. But Africans have not deteriorated. Like the Chinese, physically, at any rate, they are a very fine race. Practically speaking, every negro child suffers from malaria, and may perish of it. But while the sufferings of the negroes from malaria have produced no effect on the race, the deaths of negroes from malaria have produced an immense effect. The continual weeding out, during many generations, of the unfittest has rendered the race pre-eminently resistant to malaria; so that negroes can now flourish in countries which we, who have suffered very little from malaria, find it impossible to colonize. Similarly, the inhabitants of northern Europe have suffered greatly for thousands of years from consumption, especially in places where the population has been dense--where there have been many cities and towns, and therefore slums. They also have not deteriorated; they have merely grown pre-eminently strong against consumption. They are able to live, for example, in English cities, in which consumption is very rife, and which individuals of races which have been less exposed to the disease find as dangerous as Englishmen find the west coast of Africa.

During the last four hundred years consumption has spread very widely, and now no race is able to dwell in cities and towns, especially in cold and temperate climates, that has not undergone evolution against it. In other words, no race is capable of civilization that has not undergone evolution against consumption, as well as against other diseases and influences, deteriorating to the individual, which civilization brings in its train. Many biologists and most medical men believe that influences acting on parents tend directly to alter the hereditary tendencies of offspring. In technical terms, they believe that variations are caused by action of the environment. How they contrive to do so in the face of the massive and conclusive evidence afforded by the natural history of human races in relation to disease is beyond my comprehension. How could a race undergo evolution against malaria (for example), if parental disease altered and injured the hereditary tendencies of the offspring? How could natural selection select, if all the variations presented for selection were unfavorable? The observations on disease and injury published by Brown-Sequard, Cossar Ewart, and many medical men are capable of an interpretation different to that which they have given.

Mr. Galton speaks as if the causes which have brought about the disappearance of most savage races when brought in contact with high civilization were obscure. I can assure him, however, that they have been worked out precisely and statistically by many medical observers on the spot. Apart from extermination by war, the only savage races which are disappearing are those of the New World, and in every instance they are perishing from the enormous mortality caused among them by introduced diseases against which their races have undergone no evolution. He will find these precise statistics in the tables of mortality issued by all the public health departments that exist in America, Polynesia, and Australasia. He will find also many accounts in the journals of travelers. If he will read the records of visits of parties of aborigines from the New World to the cities of Europe, he will find that their mortality, especially from consumption, was invariably high. There is nothing more mysterious about the disappearance of these races than there is about the disappearance of the dodo and the bison. They are perishing, not because, as Froude poetically puts it, they are like "caged eagles," incapable of domestication, but simply and solely because they are weak against certain diseases. If malaria instead of consumption were prevalent in cities, the English would be incapable of civilization, whereas the negroes and the wild tribes about the Amazon, and in New Guinea and Borneo, would be particularly capable of it. Indeed, it may be taken as a general rule, to which there is no exception, that every race throughout the world is resistant to every disease precisely in proportion to its past experience of it, and that only those races are capable of civilization which are resistant to the diseases of dense populations.

Before the voyage of Columbus, hardly a zymotic disease, with the exception of malaria, was known in the New World. The inhabitants of the Old World had slowly evolved against the diseases of civilized life under gradually worsening conditions, caused by the gradual increase of population, and therefore of disease. They introduced these maladies to the natives of the New World under the worst conditions then known. They built cities and towns, the natural breeding-places of all zymotic diseases, except those of the malarial type. They gave the natives clothes, which are the best vehicle for the transport of microbes. They endeavored to Christianize and civilize the natives, and so drew them into buildings where they were infected. They forced them to labor on plantations and in mines. In fact, they forced on them every facility for "catching" disease. As a result, they exterminated or almost exterminated them. The natives of the Gilbert Islands lately petitioned our government not to permit missionaries to settle among them, as they feared destruction. They were perfectly right. Clothes and churches and schoolrooms are fatal to such people. The Tasmanians, before they were quite exterminated, had a saying that good people--that is, people who went frequently to church--died young. They also were perfectly right--that is, as regards their own race.

It is a highly significant fact that, whereas every white man's city in Asia or Africa has its native quarter, no white man's city in the New World has a native quarter. To find the pure aborigines of the New World we must go to parts remote from cities and towns. They cannot accomplish in a few generations an evolution which the natives of the Old World accomplished only after hundreds, perhaps thousands, of generations, and at the cost of millions of lives. The negroes, who were introduced into America to fill the void created by the disappearing aborigines, have perhaps persisted, but they had already undergone some evolution against consumption--the chief disease of civilization--and much evolution against measles and other diseases. Yet even the negroes would not have persisted had they not been introduced under special conditions. They were taken to the warmer parts of America at a time when consumption was little rife as compared to its prevalence in the cities of Europe, and they were employed mainly in agricultural occupations. They had a special start, and were placed under conditions that worsened only slowly. As a result they underwent evolution, and are now able to persist in America. But African negroes, as compared to the natives of the densely populated parts of Europe and Asia, have undergone little evolution against consumption. As a consequence, no African colony has ever succeeded in Europe or Asia. For instance, the Dutch and English imported about twelve thousand negroes into Ceylon a century ago. Within twenty years all had perished, mainly of consumption, and that in a country where the disease is not nearly so prevalent as in northern Europe, or the more settled parts of northern Asia.

There can be little doubt that the sterility of the New World races when brought into contact with civilization is due mainly to ill-health. The sterility of our upper classes is mainly voluntary. It is due to the possession of special knowledge. The growing sterility of the lower classes is due to the spread of that knowledge; hence the general and continuous fall in the birth-rate. Until we are able to estimate the part played by this knowledge it would be vain to collect statistics of comparative sterility.

We have frequently been told that no city family can persist for four generations unless fortified by country blood. That, I believe, is a complete error. Country blood does not strengthen city blood. It weakens it, for country blood has been less thoroughly purged of weak elements. It is true, owing to the large mortality in cities and the great immigration from the country, it is difficult to find a city family which has had no infusion of country blood for four generations. But to suppose on that account that country blood strengthens city blood against the special conditions of city life is to confuse post hoc with propter hoc.

Slum life and the other evil influences of civilization, including bad and insufficient food, vitiated air, and zymotic diseases, injure the individual. They make him acquire a bad set of traits. But they do not injure the hereditary tendencies of the race. Had they done so, civilization would have been impossible. Civilized man would have become extinct. On the contrary, by weeding out the unfittest, they make the race strong against those influences.

If, then, we wish to raise the standard of our race, we must do it in two ways. In the first place, we must improve the conditions under which the individual develops, and so make him a finer animal. In the second place, we must endeavor to restrict, as much as possible, the marriage of the physically and mentally unfit. In other words, we must attend both to the acquired characters and to inborn characters. By merely improving the conditions under which people live we shall improve the individual, but not the race. The same measures will not achieve both objects. Medical men have done a good deal for the improvement of the acquired characters of the individual by improving sanitation. They have attempted nothing toward the second object, the improvement of the inborn traits of the race. Nor will they attempt anything until they have acquired a precise knowledge of heredity from biologists. On the other hand, before biologists are able to influence medical men they must bring to bear their exact methods of thought on the great changes produced in various races by their experience, during thousands of years, of disease. I am sure our knowledge of heredity will gain in precision and breadth by a consideration of these tremendous, long-continued, and drastic experiments conducted by nature. No experiments conducted by man can compare with them in magnitude and completeness. And, as I have already intimated, the precise statistical information on which our conclusions may be based is already collected and tabulated. I am quite sure it is good neither for medicine nor biology that medical men and biologists should live as it were in separate and closed compartments, each body ignoring the splendid mass of data collected by the other. Much of medicine should be a part of biology, and much of biology a part of medicine.

BY W. LESLIE MACKENZIE, M.A., M.D.

It is to me a great privilege to be permitted to say something in any discussion where Dr. Francis Galton is leader; because from early in my student days until now I have felt that his method of handling sociological facts has always been at once scientific and practical. Whether the ideas he represents have had some subconscious effect in driving me into the public-health service, I cannot tell; but since I entered that service fourteen years ago, I have been in a multitude of minor ways impressed with two things: first, that in every Scottish community, rural and urban, a hygienic renascence is in progress; second, that in the many forms it assumes, it has no explicit basis in scientific theory. In attempting, some time ago, to penetrate to the root-idea of the public-health movement, I concluded that, rightly or wrongly, we have all taken for granted certain postulates. The hygienic renascence is the objective side of a movement whose ethical basis is the set effort after a richer, cleaner, intenser, life in a highly organized society. 
The postulates of hygienics--whose administrative form constitutes the public-health service--are such as these: that society or the social group is essentially organic; that the social organism, being as yet but little integrated, is capable of rapid and easy modification, that is, of variations secured by selection; that disease is a name for certain maladaptations of the social organism or of its organic units; that diseases are thus, in greater or lesser degrees, preventable; that the prevention of disease promotes social evolution; that, by the organization of representative agencies--county councils, town councils, district councils, parish councils, and the like--the processes of natural selection may be indefinitely aided by artificial selections; that thus, by continuous modification of the social organism, of its organic units, and of the compound environment of both, it is possible to further the production of better citizens--more energetic, more alert, more versatile, more individuated. Provisionally, public health may be defined as the systematic application of scientific ideas to the extirpation of diseases and thereby to the direct or indirect establishment of beneficial variations both in the social organism and in its organic units. In more concrete form, it is an organized effort of the collective social energy to heighten the physiological normal of civilized living.

A science of hygienics might thus be regarded as almost equivalent to the science of eugenics; character is presupposed in both. The fundamental assumption of hygienics is that the human organism is capable of greater things than on the average it has anywhere shown, and that its potentialities can be elicited by the systematic improvement of the environment. From the practical side, hygienics aims at "preparing a place" for the highest average of faculty to develop in.

Take heredity--one of Dr. Galton's points. The modern movement for the extirpation of tubercular phthisis began with the definite proof that the disease is due to a bacillus. But the movement did not become world-wide until the belief in the heredity of tuberculosis had been sapped. So long as the tubercular person was weighted by the superstition that tubercular parents must necessarily produce tubercular children, and that the parents of tubercular children must themselves have been tubercular, he had little motive to seek for cure, the fatalism being here supported by the alleged inheritance of disease. Now that he knows how to resist the invasion of a germ, he is proceeding in his multitudes to fortify himself. What is true of tuberculosis is true of many other infections. Consequently, every hygienist will agree with Dr. Galton that the dissemination of a true theory of heredity is of the first practical importance. Nor is the evil of a wrong theory of heredity confined to infectious disease. If the official "nomenclature of diseases" be carefully scrutinized, it will be found that the vast majority of diseases are due either to the attacks of infective or parasitic organisms, or to the functional stress of environment, which for this purpose is better named "nurture." This has recently been borne in upon me by the examination of school children. The conclusion inevitably arising out of the facts is that inherited capacities are in every class of society so masked by the effects of nurture, good or bad, that we have as yet no means of determining, in any individual case, how much is due to inheritance and how much to nurture. There is here an unlimited field for detailed study.

Next, fertility. It is, I suppose, on the whole, true that the less opulent classes are more fertile than the more opulent. But I am not prepared to accept the assumption that the economically "upper classes" coincide with the biologically "upper classes." May it not be that the relatively infertile "upper classes" (economical) are only the biological limit of the "lower classes," from which the "upper" are continually recruited? Until the economically "lower classes" are analyzed in such detail as will enable us to eliminate what is due to bad environment, we cannot come to final conclusions on the relative fertility or infertility of "upper" or "lower." Until such an analysis is made, we cannot well assume that the difference in fertility is in any degree due to fundamental biological differences or modifications. Dr. Noel Paton has recently shown that starved mothers produce starved offspring and that well-fed mothers produce well-fed offspring. In his particular experiment with guinea pigs the numbers of offspring were unaffected. If this experiment should be verified on the large scale, it would form some ground for doubting whether the mere increase of comfort directly produces biological infertility. The capacity to reproduce may remain; but reproduction may be limited by a different ethic. The universal fall in the birth-rate has been too rapid to justify simpliciter the conclusion that biological capacity has altered.

When the public-health organizations have succeeded in extirpating the grosser evils of environment, they will, it is hoped, proceed to deal more intimately with the individual. In the present movement for the medical examination and supervision of school children we have an indication of great developments. If to the relatively coarse methods of practical hygienics we could now add the precision of anthropometry, we should find ready to hand in the schools an unlimited quantity of raw material. We might even hope to add some pages to the "golden book" of "thriving families." Incidentally, one might suggest a minor inquiry: Of the large thriving families, do the older or the middle or the younger members show, on the average, the greater ultimate capacity for civic life? My impression is that, in our present social conditions, the middle children are likely to show the highest percentage of total capacity. This is a mere impression, but it is worth putting to the test of facts.

To the worker in the fighting line, as the public-health officer must always regard himself, Dr. Galton's suggestions come with inspiration and light.

BY G. BERNARD SHAW

I agree with the paper, and go so far as to say that there is now no reasonable excuse for refusing to face the fact that nothing but a eugenic religion can save our civilization from the fate that has overtaken all previous civilizations.

It is worth pointing out that we never hesitate to carry out the negative side of eugenics with considerable zest, both on the scaffold and on the battlefield. We have never deliberately called a human being into existence for the sake of civilization; but we have wiped out millions. We kill a Tibetan regardless of expense, and in defiance of our religion, to clear the way to Lhassa for the Englishman; but we take no really scientific steps to secure that the Englishman when he gets there, will be able to live up to our assumption of his superiority.

"Eugenics: Its Definition, Scope and Aims" by Francis Galton

Neo-Darwinism – Wikipedia

Neo-Darwinism is the interpretation of Darwinian evolution through natural selection as it has variously been modified since it was first proposed. It was early on used to name Charles Darwin's ideas of natural selection separated from his hypothesis of pangenesis as a Lamarckian source of variation involving blending inheritance.[1]

In the early 20th century, the concept became associated with the modern synthesis of natural selection and Mendelian genetics that took place at that time.

In the late 20th century and into the 21st century, neo-Darwinism denoted any strong advocacy of Darwin's thinking, such as the gene-centered view of evolution.

As part of the disagreement about whether natural selection alone was sufficient to explain speciation, in 1880, Samuel Butler called Alfred Russel Wallace's view neo-Darwinism.[2][3]

The term was again used by George Romanes in 1895 to refer to the version of evolution advocated by Wallace and August Weismann with its heavy dependence on natural selection.[4]

Weismann and Wallace rejected the Lamarckian idea of inheritance of acquired characteristics that even Darwin took for granted.[5][6] The term was first used to explain that evolution occurs solely through natural selection, and not by the inheritance of acquired characteristics resulting from use or disuse.[7] The basis for the complete rejection of Lamarckism was Weismann's germ plasm theory. Weismann realised that the cells that produce the germ plasm, or gametes (such as sperm and egg in animals), separate from the somatic cells that go on to make other body tissues at an early stage in development. Since he could see no obvious means of communication between the two, he asserted that the inheritance of acquired characteristics was therefore impossible; a conclusion now known as the Weismann barrier.[8]

From the 1880s to the 1930s, the term continued to be applied to the panselectionist school of thought, which argued that natural selection was the main and perhaps sole cause of all evolution.[9] From then until around 1947, the term was used for the panselectionist followers of Ronald Fisher.

Following the development, from about 1918 to 1947, of the modern synthesis of evolutionary biology, the term neo-Darwinian was often used to refer to that contemporary evolutionary theory.[10][11]

Biologists, however, have not limited their application of the term neo-Darwinism to the historical modern synthesis. For example, Ernst Mayr wrote in 1984 that "the term neo-Darwinism for the synthetic theory [the modern synthesis of the early 20th century] is wrong, because the term neo-Darwinism was coined by Romanes in 1895 as a designation of Weismann's theory."[12][1][7][13] Publications such as Encyclopædia Britannica similarly use neo-Darwinism to refer to current evolutionary theory, not the version current during the early 20th century synthesis.[14] Richard Dawkins and Stephen Jay Gould have used the term in their writings and lectures to denote the forms of evolutionary biology that were contemporary when they were writing.[15][16]


Superintelligence survey – Future of Life Institute

Max Tegmark's new book on artificial intelligence, Life 3.0: Being Human in the Age of Artificial Intelligence, explores how AI will impact life as it grows increasingly advanced, perhaps even achieving superintelligence far beyond human level in all areas. For the book, Max surveys experts' forecasts, and explores a broad spectrum of views on what will/should happen. But it's time to expand the conversation. If we're going to create a future that benefits as many people as possible, we need to include as many voices as possible. And that includes yours! Below are the answers from the first 14,866 people who have taken the survey that goes along with Max's book. To join the conversation yourself, please take the survey here.

The first big controversy, dividing even leading AI researchers, involves forecasting what will happen. When, if ever, will AI outperform humans at all intellectual tasks, and will it be a good thing?

Everything we love about civilization is arguably the product of intelligence, so we can potentially do even better by amplifying human intelligence with machine intelligence. But some worry that superintelligent machines would end up controlling us and wonder whether their goals would be aligned with ours. Do you want there to be superintelligent AI, i.e., general intelligence far beyond human level?

In his book, Tegmark argues that we shouldn't passively ask "what will happen?" as if the future is predetermined, but instead ask what we want to happen and then try to create that future. What sort of future do you want?

If superintelligence arrives, who should be in control?

If you one day get an AI helper, do you want it to be conscious, i.e., to have subjective experience (as opposed to being like a zombie which can at best pretend to be conscious)?

What should a future civilization strive for?

Do you want life spreading into the cosmos?

In Life 3.0, Max explores 12 possible future scenarios, describing what might happen in the coming millennia if superintelligence is/isn't developed. You can find a cheatsheet that quickly describes each here, but for a more detailed look at the positives and negatives of each possibility, check out chapter 5 of the book. Here's a breakdown so far of the options people prefer:

You can learn a lot more about these possible future scenarios along with fun explanations about what AI is, how it works, how it's impacting us today, and what else the future might bring when you order Max's new book.

The results above will be updated regularly. Please add your voice by taking the survey here, and share your comments below!


Genetic predisposition – Wikipedia

A genetic predisposition is a genetic characteristic which influences the possible phenotypic development of an individual organism within a species or population under the influence of environmental conditions. In medicine, genetic susceptibility to a disease refers to a genetic predisposition to a health problem,[1] which may eventually be triggered by particular environmental or lifestyle factors, such as tobacco smoking or diet. Genetic testing is able to identify individuals who are genetically predisposed to certain diseases.

Predisposition is the capacity we are born with to learn things such as language and concept of self. Negative environmental influences may block the predisposition (ability) we have to do some things. Behaviors displayed by animals can be influenced by genetic predispositions. Genetic predisposition towards certain human behaviors is scientifically investigated by attempts to identify patterns of human behavior that seem to be invariant over long periods of time and in very different cultures.

For example, philosopher Daniel Dennett has proposed that humans are genetically predisposed to have a theory of mind because there has been evolutionary selection for the human ability to adopt the intentional stance.[1] The intentional stance is a useful behavioral strategy by which humans assume that others have minds like their own. This assumption allows you to predict the behavior of others based on personal knowledge of what you would do.

In 1951, Hans Eysenck and Donald Prell published an experiment in which identical (monozygotic) and fraternal (dizygotic) twins, ages 11 and 12, were tested for neuroticism. It is described in detail in an article published in the Journal of Mental Science, in which Eysenck and Prell concluded that, "The factor of neuroticism is not a statistical artifact, but constitutes a biological unit which is inherited as a whole.... neurotic predisposition is to a large extent hereditarily determined."[2]

E. O. Wilson's book on sociobiology and his book Consilience discuss the idea of genetic predisposition to behaviors.

The field of evolutionary psychology explores the idea that certain behaviors have been selected for during the course of evolution.

The Genetic Information Nondiscrimination Act, which was signed into law by President Bush on May 21, 2008,[3] prohibits discrimination in employment and health insurance based on genetic information.


Ayn Rand's Ideas - An Overview | AynRand.org

Ayn Rand wrote volumes urging people to be selfish: "The Objectivist ethics proudly advocates and upholds rational selfishness--which means: the values required for man's survival qua man--which means: the values required for human survival--not the values produced by the desires, the emotions, the aspirations, the feelings, the whims or the needs of irrational brutes, who have never outgrown the primordial practice of human sacrifices, have never discovered an industrial society and can conceive of no self-interest but that of grabbing the loot of the moment..." ("The Objectivist Ethics," 31)

What? Aren't people already too selfish? Just do whatever you feel like, be a thoughtless jerk, and exploit people to get ahead. Easy, right? Except that acting thoughtlessly and victimizing others, Rand claims, is not in your self-interest.

What Rand advocates is an approach to life that's unlike anything you've ever heard before. Selfishness, in her philosophy, means:

"At the dawn of our lives," writes Rand, "we seek a noble vision of man's nature and of life's potential." Rand's philosophy is that vision. Explore it for yourself.


Comet | astronomy | Britannica.com

History: Ancient Greece to the 19th century

The Greek philosopher Aristotle thought that comets were dry exhalations of Earth that caught fire high in the atmosphere or similar exhalations of the planets and stars. However, the Roman philosopher Seneca thought that comets were like the planets, though in much larger orbits. He wrote:

The man will come one day who will explain in what regions the comets move, why they diverge so much from the other stars, what is their size and their nature.

Aristotle's view won out and persisted until 1577, when Danish astronomer Tycho Brahe attempted to use parallax to triangulate the distance to a bright comet. Because he could not measure any parallax, Brahe concluded that the comet was very far away, at least four times farther than the Moon.

Brahe's student, German astronomer Johannes Kepler, devised his three laws of planetary motion using Brahe's meticulous observations of Mars but was unable to fit his theory to the very eccentric orbits of comets. Kepler believed that comets traveled in straight lines through the solar system. The solution came from English scientist Isaac Newton, who used his new law of gravity to calculate a parabolic orbit for the comet of 1680. A parabolic orbit is open, with an eccentricity of exactly 1, meaning the comet would never return. (A circular orbit has an eccentricity of 0.) Any less-eccentric orbits are closed ellipses, which means a comet would return.
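The classification Newton relied on (closed orbits return, open orbits do not) depends only on the eccentricity. As a minimal sketch, not drawn from any astronomy library, the rule can be written as a small Python function; the name `orbit_type` and the tolerance `tol` are illustrative choices:

```python
def orbit_type(e: float, tol: float = 1e-9) -> str:
    """Classify a conic-section orbit by its eccentricity e."""
    if e < 0:
        raise ValueError("eccentricity cannot be negative")
    if e < tol:
        return "circular"      # closed: e = 0
    if e < 1 - tol:
        return "elliptical"    # closed: 0 < e < 1, the comet returns
    if abs(e - 1) <= tol:
        return "parabolic"     # open: e = 1, the comet never returns
    return "hyperbolic"        # open: e > 1, likely interstellar

print(orbit_type(0.967))  # a Halley-like eccentricity -> "elliptical"
```

The tolerance matters in practice: an orbit fit from noisy observations can land on either side of e = 1, which is exactly why Halley's parabolic fits left the question of closed orbits open.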

Newton was friends with English astronomer Edmond Halley, who used Newton's methods to determine the orbits for 24 observed comets, which he published in 1705. All the orbits were fit with parabolas because the quality of the observations at that time was not good enough to determine elliptical or hyperbolic orbits (eccentricities greater than 1). But Halley noted that the comets of 1531, 1607, and 1682 had remarkably similar orbits and had appeared at approximately 76-year intervals. He suggested that it was really one comet in an approximately 76-year orbit that returned at regular intervals. Halley predicted that the comet would return again in 1758. He did not live to see his prediction come true, but the comet was recovered on Christmas Day, 1758, and passed closest to the Sun on March 13, 1759. The comet was the first recognized periodic comet and was named in Halley's honour, Comet Halley.
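Halley's period-matching argument amounts to simple arithmetic on the apparition years. This small sketch (variable names are illustrative) reproduces the intervals and the predicted 1758 return:

```python
# Apparitions Halley judged to be one and the same comet.
apparitions = [1531, 1607, 1682]

# Intervals between successive apparitions: 76 and 75 years.
intervals = [b - a for a, b in zip(apparitions, apparitions[1:])]

# Mean period of about 75.5 years; planetary perturbations
# account for the small spread between returns.
mean_period = sum(intervals) / len(intervals)

# Predicted next apparition: 1682 + ~76 = 1758, as Halley forecast.
predicted = apparitions[-1] + round(mean_period)
print(intervals, mean_period, predicted)
```

The spread of roughly a year between intervals is real, not observational error: Jupiter and Saturn perturb the comet's orbit slightly on each pass, which is why the actual perihelion slipped into March 1759.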

Halley also speculated whether comets were members of the solar system or not. Although he could only calculate parabolic orbits, he suggested that the orbits were actually eccentric and closed, writing:

For so their Number will be determinate and, perhaps, not so very great. Besides, the Space between the Sun and the fix'd Stars is so immense that there is Room enough for a Comet to revolve tho' the period of its Revolution be vastly long.

The German astronomer Johann Encke was the second person to recognize a periodic comet. He determined that a comet discovered by French astronomer Jean-Louis Pons in 1818 did not seem to follow a parabolic orbit. He found that the orbit was indeed a closed ellipse. Moreover, he showed that the orbital period of the comet around the Sun was only 3.3 years, still the shortest orbital period of any comet on record. Encke also showed that the same comet had been observed by French astronomer Pierre Méchain in 1786, by British astronomer Caroline Herschel in 1795, and by Pons in 1805. The comet was named in Encke's honour, as Comet Halley was named for the astronomer who described its orbit.

Encke's Comet soon presented a new problem for astronomers. Because it returned so often, its orbit could be predicted precisely based on Newton's law of gravity, with effects from gravitational perturbations by the planets taken into account. But Encke's Comet repeatedly arrived about 2.5 hours too soon. Its orbit was slowly shrinking. The problem became even more complex when it was discovered that other periodic comets arrived too late. Those include the comets 6P/d'Arrest, 14P/Wolf 1, and even 1P/Halley, which typically returns about four days later than a purely gravitational orbit would predict.

Several explanations were suggested for this phenomenon, such as a resisting interplanetary medium that caused the comet to slowly lose orbital energy. However, that idea could not explain comets whose orbits were growing, not shrinking. German mathematician and astronomer Friedrich Bessel suggested that expulsion of material from a comet near perihelion was acting like a rocket motor and propelling the comet into a slightly shorter- (or longer-) period orbit each time it passed close to the Sun. History would prove Bessel right.

As the quality of the observations and mathematical techniques to calculate orbits improved, it became obvious that most comets were on elliptical orbits and thus were members of the solar system. Many were recognized to be periodic. But some orbit solutions for long-period comets suggested that they were slightly hyperbolic, suggesting that they came from interstellar space. That problem would not be solved until the 20th century.

Another interesting problem for astronomers was a comet discovered in 1826 by the Austrian military officer and astronomer Wilhelm, Freiherr (baron) von Biela. Calculation of its orbit showed that it, like Encke's Comet, was a short-period comet; it had a period of about 6.75 years. It was only the third periodic comet to be confirmed. It was identified with a comet observed by French astronomers Jacques Lebaix Montaigne and Charles Messier in 1772 and by Pons in 1805, and it returned, as predicted, in 1832. In 1839 the comet was too close in the sky to the Sun and could not be observed, but it was seen again on schedule in November 1845. On January 13, 1846, American astronomer Matthew Maury found that there was no longer a single comet: there were two, following each other closely around the Sun. The comets returned as a pair in 1852 but were never seen again. Searches for the comets in 1865 and 1872 were unsuccessful, but a brilliant meteor shower appeared in 1872 coming from the same direction from which the comets should have appeared. Astronomers concluded that the meteor shower was the debris of the disrupted comets. However, they were still left with the question as to why the comet broke up. That recurring meteor shower is now known as the Andromedids, named for the constellation from which it appears to radiate, but is also sometimes referred to as the Bielids.

The study of meteor showers received a huge boost on November 12 and 13, 1833, when observers saw an incredible meteor shower, with rates of hundreds and perhaps thousands of meteors per hour. That shower was the Leonids, so named because its radiant (or origin) is in the constellation Leo. It was suggested that Earth was encountering interplanetary debris spread along the Earth-crossing orbits of yet unknown bodies in the solar system. Further analysis showed that the orbits of the debris were highly eccentric.

American mathematician Hubert Newton published a series of papers in the 1860s in which he examined historical records of major Leonid meteor showers and found that they occurred about every 33 years. That showed that the Leonid particles were not uniformly spread around the orbit. He predicted another major shower for November 1866. As predicted, a large Leonid meteor storm occurred on November 13, 1866. In the same year, Italian astronomer Giovanni Schiaparelli computed the orbit of the Perseid meteor shower, usually observed on August 10–12 each year, and noted its strong similarity to the orbit of Comet Swift-Tuttle (109P/1862 O1), discovered in 1862. Soon after, the Leonids were shown to have an orbit very similar to Comet Tempel-Tuttle (55P/1865 Y1), discovered in 1865. Since then the parent comets of many meteoroid streams have been identified, though the parent comets of some streams remain a mystery.

Meanwhile, the study of comets benefitted greatly from the improvement in the quality and size of telescopes and the technology for observing comets. In 1858 English portrait artist William Usherwood took the first photograph of a comet, Comet Donati (C/1858 L1), followed by American astronomer George Bond the next night. The first photographic discovery of a comet was made by American astronomer Edward Barnard in 1892, while he was photographing the Milky Way. The comet, which was in a short-period orbit, was known as D/Barnard 3 because it was soon lost, but it was recovered by Italian astronomer Andrea Boattini in 2008 and is now known as Comet Barnard/Boattini (206P/2008 T3). In 1864 Italian astronomer Giovanni Donati was the first to look at a comet through a spectroscope, and he discovered three broad emission bands that are now known to be caused by long-chain carbon molecules in the coma. The first spectrogram (a spectrum recorded on film) was of Comet Tebbutt (C/1881 K1), taken by English astronomer William Huggins on June 24, 1881. Later the same night, an American doctor and amateur astronomer, Henry Draper, took spectra of the same comet. Both men later became professional astronomers.

Some years before the appearance of Comet Halley in 1910, the molecule cyanogen was identified as one of the molecules in the spectra of cometary comae. Cyanogen is a poisonous gas derived from hydrogen cyanide (HCN), a well-known deadly poison. It was also detected in Halley's coma as that comet approached the Sun in 1910. That led to great consternation as Earth was predicted to pass through the tail of the comet. People panicked, bought "comet pills," and threw end-of-the-world parties. But when the comet passed by only 0.15 AU away on the night of May 18–19, 1910, there were no detectable effects.

The 20th century saw continued progress in cometary science. Spectroscopy revealed many of the molecules, radicals, and ions in the comae and tails of comets. An understanding began to develop about the nature of cometary tails, with the ion (Type I) tails resulting from the interaction of ionized molecules with some form of corpuscular radiation, possibly electrons and protons, from the Sun, and the dust (Type II) tails coming from solar radiation pressure on the fine dust particles emitted from the comet.

Astronomers continued to ask, "Where do the comets come from?" There were three schools of thought: (1) that comets were captured from interstellar space, (2) that comets were erupted out of the giant planets, or (3) that comets were primeval matter that had not been incorporated into the planets. The first idea had been suggested by French mathematician and astronomer Pierre Laplace in 1813, while the second came from another French mathematician-astronomer, Joseph Lagrange. The third came from English astronomer George Chambers in 1910.

The idea of an interstellar origin for comets ran into some serious problems. First, astronomers showed that capture of an interstellar comet by Jupiter, the most massive planet, was a highly unlikely event and probably could not account for the number of short-period comets then known. Also, no comets had ever been observed on truly hyperbolic orbits. Some long-period comets did have orbit solutions that were slightly hyperbolic, barely above an eccentricity of 1.0. But a truly hyperbolic comet approaching the solar system with the Sun's velocity relative to the nearby stars of about 20 km (12 miles) per second would have an eccentricity of 2.0.
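The eccentricity such a genuinely interstellar comet would have follows directly from its excess speed and perihelion distance. A minimal sketch, assuming a perihelion of 2 AU (an illustrative choice, not a value from the text; smaller perihelia give smaller but still clearly hyperbolic eccentricities):

```python
# Eccentricity of the hyperbolic orbit of a comet falling in from
# interstellar space:  e = 1 + q * v_inf**2 / mu,  with mu = G * M_sun.
MU = 1.32712e20           # m^3/s^2, Sun's gravitational parameter
AU = 1.496e11             # m
v_inf = 20e3              # m/s, Sun's speed relative to nearby stars
q = 2.0 * AU              # assumed perihelion distance (illustrative)

e = 1.0 + q * v_inf**2 / MU
print(f"e = {e:.2f}")     # clearly hyperbolic, near the ~2.0 quoted above
```

An eccentricity this far above 1.0 would have been unmistakable in the orbit solutions, which is why the barely hyperbolic fits were so suspicious.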

In 1914 Swedish-born Danish astronomer Elis Strömgren published a special list of cometary orbits. Strömgren took the well-determined orbits of long-period comets and projected them backward in time to before the comets had entered the planetary region. He then referenced the orbits to the barycentre (the centre of mass) of the entire solar system. He found that most of the apparently hyperbolic orbits became elliptical. That proved that the comets were members of the solar system. Orbits of that type are referred to as "original" orbits, whereas the orbit of a comet as it passes through the planetary region is called the "osculating" (or instantaneous) orbit, and the orbit after the comet has left the planetary region is called the "future" orbit.

The idea of comets erupting from giant planets was favoured by the Soviet astronomer Sergey Vsekhsvyatsky based on similar molecules having been discovered in both the atmospheres of the giant planets and in cometary comae. The idea helped to explain the many short-period comets that regularly encountered Jupiter. But the giant planets have very large escape velocities, about 60 km (37 miles) per second in the case of Jupiter, and it was difficult to understand what physical process could achieve those velocities. So Vsekhsvyatsky moved the origin sites to the satellites of the giant planets, which had far lower escape velocities. However, most scientists still did not believe in the eruption model. The discovery of volcanos on Jupiter's large satellite Io by the Voyager 1 spacecraft in 1979 briefly resurrected the idea, but Io's composition proved to be a very poor match to the composition of comets.

Another idea about cometary origins was promoted by the English astronomer Raymond Lyttleton in a research paper in 1951 and a book, The Comets and Their Origin, in 1953. Because it was known that some comets were associated with meteor showers observed on Earth, the "sandbank" model suggested that a comet was simply a cloud of meteoritic particles held together by its own gravity. Interplanetary gases were adsorbed on the surfaces of the dust grains and escaped when the comet came close to the Sun and the particles were heated. Lyttleton went on to explain that comets were formed when the Sun and solar system passed through an interstellar dust cloud. The Sun's gravity focused the passing dust in its wake, and these subclouds then collapsed under their own gravity to form the cometary sandbanks.

One problem with that theory was that Lyttleton estimated that the gravitational focusing by the Sun would bring the particles together only about 150 AU behind the Sun and solar system. But that did not agree well with the known orbits of long-period comets, which showed no concentration of comets that would have formed at that distance or in that direction. In addition, the total amount of gases that could be adsorbed on a sandbank cloud was not sufficient to explain the measured gas production rates of many observed comets.

In 1948 Dutch astronomer Adrianus van Woerkom, as part of his Ph.D. thesis work at the University of Leiden, examined the role of Jupiter's gravity in changing the orbits of comets as they passed through the planetary system. He showed that Jupiter could scatter the orbits in energy, leading to either longer or shorter orbital periods and correspondingly to larger or smaller orbits. In some cases the gravitational perturbations from Jupiter were sufficient to change the previously elliptical orbits of the comets to hyperbolic, ejecting them from the solar system and sending them into interstellar space. Van Woerkom also showed that because of Jupiter, repeated passages of comets through the solar system would lead to a uniform distribution in orbital energy for the long-period comets, with as many long-period comets ending in very long-period orbits as in very short-period orbits. Finally, van Woerkom showed that Jupiter would eventually eject all the long-period comets to interstellar space over a time span of about one million years. Thus, the comets needed to be resupplied somehow.
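Van Woerkom's diffusion argument can be caricatured with a toy random walk in orbital energy, which for a bound orbit is proportional to 1/a. The kick size, starting energy, and passage count below are illustrative assumptions, not his actual values:

```python
import random

# Toy model in the spirit of van Woerkom's analysis: at each perihelion
# passage Jupiter changes the comet's 1/a by a random amount. A comet
# whose 1/a goes negative is on a hyperbolic orbit and is ejected.
random.seed(1)
KICK = 5e-4      # AU^-1: assumed typical change in 1/a per passage
N = 20000        # comets followed
MAX_RETURNS = 200

surviving = []
for _ in range(N):
    x = 2e-5                           # 1/a in AU^-1: a near-parabolic newcomer
    for _ in range(MAX_RETURNS):
        x += random.gauss(0.0, KICK)   # Jupiter's kick at this passage
        if x <= 0.0:                   # orbital energy now positive: ejected
            break
    else:
        surviving.append(x)

print(f"{len(surviving)} of {N} comets still bound after {MAX_RETURNS} returns")
```

Most of the simulated comets are quickly ejected, and the 1/a values of the few survivors spread over a range enormous compared with their tiny starting energy, in line with the flat energy distribution van Woerkom derived.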

Van Woerkom's thesis adviser was the Dutch astronomer Jan Oort, who had become famous in the 1920s for his work on the structure and rotation of the Milky Way Galaxy. Oort became interested in the problem of where the long-period comets came from. Building on van Woerkom's work, Oort closely examined the energy distribution of long-period comet original orbits as determined by Strömgren. He found that, as van Woerkom had predicted, there was a uniform distribution of orbital energies for most energy values. But, surprisingly, there was also a large excess of comets with orbital semimajor axes (half of the long axis of the comet's elliptical orbit) larger than 20,000 AU.

Oort suggested that the excess of orbits at very large distances could only be explained if the long-period comets came from there. He proposed that the solar system was surrounded by a vast cloud of comets that stretched halfway to the nearest stars. He showed that gravitational perturbations by random passing stars would perturb the orbits in the comet cloud, occasionally sending a comet into the planetary region where it could be observed. Oort referred to those comets making their first passage through the planetary region as "new" comets. As the new comets pass through the planetary region, Jupiter's gravity takes control of their orbits, spreading them in orbital energy, and either capturing them to shorter periods or ejecting them to interstellar space.

Based on the number of comets seen each year, Oort estimated that the cloud contained 190 billion comets; today that number is thought to be closer to one trillion comets. Oort's hypothesis was all the more impressive because it was based on accurate original orbits for only 19 comets. In his honour, the cloud of comets surrounding the solar system is called the Oort cloud.

Oort noticed that the number of long-period comets returning to the planetary system was far less than what his model predicted. To account for that, he suggested that the comets were physically lost by disruption (as had happened to Biela's Comet). Oort proposed two values for the disruption rate of comets on each perihelion passage, 0.3 and 1.9 percent, which both gave reasonably good results when comparing his predictions with the actual energy distribution, except for an excess of new comets at near-zero energy.

In 1979 American astronomer Paul Weissman (the author of this article) published computer simulations of the Oort cloud energy distribution using planetary perturbations by Jupiter and Saturn and physical models of loss mechanisms such as random disruption and formation of a nonvolatile crust, based on actual observations of comets. He showed that a very good agreement with the observed energy distribution could be obtained if new comets were disrupted about 10 percent of the time on the first perihelion passage from the Oort cloud and about 4 percent of the time on subsequent passages. Also, comet nuclei developed nonvolatile crusts, cutting off all coma activity, after about 10–100 returns, on average.

In 1981 American astronomer Jack Hills suggested that in addition to the Oort cloud there was also an inner cloud extending inward toward the planetary region to about 1,000 AU from the Sun. Comets are not seen coming from this region because their orbits are too tightly bound to the Sun; stellar perturbations are typically not strong enough to change their orbits significantly. Hills hypothesized that only if a star came very close, even penetrating through the Oort cloud, could it excite the orbits of the comets in the inner cloud, sending a shower of comets into the planetary system.

But where did the Oort cloud come from? At large distances on the order of 10⁴–10⁵ AU from the Sun, the solar nebula would have been too thin to form large bodies like comets that are several kilometres in diameter. The comets had to have formed much closer to the planetary region. Oort suggested that the comets were thrown out of the asteroid belt by close encounters with Jupiter. At that time it was not known that most asteroids are rocky, carbonaceous, or iron bodies and that only a fraction contain any water.

Oort's work was preceded in part by that of the Estonian astronomer Ernst Öpik. In 1932 Öpik published a paper examining what happened to meteors or comets scattered to very large distances from the Sun, where they could be perturbed by random passing stars. He showed that the gravitational tugs from the stars would raise the perihelion distances of most objects to beyond the most distant planet. Thus, he predicted that there would be a cloud of comets surrounding the solar system. However, Öpik said little about the comets returning to the planetary region, other than that some comets could be thrown into the Sun by the stars during their evolution outward to the cloud. Indeed, Öpik concluded:

comets of an aphelion distance exceeding 10,000 a.u., are not very likely to occur among the observable objects, because of the rapid increase of the average perihelion distance due to stellar perturbations.

Öpik also failed to make any comparison between his results and the known original orbits of the long-period comets.

Oort's paper, published in 1950, revolutionized the field of cometary dynamics. Two months later a paper on the nature of the cometary nucleus by Fred Whipple would do the same for cometary physics. Whipple combined many of the ideas of the day and suggested that the cometary nucleus was a solid body made up of volatile ices and meteoritic material. That was called the "icy conglomerate" model but also became more popularly known as the "dirty snowball."

Whipple provided proof for his model in the form of the shrinking orbit of Encke's Comet. Whipple believed that, as Bessel had suggested, rocket forces from sublimating ices on the sunlit side of the nucleus would alter the comet's orbit. For a nonrotating solid nucleus, the force would push the nucleus away from the Sun, appearing to lessen the effect of gravity. But if the comet nucleus was rotating (as most solar system bodies do) and if the rotation pole was not perpendicular to the plane of the comet's orbit, both tangential forces (forward or backward along the comet's direction of motion) and out-of-plane forces (up or down) could result. The effect was helped by the thermal lag caused by the Sun continuing to heat the nucleus surface after local noontime, just as temperatures on Earth are usually at their maximum a few hours after local noon.

Thus, Whipple explained the slow shrinking of Encke's orbit as the result of tangential forces that were pointed opposite to the comet's direction of motion, causing the comet nucleus to slow down, slowly shrinking the orbit. That model also explained periodic comets whose orbits were growing, such as d'Arrest and Wolf 1, depending on the direction of the nuclei's rotation poles and the direction in which the nuclei were rotating. Because the rocket force results from the high activity of the comet nucleus near perihelion, the force does not change the perihelion distance but rather the aphelion distance, either raising or lowering it.

Whipple also pointed out that the loss of cometary ices would leave a layer of nonvolatile material on the surface of the nucleus, making sublimation more difficult, as the heat from the Sun needed to filter down through multiple layers to where there were fresh ices. Furthermore, Whipple suggested that the solar system's zodiacal dust cloud came from dust released by comets as they passed through the planetary system.

Whipple's ideas set off an intense debate over whether the nucleus was a solid body or not. Many scientists still advocated Lyttleton's idea of a sandbank nucleus, simply a cloud of meteoritic material with adsorbed gases. The question would not be put definitively to rest until the first spacecraft encounters with Halley's Comet in 1986.

Solid proof for Whipple's nongravitational force model came from English astronomer Brian Marsden, a colleague of Whipple's at the Smithsonian Astrophysical Observatory in Cambridge, Massachusetts. Marsden was an expert on comet and asteroid orbits and tested Whipple's icy conglomerate model against the orbits of many known comets. Using a computer program that determined the orbits of comets and asteroids from observations, Marsden added a term for the expected rocket effect when the comet was active. In this he was aided by Belgian astronomer Armand Delsemme, who carefully calculated the rate of water ice sublimation as a function of a comet's distance from the Sun.
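The distance dependence that emerged from this work is usually written as a dimensionless sublimation law g(r) multiplying the rocket-force components. As a sketch, the widely quoted water-ice constants (normalized so that g(1 AU) is approximately 1) give:

```python
# Marsden-style nongravitational acceleration for water ice:
#   accel = g(r) * (A1 * radial + A2 * transverse + A3 * normal),
# with g(r) = alpha * (r/r0)^-m * (1 + (r/r0)^n)^-k.
# The constants below are the commonly quoted water-ice fit.
ALPHA, R0 = 0.1113, 2.808          # r0 in AU
M, N, K = 2.15, 5.093, 4.6142

def g(r_au: float) -> float:
    """Dimensionless sublimation law vs heliocentric distance (AU)."""
    s = r_au / R0
    return ALPHA * s**-M * (1.0 + s**N)**-K

print(f"g(1 AU) = {g(1.0):.3f}")   # ~1 by construction
print(f"g(3 AU) = {g(3.0):.4f}")   # activity drops steeply beyond ~2.8 AU
```

The steep falloff beyond about 2.8 AU reflects the fact that water ice sublimates vigorously only in the inner solar system, which is why the rocket force acts mainly near perihelion.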

When one calculates an orbit for an object, the calculation usually does not fit all the observed positions of the object perfectly. Small errors creep into the observed positions for many reasons, such as not knowing the exact time of the observations or finding the positions using an out-of-date star catalog. So every orbit fit has a mean residual, which is the average difference between the observations and the comet's predicted position based on the newly determined orbit. Mean residuals of less than about 1.5 arc seconds are considered a good fit.
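As a concrete (entirely made-up) example, the mean residual is just the average angular offset between where the comet was observed and where the fitted orbit says it should have been:

```python
import math

# Offsets (arcsec) between observed and predicted positions for six
# hypothetical observations: (RA*cos(Dec) offset, Dec offset).
offsets = [(0.8, -0.4), (-1.1, 0.6), (0.3, 0.9),
           (-0.5, -1.2), (1.0, 0.2), (-0.2, 0.7)]

residuals = [math.hypot(dx, dy) for dx, dy in offsets]
mean_residual = sum(residuals) / len(residuals)
print(f"mean residual = {mean_residual:.2f} arcsec")  # under ~1.5 is a good fit
```

A physically motivated extra term, such as a rocket force, is justified when including it lowers this number: the orbit then tracks the observations more closely than gravity alone can.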

When Marsden calculated the comet orbits, he found that he could obtain smaller mean residuals if he included the rocket force in his calculations. Marsden found that for a short-period comet, the magnitude of the rocket force was typically only a few hundred-thousandths of the solar gravitational attraction, but that was enough to change the time when the comet would return. Later, Marsden and colleagues computed the rocket forces for long-period comets and found that there too the mean residuals were reduced. For the long-period comets, the rocket force was typically a few ten-thousandths of the solar gravitational attraction. Long-period comets tend to be far more active than short-period comets, and thus for them the force is larger.

A further interesting result of Marsden's work was that when he performed his calculations on apparently hyperbolic comet orbits, the resulting eccentricities often changed from hyperbolic to elliptical. Very few comets were left with hyperbolic original orbits, and all of those were only slightly hyperbolic. Marsden had provided further proof that all long-period comets were members of the solar system.

In 1951 the Dutch American astronomer Gerard Kuiper published an important paper on where the comets had formed. Kuiper was studying the origin of the solar system and suggested that the volatile molecules, radicals, and ions observed in cometary comae and tails (e.g., CH, NH, OH, CN, CO+, CO2+, N2+) must come from ices frozen in the solid nucleus (e.g., CH4, NH3, H2O, HCN, CO, CO2, and N2). But those ices could only condense in the solar nebula where it was very cold. So he suggested that comets had formed at 38–50 AU from the Sun, where mean temperatures were only about 30–45 K (−243 to −228 °C, or −406 to −379 °F).
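Kuiper's temperature range can be checked with the standard equilibrium-temperature scaling for a fast-rotating blackbody, T ≈ 278 K / √r (r in AU). This simple formula is an approximation, not Kuiper's own calculation, but it lands close to the quoted range:

```python
# Equilibrium temperature of a fast-rotating blackbody at heliocentric
# distance r (AU): T ≈ 278 K / sqrt(r). A quick consistency check on the
# cold outer nebula where cometary ices could condense.
def t_eq(r_au: float) -> float:
    """Equilibrium temperature (K) at r astronomical units from the Sun."""
    return 278.0 / r_au ** 0.5

for r in (38, 50):
    print(f"r = {r} AU: T = {t_eq(r):.0f} K")
```

Both values fall in the tens of kelvins, cold enough for highly volatile species such as CO and N2 to freeze out, which is the crux of Kuiper's argument.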

Kuiper suggested that the solar nebula did not end at the orbit of what was then considered the most distant planet, Pluto, at about 39 AU, but that it continued on to about 50 AU. He believed that at those large distances from the Sun neither the density of solar nebula material nor the time was enough to form another planet. Rather, he suggested that there would be a belt of smaller bodies (i.e., comets) between 38 and 50 AU. He also suggested that Pluto would dynamically eject comets from that region to distant orbits, forming the Oort cloud.

Astronomers have since discovered that Pluto is too small to have done that job (or even to be considered a planet), and it is really Neptune at 30 AU that defines the outer boundary of the planetary system. Neptune is large enough to slowly scatter comets both inward to short-period orbits and outward to the Oort cloud, along with some help from the other giant planets.

Kuiper's 1951 paper did not achieve the same fame as those by Oort and Whipple in 1950, but astronomers occasionally followed up his ideas. In 1968 Egyptian astronomer Salah Hamid worked with Whipple and Marsden to study the orbits of seven comets that passed near the region of Kuiper's hypothetical comet belt beyond Neptune. They found no evidence of gravitational perturbations from the belt and set upper limits on the mass of the belt of 0.5 Earth masses out to 40 AU and 1.3 Earth masses out to 50 AU.

The situation changed in 1980 when Uruguayan astronomer Julio Fernández suggested that a comet belt beyond Neptune would be a good source for the short-period comets. Up until that time it was thought that short-period comets were long-period comets from the Oort cloud that had dynamically evolved to short-period orbits because of planetary perturbations, primarily by Jupiter. But astronomers who tried to simulate that process on computers found that it was very inefficient and likely could not supply new short-period comets fast enough to replace the existing ones that either were disrupted, faded away, or were perturbed out of the planetary region.

Fernández recognized that a key element in understanding the short-period comets was their relatively low-inclination orbits. Typical short-period comets have orbital inclinations up to about 35°, whereas long-period comets have completely random orbital inclinations from 0° to 180°. Fernández suggested that the easiest way to produce a low-inclination short-period comet population was to start with a source that had a relatively low inclination. Kuiper's hypothesized comet belt beyond Neptune fit this requirement. Fernández used dynamical simulations to show how comets could be perturbed by larger bodies in the comet belt, on the order of the size of Ceres, the largest asteroid (diameter of about 940 km [580 miles]), and be sent into orbits that could encounter Neptune. Neptune then could pass about half of the comets inward to Uranus, with the other half being sent outward to the Oort cloud. In that manner, comets could be handed down to each giant planet and finally to Jupiter, which placed the comets in short-period orbits.

Fernández's paper renewed interest in a possible comet belt beyond Neptune. In 1988 American astronomer Martin Duncan and Canadian astronomers Thomas Quinn and Scott Tremaine built a more complex computer simulation of the trans-Neptunian comet belt and again showed that it was the likely source of the short-period comets. They also proposed that the belt be named in honour of Gerard Kuiper, based on the predictions of his 1951 paper. As fate would have it, the distant comet belt had also been predicted in two lesser-known papers in 1943 and 1949 by a retired Irish army officer and astronomer, Kenneth Edgeworth. Therefore, some scientists refer to the comet belt as the Kuiper belt, while others call it the Edgeworth-Kuiper belt.

Astronomers at observatories began to search for the distant objects. In 1992 they were finally rewarded when British astronomer David Jewitt and Vietnamese American astronomer Jane Luu found an object well beyond Neptune in an orbit with a semimajor axis of 43.9 AU, an eccentricity of only 0.0678, and an inclination of only 2.19°. The object, officially designated (15760) 1992 QB1, has a diameter of about 200 km (120 miles). Since 1992 more than 1,500 objects have been found in the Kuiper belt, some almost as large as Pluto. In fact, it was the discovery of that swarm of bodies beyond Neptune that led to Pluto being recognized in 2006 as simply one of the largest bodies in the swarm and no longer a planet. (The same thing happened to the largest asteroid Ceres in the mid-19th century when it was recognized as simply the largest body in the asteroid belt and not a true planet.)

In 1977 American astronomer Charles Kowal discovered an unusual object orbiting the Sun among the giant planets. Named 2060 Chiron, it is about 200 km (120 miles) in diameter and has a low-inclination orbit that stretches from 8.3 AU (inside the orbit of Saturn) to 18.85 AU (just inside the orbit of Uranus). Because it can make close approaches to those two giant planets, the orbit is unstable on a time span of several million years. Thus, Chiron likely came from somewhere else. Even more interesting, several years later Chiron began to display a cometary coma even though it was still very far from the Sun. Chiron is one of a few objects that appear in both asteroid and comet catalogs; in the latter it is designated 95P/Chiron.

Chiron was the first of a new class of objects in giant-planet-crossing orbits to be discovered. The searches for Kuiper belt objects have also led to the discovery of many similar objects orbiting the Sun among the giant planets. Collectively they are now known as the Centaur objects. About 300 such objects have now been found, and more than a few also show sporadic cometary activity.

The Centaurs appear to be objects that are slowly diffusing into the planetary region from the Kuiper belt. Some will eventually be seen as short-period comets, while most others will be thrown into long-period orbits or even ejected to interstellar space.

In 1996 European astronomers Eric Elst and Guido Pizarro found a new comet, which was designated 133P/Elst-Pizarro. But when the orbit of the comet was determined, it was found to lie in the outer asteroid belt with a semimajor axis of 3.16 AU, an eccentricity of 0.162, and an inclination of only 1.39°. A search of older records showed that 133P had been observed previously in 1979 as an inactive asteroid. So it is another object that was catalogued as both a comet and an asteroid.

The explanation for 133P was that, given its position in the asteroid belt, where maximum solar surface temperatures are only about −48 °C (−54 °F), it likely acquired some water in the form of ice from the solar nebula. Like in comets, the ices near the surface of 133P sublimated early in its history, leaving an insulating layer of nonvolatile material covering the ice at depth. Then a random impact from a piece of asteroidal debris punched through the insulating layer and exposed the buried ice. Comet 133P has shown regular activity at the same location in its orbit for at least three orbits since it was discovered.

Twelve additional objects in asteroidal orbits have been discovered since that time, most of them also in the outer main belt. They are sometimes referred to as "main belt comets," though the more recently accepted term is "active asteroids."

The latter half of the 20th century saw a massive leap forward in the understanding of the solar system as a result of spacecraft visits to the planets and their satellites. Those spacecraft collected a wealth of scientific data close up and in situ. The anticipated return of Halley's Comet in 1986 provided substantial motivation to begin using spacecraft to study comets.

The first comet mission (of a sort) was the International Cometary Explorer (ICE) spacecraft's encounter with Comet 21P/Giacobini-Zinner on September 11, 1985. The mission had originally been launched as part of a joint project by the U.S. National Aeronautics and Space Administration (NASA) and the European Space Agency (ESA) known as the International Sun-Earth Explorer (ISEE). The mission consisted of three spacecraft, two of them, ISEE-1 and -2, in Earth orbit and the third, ISEE-3, positioned in a heliocentric orbit between Earth and the Sun, studying the solar wind in Earth's vicinity.

In 1982 and 1983 engineers maneuvered ISEE-3 to accomplish several gravity-assist encounters with the Moon, which put it on a trajectory to encounter 21P/Giacobini-Zinner. The spacecraft was targeted to pass through the ion tail of the comet, about 7,800 km (4,800 miles) behind the nucleus at a relative velocity of 21 km (13 miles) per second, and returned the first in situ measurements of the magnetic field, plasma, and energetic particle environment inside a comet's tail. Those measurements confirmed the model of the comet's ion tail first put forward in 1957 by the Swedish physicist (and later Nobel Prize winner) Hannes Alfvén. It also showed that H2O+ was the most common ion in the plasma tail, consistent with the Whipple model of an icy conglomerate nucleus. However, ICE carried no instruments to study the nucleus or coma of the comet.

In 1986 five spacecraft were sent to encounter Halley's Comet. They were informally known as the Halley Armada and consisted of two Japanese spacecraft, Suisei and Sakigake (Japanese for "comet" and "pioneer," respectively); two Soviet spacecraft, Vega 1 and 2 (a contraction of Venus-Halley using Cyrillic spelling); and an ESA spacecraft, Giotto (named after the Italian painter who depicted the Star of Bethlehem as a comet in a fresco painted in 1305–06).

Suisei flew by Halley on March 8, 1986, at a distance of 151,000 km (94,000 miles) on the sunward side and produced ultraviolet images of the comet's hydrogen corona, an extension of the visible coma seen only in ultraviolet light. It also measured the energetic particle environment in the solar wind ahead of the comet. Sakigake's closest approach to the comet was on March 11, 1986, at a distance of 6.99 million km (4.34 million miles), and it made additional measurements of the solar wind.

Before flying past Halley's Comet, the two Soviet spacecraft had flown by Venus and had each dropped off landers and balloons to study that planet. Vega 1 flew through the Halley coma on March 6, 1986, to within 8,889 km (5,523 miles) of the nucleus and made numerous measurements of the coma gas and dust composition, plasma and energetic particles, and magnetic field environment. It also returned the first picture ever of a solid cometary nucleus. Unfortunately, the camera was slightly out of focus and had other technical problems that required considerable image processing to see the nucleus. Vega 2 fared much better when it flew through the Halley coma on March 9 to within 8,030 km (4,990 miles) of the nucleus, and its images clearly showed a peanut-shaped nucleus about 16 by 8 km (10 by 5 miles) in diameter. The nucleus was also very dark, reflecting only about 4 percent of the incident sunlight, which had already been established from Earth-based observations.

Both Vega spacecraft carried infrared spectrometers designed to measure the temperature of the Halley nucleus. They found quite warm temperatures between 320 and 400 K (47 and 127 °C [116 and 260 °F]). That surprised many scientists, who had predicted that the effect of water ice sublimation would be to cool the nucleus's surface; water ice requires a great deal of heat to sublimate. The high temperatures suggested that much of the nucleus's surface was not sublimating, but why?

Whipple's classic paper in 1950 had suggested that as comets lost material from the surface, some particles were too heavy to escape the weak gravity of the nucleus and fell back onto the surface, forming a lag deposit. That idea was later studied by American astronomer and author David Brin in his thesis work with his adviser, Sri Lankan physicist Asoka Mendis, in 1979. As the lag deposit built up, it would effectively insulate the icy materials below it from sunlight. Calculations showed that a layer only 10-100 cm (4-39 inches) in thickness could completely turn off sublimation from the surface. Brin and Mendis predicted that Halley would be so active that it would blow away any lag deposit, but that was not the case. Only about 30 percent of Halley's sunlit hemisphere was active. Bright dust jets could be seen coming from specific areas on the nucleus surface, but much of the surface showed no visible activity.

Giotto flew through Halley's coma on March 14, 1986, and passed only 596 km (370 miles) from the nucleus. It returned the highest-resolution images of the nucleus and showed a very rugged terrain with mountain peaks jutting up hundreds of metres from the surface. It also showed the same peanut shape that Vega 2 saw but from a different viewing angle and with much greater visible detail. Discrete dust jets were coming off the nucleus surface, but the resolution was not good enough to reveal the source of the jets.

Giotto and both Vega spacecraft obtained numerous measurements of the dust and gas in the coma. Dust particles came in two types: silicate and organic. The silicate grains were typical of rocks found on Earth such as forsterite (Mg2SiO4), a high-temperature mineral, that is, one which would be among the first to condense out of the hot solar nebula. Analyses of other grains showed that the comet was far richer in magnesium relative to iron. The organic grains were composed solely of the elements carbon, hydrogen, oxygen, and nitrogen and were called CHON grains, based on the chemical symbols of those elements. Larger grains were also detected that were combinations of silicate and CHON grains, supporting the view that comet nuclei had accreted from the slow aggregation of tiny particles in the solar nebula.

The three spacecraft also measured gases in the coma, with water the dominant molecule and carbon monoxide accounting for about 7 percent of the gas relative to water. Formaldehyde, carbon dioxide, and hydrogen cyanide were also detected at a few percent relative to water.

The Halley Armada was a rousing success and resulted from international cooperation by many nations. Its success is even more impressive when one considers that the spacecraft all flew by the Halley nucleus at velocities ranging from 68 to 79 km per second (152,000 to 177,000 miles per hour). (The velocities were so high because Halley's retrograde orbit had it going around the Sun in the opposite direction from the spacecraft.)

Giotto was later retargeted using assists from Earth's gravity to pass within about 200 km (120 miles) of the nucleus of the comet 26P/Grigg-Skjellerup. The flyby was successful, but some of the scientific instruments, including the camera, were no longer working after being sandblasted at Halley.

The next comet mission was not until 1998, when NASA launched Deep Space 1, a spacecraft designed to test a variety of new technologies. After flying past the asteroid 9969 Braille in 1999, Deep Space 1 was retargeted to fly past the comet 19P/Borrelly on September 22, 2001. Images of the Borrelly nucleus showed it to be shaped like a bowling pin, with very rugged terrain on parts of its surface and mesa-like formations over a large area of it. Individual dust and gas jets were seen emanating from the surface, but the activity was far less than that of Halleys Comet.

The NASA Stardust mission was launched in 1999 with the goal of collecting samples of dust from the coma of Comet 81P/Wild 2. At a flyby speed of 6.1 km per second (13,600 miles per hour), the dust samples would be completely destroyed by impact with a hard collector. Therefore, Stardust used a material made of silica (sand) called aerogel that had a very low density, approaching that of air. The idea was that the aerogel would slow the dust particles without destroying them, much as a detective might shoot a bullet into a box full of cotton in order to collect the undamaged bullet. It worked, and thousands of fine dust particles were returned to Earth in 2006. Perhaps the biggest surprise was that the sample contained high-temperature materials that must have formed much closer to the Sun than where the comets formed in the outer solar system. That unexpected result meant that material in the solar nebula had been mixed, at least from the inside outward, during the formation of the planets.

Stardusts images of the nucleus of Wild 2 showed a surface that was radically different from either Halley or Borrelly. The surface appeared to be covered with large flat-floored depressions. Those were likely not impact craters, as they did not have the correct morphology and there were far too many large ones. There was some suggestion that it was a very new cometary surface on a nucleus that had not been close to the Sun before. Support for that was the fact that Wild 2 had been placed into its current orbit by a close Jupiter approach in 1974, reducing the perihelion distance to about 1.5 AU (224 million km, or 139 million miles). Before the Jupiter encounter, its perihelion was 4.9 AU (733 million km, or 455 million miles), beyond the region where water ice sublimation is significant.

In 2002 NASA launched a mission called Contour (Comet Nucleus Tour) that was to fly by Encke's Comet and 73P/Schwassmann-Wachmann 3 and possibly continue on to 6P/d'Arrest. Unfortunately, the spacecraft structure failed when leaving Earth orbit.

In 2005 NASA launched yet another comet mission, called Deep Impact. It consisted of two spacecraft, a mother spacecraft that would fly by Comet 9P/Tempel 1 and a daughter spacecraft that would be deliberately crashed into the comet nucleus. The mother spacecraft would take images of the impact. The daughter spacecraft contained its own camera system to image the nucleus surface up to the moment of impact. To maximize the effect of the impact, the daughter spacecraft contained 360 kg (794 pounds) of solid copper. The predicted impact energy was equivalent to 4.8 tonnes of TNT.
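The quoted figure can be sanity-checked from the impactor's kinetic energy, E = 1/2 mv^2. The sketch below uses the 360 kg copper mass from the text; the ~10.3 km/s closing speed is an assumed illustrative value, not stated here.

```python
# Back-of-the-envelope check of the quoted impact energy.
TNT_JOULES_PER_TONNE = 4.184e9   # energy of 1 tonne of TNT, in joules

def kinetic_energy(mass_kg: float, speed_m_s: float) -> float:
    """Kinetic energy E = 1/2 * m * v**2, in joules."""
    return 0.5 * mass_kg * speed_m_s ** 2

impactor_mass = 360.0            # kg of copper, per the text
closing_speed = 10.3e3           # m/s; assumed encounter speed, not from the text

energy_j = kinetic_energy(impactor_mass, closing_speed)
energy_tnt = energy_j / TNT_JOULES_PER_TONNE
print(f"~{energy_tnt:.1f} tonnes of TNT equivalent")  # close to the quoted ~4.8
```

With these assumed numbers the result lands between 4 and 5 tonnes of TNT, consistent with the prediction in the text.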

The two spacecraft encountered Tempel 1 on July 4, 2005. The impactor produced the highest-resolution pictures of a nucleus surface ever, imaging details less than 10 metres (33 feet) in size. The mother spacecraft watched the explosion and saw a huge cloud of dust and gas emitted from the nucleus. One of the mission goals was to image the crater made by the explosion, but the dust cloud was so thick that the nucleus surface could not be seen through it. Because the mission was a flyby, the mother spacecraft could not wait around for the dust to clear.

Images of the Tempel 1 nucleus were very different from what had been seen before. The surface appeared to be old, with examples of geologic processes having occurred. There was evidence of dust flows across the nucleus surface and what appeared to be two modest-sized impact craters. There was evidence of material having been eroded away. For the first time, icy patches were discovered in some small areas of the nucleus surface.

For the first time, a mission was also able to measure the mass and density of a cometary nucleus. Typically, the nuclei are too small and their gravity too weak to affect the trajectory of the flyby spacecraft. The same was true for Tempel 1, but observations of the expanding dust cloud from the impact could be modeled so as to solve for the nucleus gravity. When combined with the volume of the nucleus as obtained from the camera images, it was shown that the Tempel 1 nucleus had a bulk density between 0.2 and 1.0 gram per cubic centimetre with a preferred value of 0.4 gram per cubic centimetre, less than half that of water ice. The measurement clearly confirmed ideas from telescopic research that comets were not very dense.
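The arithmetic behind that result is simply bulk density = mass / volume. The sketch below uses illustrative placeholder values for the gravity-derived mass and the image-derived size (neither figure is given in the text), chosen only to land near the preferred 0.4 gram per cubic centimetre.

```python
# Sketch of the bulk-density estimate for a cometary nucleus.
# Both input values are assumed placeholders, not mission measurements.
import math

nucleus_mass_kg = 4.5e13   # assumed: mass inferred from modeling the dust cloud
mean_radius_m = 3.0e3      # assumed: effective radius from the camera images

# Treat the nucleus as a sphere of the effective radius.
volume_m3 = (4.0 / 3.0) * math.pi * mean_radius_m ** 3

# Convert kg/m^3 to g/cm^3: multiply mass by 1e3 (g), volume by 1e6 (cm^3).
density_g_cm3 = (nucleus_mass_kg * 1e3) / (volume_m3 * 1e6)
print(f"bulk density ~ {density_g_cm3:.2f} g/cm^3")
```

Any mass/volume pair in the quoted ranges gives a density well below that of solid water ice, which is the point of the measurement.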

After the great success of Stardust and Deep Impact, NASA had additional plans for the spacecraft. Stardust was retargeted to go to Tempel 1 and image the crater from the Deep Impact explosion as well as more of the nucleus surface not seen on the first flyby. Deep Impact was retargeted to fly past 103P/Hartley 2, a small but very active comet.

Deep Impact, in its postimpact EPOXI mission, flew past Comet Hartley 2 on November 4, 2010. It imaged a small nucleus about 2.3 km (1.4 miles) in length and 0.9 km (0.6 mile) wide. As with Halley and Borrelly, the nucleus appeared to be two bodies stuck together, each having rough terrain but covered with very fine, smooth material at the neck where they came together. The most amazing result was that the smaller of the two bodies making up the nucleus was far more active than the larger one. The activity on the smaller body appeared to be driven by CO2 sublimation, an unexpected result, given that short-period comets are expected to lose their near-surface CO2 early during their many passages close to the Sun. The other half of the nucleus was far less active and only showed evidence of water ice sublimation. The active half of the comet also appeared to be flinging baseball- to basketball-sized chunks of water ice into the coma, further enhancing the gas production from the comet as they sublimated away.

The EPOXI images also showed that the nucleus was not rotating smoothly but was in complex rotation, a state where the comet nucleus rotates but the direction of the rotation pole precesses rapidly, drawing a large circle on the sky. Hartley 2 was the first encountered comet to exhibit complex rotation. It was likely driven by the very high activity from the smaller half of the nucleus, putting large torques on the nucleus rotation.

Stardust-NExT (New Exploration of Tempel 1) flew past Tempel 1 on February 14, 2011, and it imaged the spot where the Deep Impact daughter spacecraft had struck the nucleus. Some scientists believed that they saw evidence of a crater about 150 metres (500 feet) in diameter, but other scientists looked at the same images and saw no clear evidence of a crater. Some of the ambiguity was due to the fact that the Stardust camera was not as sharp as the Deep Impact cameras, and some of it was also due to the fact that sunlight was illuminating the nucleus from a different direction. The debate over whether there was a recognizable crater lingers on.

Among the new areas observed by Stardust-NExT there was further evidence of geologic processes, including layered terrains. Using stereographic imaging, the scientists traced dust jets observed in the coma back to the nucleus surface, and they appeared to originate from some of the layered terrain. Again, the resolution of the images was not good enough to understand why the jets were coming from that area.

In 2004 ESA launched Rosetta (named after the Rosetta Stone, which had unlocked the secret of Egyptian hieroglyphics) on a trajectory to Comet 67P/Churyumov-Gerasimenko (67P). Rendezvous with 67P took place on August 6, 2014. Along the way, Rosetta successfully flew by the asteroids 2849 Steins and 21 Lutetia and obtained considerable scientific data. Rosetta uses 11 scientific instruments to study the nucleus, coma, and solar wind interaction. Unlike previous comet missions, Rosetta will orbit the nucleus until December 2015, providing a complete view of the comet as activity begins, reaches a maximum at perihelion, and then wanes. Rosetta carried a lander called Philae that touched down on the nucleus surface on November 12, 2014. Philae drilled into the nucleus surface to collect samples of the nucleus and analyze them in situ. As the first mission to orbit and land on a cometary nucleus, Rosetta is expected to answer many questions about the sources of cometary activity.

The rest is here:

Comet | astronomy | Britannica.com

Cryptocurrency News: Looking Past the Bithumb Crypto Hack

Another Crypto Hack Derails Recovery
Since our last report, hackers broke into yet another cryptocurrency exchange. This time the target was Bithumb, a Korean exchange known for high-flying prices and ultra-active traders.

While the hackers made off with approximately $31.5 million in funds, the exchange is working with relevant authorities to return the stolen tokens to their respective owners. In the event that some funds are still missing, the exchange will cover the losses. (Source: "Bithumb Working With Other Crypto Exchanges to Recover Hacked Funds.")

The post Cryptocurrency News: Looking Past the Bithumb Crypto Hack appeared first on Profit Confidential.


What are quantum computers and how do they work? WIRED …

Google, IBM and a handful of startups are racing to create the next generation of supercomputers. Quantum computers, if they ever get off the ground, will help us solve problems, like modelling complex chemical processes, that our existing computers can't even scratch the surface of.

But the quantum future isn't going to come easily, and there's no knowing what it'll look like when it does arrive. At the moment, companies and researchers are using a handful of different approaches to try and build the most powerful computers the world has ever seen. Here's everything you need to know about the coming quantum revolution.

Quantum computing takes advantage of the strange ability of subatomic particles to exist in more than one state at any time. Due to the way the tiniest of particles behave, operations can be done much more quickly and use less energy than classical computers.

In classical computing, a bit is a single piece of information that can exist in two states: 1 or 0. Quantum computing uses quantum bits, or "qubits," instead. These are quantum systems with two states. However, unlike a usual bit, they can store much more information than just 1 or 0, because they can exist in any superposition of these values.

"The difference between classical bits and qubits is that we can also prepare qubits in a quantum superposition of 0 and 1 and create nontrivial correlated states of a number of qubits, so-called 'entangled states'," says Alexey Fedorov, a physicist at the Moscow Institute of Physics and Technology.

A qubit can be thought of as an imaginary sphere. Whereas a classical bit can be in only two states, at either of the two poles of the sphere, a qubit can be any point on the sphere. This means a computer using these bits can store a huge amount more information using less energy than a classical computer.
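A minimal way to make this concrete is to represent a qubit by a normalized pair of complex amplitudes (alpha, beta), so the state is alpha|0> + beta|1>: the poles of the sphere are the classical states, and every other point is a superposition. This is an illustrative sketch (it ignores global phase, which the sphere picture also discards).

```python
# A qubit as two complex amplitudes: measuring yields 0 with probability
# |alpha|^2 and 1 with probability |beta|^2 (the Born rule).
import math

def measure_probs(alpha: complex, beta: complex) -> tuple:
    """Return (P(0), P(1)) for the state alpha|0> + beta|1>."""
    norm = abs(alpha) ** 2 + abs(beta) ** 2   # renormalize defensively
    return (abs(alpha) ** 2 / norm, abs(beta) ** 2 / norm)

# A pole of the sphere: the classical state |0>.
print(measure_probs(1, 0))                    # (1.0, 0.0)

# An equal superposition, a point on the sphere's "equator".
p0, p1 = measure_probs(1 / math.sqrt(2), 1 / math.sqrt(2))
print(round(p0, 3), round(p1, 3))             # 0.5 0.5
```

Entangled states of several qubits extend this idea: n qubits need 2**n amplitudes, which is where the storage advantage over classical bits comes from.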

Until recently, it seemed like Google was leading the pack when it came to creating a quantum computer that could surpass the abilities of conventional computers. In a Nature article published in March 2017, the search giant set out ambitious plans to commercialise quantum technology in the next five years. Shortly after that, Google said it intended to achieve something it's calling "quantum supremacy" with a 49-qubit computer by the end of 2017.

Now, "quantum supremacy," which roughly refers to the point where a quantum computer can crunch sums that a conventional computer couldn't hope to simulate, isn't exactly a widely accepted term within the quantum community. Those sceptical of Google's quantum project, or at least the way it talks about quantum computing, argue that supremacy is essentially an arbitrary goal set by Google to make it look like it's making strides in quantum when really it's just meeting self-imposed targets.

Whether it's an arbitrary goal or not, Google was pipped to the supremacy post by IBM in November 2017, when the company announced it had built a 50-qubit quantum computer. Even that, however, was far from stable, as the system could only hold its quantum state for 90 microseconds, a record, but far from the times needed to make quantum computing practically viable. Just because IBM has built a 50-qubit system, however, doesn't necessarily mean they've cracked supremacy, and it definitely doesn't mean that they've created a quantum computer that is anywhere near ready for practical use.

Where IBM has gone further than Google, however, is making quantum computers commercially available. Since 2016, it has offered researchers the chance to run experiments on a five-qubit quantum computer via the cloud and at the end of 2017 started making its 20-qubit system available online too.

But quantum computing is by no means a two-horse race. Californian startup Rigetti is focusing on the stability of its own systems rather than just the number of qubits, and it could be the first to build a quantum computer that people can actually use. D-Wave, a company based in Vancouver, Canada, has already created what it is calling a 2,000-qubit system, although many researchers don't consider the D-Wave systems to be true quantum computers. Intel, too, has skin in the game. In February 2018 the company announced that it had found a way of fabricating quantum chips from silicon, which would make it much easier to produce chips using existing manufacturing methods.

Quantum computers operate on completely different principles to existing computers, which makes them really well suited to solving particular mathematical problems, like factoring very large numbers into their prime components. Since such prime factorizations are so important in cryptography, it's likely that quantum computers would quickly be able to crack many of the systems that keep our online information secure. Because of these risks, researchers are already trying to develop technology that is resistant to quantum hacking, and on the flip side of that, it's possible that quantum-based cryptographic systems would be much more secure than their conventional analogues.
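The asymmetry behind that risk is easy to demonstrate: multiplying two primes is cheap, while recovering them classically is not, since trial division needs on the order of sqrt(n) steps. A hypothetical toy sketch (real cryptographic moduli are hundreds of digits long, far beyond this approach, which is precisely why quantum factoring algorithms are such a threat):

```python
# Toy illustration of the easy/hard asymmetry that cryptography relies on.
def trial_factor(n: int) -> int:
    """Return the smallest prime factor of n by classical trial division."""
    d = 2
    while d * d <= n:          # only need to search up to sqrt(n)
        if n % d == 0:
            return d
        d += 1
    return n                   # n itself is prime

p, q = 104729, 1299709         # two primes
n = p * q                      # the easy direction: one multiplication

# The hard direction: ~sqrt(n) divisions just for this small example.
assert trial_factor(n) == p
print(f"recovered factor {trial_factor(n)} of {n}")
```

Shor's algorithm on a large quantum computer would factor such moduli in polynomial time, which is what would break RSA-style systems.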

Researchers are also excited about the prospect of using quantum computers to model complicated chemical reactions, a task that conventional supercomputers aren't very good at. In July 2016, Google engineers used a quantum device to simulate a hydrogen molecule for the first time, and since then IBM has managed to model the behaviour of even more complex molecules. Eventually, researchers hope they'll be able to use quantum simulations to design entirely new molecules for use in medicine. But the holy grail for quantum chemists is to be able to model the Haber-Bosch process, a way of artificially producing ammonia that is still relatively inefficient. Researchers are hoping that if they can use quantum mechanics to work out what's going on inside that reaction, they could discover new ways to make the process much more efficient.


There Could be a New Crypto-King: Ethereum – Roger Ver …

Roger Ver, a well-known and frequently debate-sparking figure among crypto enthusiasts and the wider community, has suggested that Ethereum could overtake Bitcoin's throne by market capitalization and lead the board of coins.

As one of the foremost investors in BTC when it launched almost 9 years ago, Roger Ver made a name for himself, and a great deal of money, in the years since. He is also a vocal supporter of BCH [Bitcoin Cash], a hard fork of Bitcoin that has been the subject of many exchanges of words among analysts and individuals.

According to Ver, a trader who wants to reach maximum impact needs to embrace diversification. He believes that, as a result, BTC will very soon no longer stand above all others, with Ethereum the next big thing to lead.

In a way, he is right, as the recent surge in the price of Ether has brought only positivity to the currency. Many innovations are also coming to the Ethereum platform, as are various network improvements to bolster its reliability.

Ver believes that it would take only one major push, and one more doubling of its price, for Ethereum to cross the valuation of Bitcoin. On the other hand, Bitcoin is not stagnant either, and it will be interesting to watch the rivalry between Ethereum and Bitcoin in the times ahead.

Despite Ethereum being second by market cap, Roger Ver claimed that it has already flown past Bitcoin in many other respects, such as being cheaper to use and to conduct transactions with, in sharp contrast to Bitcoin.

Ver also stated that the Ethereum developers are considerably friendlier and more adaptive than Bitcoin's. Put together, this amounts to a strong challenge from Ethereum to Bitcoin for the throne, making it even more exciting to see what the future holds for the crypto ecosystem.


voluntaryist.com – Roger Ver

November 12th, 2012

My road to becoming a voluntaryist began in junior high when I found a copy of the book SOCIALISM by Ludwig von Mises. At the time I hadn't given politics much thought and was a typical statist who assumed that there wasn't any reason to limit the State's power if it was being used to help people, but I also had a vague idea that Americans were opposed to socialism.

When I initially started reading SOCIALISM I thought it would be a pro-socialist book, but that it would be a good idea for me to hear the other side of the argument. By the time I finished it, I had learned that it is an impossibility for the government to centrally plan an economy as efficiently as the free market. After this book, I was inspired to read other books on economics by Ludwig von Mises, Adam Smith, Frédéric Bastiat, Leonard Read, Henry Hazlitt, Friedrich Hayek, Milton Friedman, and just about anything else I could order from Laissez-Faire Books, since this was before the internet was widespread. I learned that prices transmit the information required to most effectively allocate resources and that government intervention in the economy is preventing the world from being as wealthy as it should be. The more I read, the more appalled I became at the economic ignorance displayed by politicians and governments around the world. I became frustrated because anyone who spends the time to study economics can learn that nearly everything the government does makes the world a poorer place and that people, especially the poor, would be much better off if everyone were simply allowed to do anything that is peaceful.

At this point I had a firm grasp of the economic benefits brought to all by the free market, but it wasn't until I found Murray Rothbard's works that I started to think about the moral case for freedom. I devoured all of Rothbard's books and was persuaded by the logic of his arguments. I remember being almost afraid to read such powerful truths. In all my years of schooling, no one before Rothbard had ever pointed out that taxation is the moral equivalent of theft, and the military draft is the moral equivalent of kidnapping and slavery. It shattered my remaining hopes that the State could be morally justified. For the first time I saw them for the criminal band of thieves, slave masters, and murderers that they are. My life has never been the same since.

Up to this point everything I had learned seemed ideological and somewhat abstract, but I felt the need to point out these truths to others. To help spread the ideas of liberty at the age of twenty, in the year 2000, I became a Libertarian candidate for California State Assembly. I vowed that if I were elected I would not accept any salary, considering the money would necessarily have been taken from others by force in the form of taxation. I also promised to cut as many taxes and repeal as many laws as I could.

As part of the election process I was invited to participate in a debate at San Jose State University against the Republican and Democrat candidates. In the debate, I argued that taxation is theft, the war on drugs is immoral, and that the ATF are a bunch of jack-booted thugs and murderers, in memory of the people they slaughtered in Waco, Texas. Unbeknownst to me at the time, there were several plainclothes ATF agents in the audience who became very upset with the things I was saying. They began looking into my background in an attempt to find dirt on me. I had already started a successful online business selling various computer components. In addition to computer parts, I, along with dozens of other resellers across the country, including Cabela's, was selling a product called the Pest Control Report 2000. It was basically a firecracker used by farmers to scare deer and birds away from their corn fields. While everyone else, including the manufacturer, was simply asked to stop selling them, I became the only person in the nation to be prosecuted.

The reasoning for the prosecution became crystal clear after a meeting with the US prosecuting attorney and the undercover ATF agents from the debate. In the meeting, my attorney told the prosecutor that selling store-bought firecrackers on eBay isn't a big deal and that we could pay a fine and do some community service to be done with everything. When the prosecutor agreed that that sounded reasonable, one of the ATF agents pounded his hand on the table and shouted, "But you didn't hear the things that he said!" This summed up very clearly that they were angry about the things that I had said, not the things that I had done.

After being told by the US attorney that I would be sent to jail for seven or eight years if I took my case to trial, I signed a plea agreement. At the sentencing the judge asked me if anyone had threatened or coerced me in any way to sign the plea agreement. When I said, "Yes, absolutely," the judge's eyes became very wide and he asked, "What do you mean?" I explained that the US attorney told me that he would send me to jail for seven or eight years if I didn't sign the plea agreement. The judge responded that that was not what he was asking about, so I replied that I must not understand what it means to be threatened or coerced. The judge then proceeded to lecture me extensively on politics. He carried on about why government is so important, how taxes are the price we pay for a civilized society, and how government is wonderful in general. He summed up his lecture by telling me, "I don't want you to think that your political views have anything to do with why you are here today," and then sentenced me to serve ten months in federal prison.

After my release from Lompoc Federal Penitentiary I had to deal with three years of lies, insults, threats, and general harassment by the US Federal probation department. I moved to Japan on the very day my probation finished.

Currently, I am working full time to make the world a better, less violent place by promoting the use of Bitcoin. Bitcoin totally strips away the State's control over money. It takes away the vast majority of its power to tax, regulate, or control the economy in any way. If you care about liberty, the nonaggression principle, or economic freedom in general, you should do everything you can to use Bitcoin as often as possible in your daily life.

[Roger Ver was born and raised in Silicon Valley and now resides in Tokyo. He is the CEO of MemoryDealers.com and directly employs thirty people in several countries around the world. Roger is also an investor in numerous Bitcoin startups. He spends his free time studying economics, moral philosophy, Bitcoin, and Brazilian Jiu-Jitsu. This article first appeared on the website dailyanarchist.com on November 12, 2012.]
