
Universal Basic Income from a libertarian perspective – A …

In this article I'm going to consider Universal Basic Income (UBI) from a libertarian perspective, focusing mainly on analysis of the labour market, rather than the much more common libertarian "small state" argument in favour of UBI.

The crux of the article

The current labour market is terribly unfree as it is, because it relies on coercion, workfare, sanctions, draconian anti-labour legislation and the like.

The introduction of Universal Basic Income would create a much freer labour market (no more threat of destitution, sanctions or forced labour schemes, and much freer labour contracts between employers and employees), but this increased freedom for the majority would have to be paid for with measures to control inflation (which would rapidly destroy the project if left unchecked).

The reduction in aggression against the majority of workers would outweigh the infringements on the current rights that rentiers have to exploit access to basic commodities in order to extract profit for themselves (which it can be argued is another form of aggression against the majority anyway).

What is libertarianism?

The origins of libertarianism can be traced to the 18th and 19th Century anarchist and socialist movements in Europe; however, the term was later embraced by, and integrated into, the American right-wing political tradition.

One of the most famous left-libertarians was the American Henry George (1839-1897), who opposed rentierism and argued in favour of Land Value Tax. Many Georgists have argued that the proceeds from Land Value Tax should be used to fund a citizens' income, or Universal Basic Income.

Left-Libertarianism is not as famous as its rabid, Ayn Rand-inspired American cousin, but it is an increasingly popular political stance, and one which I personally embrace.

What is Universal Basic Income? [Main article]

If you're not fully versed on what Universal Basic Income (UBI) is, I suggest that you read my introductory article before coming back to finish this one. If you haven't got time for that, or you are reasonably clued up about what UBI is, I'll just provide a short summary.

UBI is an unconditional payment that is made to every qualifying individual within an economy. There is no means testing at all, other than determination that the individual is eligible (a citizen in the economy for example). Ideally the UBI is set at a rate which is sufficient to ensure that all recipients have access to basic human necessities (a home, sufficient food and water, basic energy needs ...).

This concept is generally appealing to libertarians on a basic level because it dispenses with almost all forms of state means testing, meaning a smaller, and less obtrusive state. In this article I'm not going to focus on this compelling "smaller state" argument for UBI, in favour of considering the libertarian case for UBI from a labour market perspective.

What makes the current labour market so unfree?

Labour is a fundamentally important factor in any economy. Orthodox economic theories tend to treat labour as if it were just another kind of basic commodity; however, if it is to be referred to as a commodity at all, it must be recognised as a very special and distinct form of commodity, one that can be created at will, and which takes myriad potential forms.

The neoclassical orthodoxy fails to treat the labour market as utterly different to other commodities markets, and it also fails to recognise the unequal nature of the market in labour, where the employer is at a huge advantage over the employee. There are innumerable factors that put the buyer at an advantage over the seller in the labour market, but perhaps the most significant is the creation of false abundance via political policies aimed at retaining a constant pool of unemployment: the "reserve army of labour" as Marx described it in the 19th century, or the "price worth paying" as it was described by former Tory Chancellor Norman Lamont in 1991.

In 1918 Bertrand Russell argued against this inequality in the labour market, proposing a kind of basic income so "the dread of unemployment and loss of livelihood will no longer haunt men like a nightmare".

The constant threat of destitution is a powerful means by which employers can drive down wages and working conditions, putting the worker at an unfair disadvantage. If the scale of unemployment has been brought about via deliberate economic policies based on the equilibrium rate of unemployment, this is a clear case of the state trampling all over the libertarian non-aggression principle. If government policies result in your labour being coerced from you at a lower rate than you would otherwise be willing to sell it for, solely because you fear destitution if you don't work for low wages, you are suffering aggression at the hands of the state. The spectre of unemployment and impoverishment created by economic policies aimed at maintaining "extra capacity" in the labour market is not the only current example of aggressive coercion in the labour market.

Workfare blatantly violates the libertarian non-aggression principle [Main article]

One of the starkest examples of a labour policy which violates the libertarian non-aggression principle is the kind of mandatory unpaid labour schemes for the unemployed collectively termed "Workfare".

These schemes coerce the unemployed, under threat of absolute destitution, into giving up their labour for free, often to highly profitable corporations.

It's bad enough that the state uses the threat of destitution (via welfare sanctions) to undermine the aggregate value of labour; that government ministers openly declare their belief that the state has "a right" to extract the labour of the individual for no wage at all demonstrates an extremely illiberal attitude towards the labour rights of the individual.

These mandatory unpaid "Workfare" labour schemes demonstrate beyond doubt that the ministers involved in administering these schemes believe that the labour of the individual actually belongs to the state. If your government acts as if it believes that your labour is a commodity which belongs to the state, and which can be extracted and distributed for free to favoured corporations, the labour market isn't just unfree, it is grotesquely authoritarian.

How would UBI make the labour market freer?

If every individual received an unconditional basic income sufficient to meet their fundamental human needs (housing, food and water, energy, health care ...), the threat of destitution would no longer force people into accepting wages and working conditions they deem unfair.

An unconditional basic income would also render totally unworkable the draconian "Workfare" labour extraction schemes enforced via welfare sanctions. If the individual has a right to an unconditional subsistence income, the state loses the power to coerce and intimidate the individual into giving up their labour for free with threats of destitution, starvation and homelessness.

Even if we accept the wrong-headed idea that labour is a basic commodity with a defined value (the national minimum wage for example), we have to accept that coerced unpaid labour represents theft, and a clear violation of the libertarian non-aggression principle. Universal Basic Income would render this form of theft by the state totally unworkable, because the state would have no right to revoke the unconditional incomes of those that won't comply with their unpaid labour extraction schemes.

How a freer labour market could benefit society and the economy

I've explained how UBI could benefit society and the economy in the primer article on the subject, so I'll try to be concise here.

The freer labour market that UBI, if administered correctly, would create would benefit society by alleviating extreme poverty, leading to a fall in poverty-related social problems such as crime and poverty-related ill-health.

Another benefit to society would be that the existence of UBI would push up the cost of employing people to do undesirable jobs (disgusting, dangerous or debilitating work), meaning that in turn there would be much greater financial incentives for companies to invest in technology to automate such work. The development of technology to eliminate undesirable jobs would benefit society and the economy (fewer people working in undesirable jobs, greater demand for high-tech solutions).

UBI trials have shown that people generally don't stop working and laze about once their basic necessities are provided; in fact, UBI works as an economic stimulus, because people have more time to invest in starting their own businesses, and the public has more money to spend on consumption. The only demographics to substantially reduce the hours they work are mothers with young children and young people in education, and it is arguable that these reductions are actually beneficial in socio-economic terms.

Why is controlling inflation so important?

Controlling price inflation would be absolutely crucial to the success of any Universal Basic Income project, because without measures to stop the inflation of basic necessities (rent, utilities, food ...), the gains that UBI would provide would soon be eroded away as price rises diminish the value of the basic income payment until it is no longer sufficient to cover the basic costs of subsistence.

If inflation is allowed to run rampant, the benefits of Universal Basic Income would soon be transferred from the ordinary citizen that receives it, to the rentiers that take advantage by hiking the prices they charge for the provision of basic commodities and services.

Controlling Rentierism

If the rentiers are allowed free rein to profiteer from basic income provision, they will simply inflate the prices they charge for the necessities of life (rent, transport, childcare, energy) in order to soak up the entire value of the basic income. If the parasitic behaviour of rentiers is not controlled, all of the socio-economic benefits would soon be siphoned off into the bank accounts of the most ruthlessly self-interested rent seekers. Essentially, Universal Basic Income would turn into a government subsidisation scheme for the most ruthlessly self-interested, which is precisely the kind of system we have now, and one of the main reasons people have been proposing the introduction of UBI in the first place.

The only practical way to stop this kind of rent seeking behaviour from destroying UBI would be to introduce some form of market regulation to prevent landlords, utilities companies, childcare providers and the like from massively inflating their prices in order to soak up the economic benefit of UBI for themselves.

There's no such thing as a perfectly free-market economy

Anyone that believes that there is such a thing as a perfectly free market is living in the same cloud-cuckoo land as those that believe a totally state controlled economy is a possibility.

What is up for debate is how more market freedom can be created. The orthodox neoliberal would argue that greater market freedom is produced through deregulation, but the huge growth in inequality, the ever increasing size of economic crises and the rise of vast "too big to fail" oligopolies since the neoliberal craze of privatisation and deregulation became the economic orthodoxy in the 1980s suggest that they are wrong. Deregulation and privatisation have increased the freedoms of corporations and the super-rich at the expense of the majority, who have seen their share of national income eroded away dramatically since the late 1970s despite rising productivity.

Others might argue that the best way to stimulate market freedom is through the creation of a "fair market", through carefully planned market regulation. Rules to prevent (and properly punish) anti-competitive practices such as price rigging, formation of oligopolies, monopolies and cartels, financial doping, insider trading, political patronage, front running, information asymmetry, dividing territories, corruption and outright fraud, would create a freer and safer market for individuals and small businesses, which would increase competition and efficiency, but at the cost of the freedoms of those that currently profit from the use of anti-competitive practices.

The same kind of debate can be had over the introduction of rules (rent caps, inflation controls on basic commodities and services ...) to prevent the rentier class from extracting the benefit of Universal Basic Income for themselves. The infringement of their "right" to gouge as much profit as possible out of basic commodities and services would have to be weighed against the greater economic freedoms afforded to the majority.

Essentially it boils down to the question of which is more important: freeing up the currently unfree labour market, or the continuation of a free market in the provision of fundamental commodities and services?

Providing more freedom in which of these markets would create the biggest increase in aggregate freedom, and which would be most compliant with the libertarian non-aggression principle? In my view the answer is obvious. The freedom of the majority outweighs the freedom of the minority.

Other libertarian arguments for UBI aside from the labour market analysis

Before I conclude I'd like to state that this labour market analysis is far from the only libertarian argument for the introduction of Universal Basic Income.

Other arguments include the common "small state" argument: universal welfare would reduce the size of the state by reducing the number of its functions. It can also be argued that, since there would be no means testing, UBI would provide greater freedom from intrusion by the state into the private life of the individual.

Perhaps the most compelling libertarian argument in favour of Universal Basic Income is that freedom from destitution is itself the most important liberty, because without it the individual is often left facing either the suffering of destitution or the suffering of wage slavery.

Conclusion

Labour is a fundamental element of any economy (be it capitalist, state socialist or anywhere in between), and an unfree market in labour is fundamentally incompatible with libertarianism.

If the deliberate economic policies of the political establishment in your country mean that your labour can be coerced from you at a lower rate than you would otherwise be willing to sell it for, simply because of the threat of absolute destitution, this is clearly an act of aggression on the part of the establishment.

If your government acts as if it believes that your labour is a commodity which actually belongs to the state, and can be extracted from you for no recompense at all, this is an even more vile example of state aggression.

The introduction of Universal Basic Income would put an end to both of these forms of labour market aggression, but in order for it to work, measures to prevent rentiers from profiteering by inflating the prices they charge for basic human necessities would need to be introduced. Thus the debate is not over whether UBI is compatible with libertarianism (it clearly is), but over whether the benefits from the greater freedoms in the labour market would outweigh the losses in the freedom of rentiers to profiteer from the provision of basic human needs, losses which would be necessary in order to prevent the whole project collapsing into inflationary chaos.

In my view the freedoms of the majority should outweigh the freedoms of the minority, and in any case, the current freedom to profiteer from the provision of basic human necessities that the rentier class enjoy can actually be viewed as a form of aggression in its own right. Why should the profits of the minority take precedence over the basic human needs of the majority?


The Condition of Transgender Women … – Libertarianism.org

May 12, 2015 columns

Libertarians should oppose the state's victimization of transgender people and help build a society safe for a diverse range of gender identities, argues Novak.

On all reasonable accounts, libertarianism should greatly appeal to transgender women.

Most fundamentally, libertarianism represents a set of philosophical dispositions firmly grounded in affirming the primacy of individual liberties. In his recent book, The Libertarian Mind, David Boaz powerfully describes the broad parameters of libertarian adherence to the freedom of the individual human being in the following way:

the basic unit of social analysis is the individual. It's hard to imagine how it could be anything else. Individuals are, in all cases, the source and foundation of creativity, activity, and society. Only individuals can think, love, pursue projects, act. Groups don't have plans or intentions. Only individuals are capable of choice, in the sense of anticipating the outcomes of alternative courses of action and weighing the consequences. Individuals, of course, often create and deliberate in groups, but it is the individual mind that ultimately makes choices. Most important, only individuals can take responsibility for their actions.

Irrespective of whether the key argumentative basis for individualism stresses self-ownership of body, mind, and soul (Locke), or the virtues of diversity and flourishing associated with the development of the person (von Humboldt and Mill), libertarian philosophy should easily accommodate the aspirations and prerogatives of transgender women, and all other people subscribing to diverse gender identities, in seeking to live their lives as they see fit.

Further, libertarian acceptance of transwomen, transmen, and genderqueer people and, indeed, cisgender people (readers unfamiliar with the meaning of these, and similar, terms depicting gender diversities may wish to read this glossary) is not contingent upon whether there are biological or non-biological bases of gender identity.

Respect for transwomen, and for others who wish to self-identify and express diverse gender identities in numerous ways, should also not be contingent upon the numerical strength of varied groupings within society. Given the stigma attached to gender diversity, there remain limitations in our understandings of the exact numbers of transgender people; however, some surveys suggest that less than one per cent of the American adult population identify as transgender.

To put it simply, each and every individual should be free to choose, to act, and to be, regardless of reason or of numbers, for as long as the equal freedom of others to do the same is respected.

Aside from celebrating individual liberties, libertarianism ought to be more appealing to transwomen, and everyone for that matter, because of its principled antipathy, both in historical and contemporary terms, toward the exhaustion of individual freedoms by the state. Indeed, for a very long time, and certainly to this day, governments have demonstrated overt hostility towards transgender people, seeking to undermine their interests in pursuing their own lives in a dignified manner.

Attention has been increasingly drawn to the often highly detrimental effects of the police-prison industrial complex upon minority groupings, including transgender and other gender-diverse people. In a recent contribution, Nathan Goodman noted the elevated levels of violence against incarcerated transgender people, particularly transwomen, arising from prison policies that house transwomen with cisgender men, a harmful practice compounded by instances of sexual abuse and physical violence perpetrated against inmates by corrections staff.

The treatment of transwoman Chelsea Manning, sentenced to be held captive by the state for 35 years on account of whistleblowing about US war crimes, is a case in point. Although her state captors recently afforded Manning hormonal treatment, they had denied her the appropriate medications for years in an obvious act of psychological torment. Chelsea Manning still remains incarcerated, in the presence of male prisoners, in spite of her self-identification as a woman.

A disproportionate lack of access to formal labour markets, often as a result of discriminatory treatment by employers, can often lead transgender people into the forced situation (rather than the heroic, anti-statist choice lauded by some libertarians) of attaining incomes through the shadow economy. Ongoing state detection of activities, such as the provision of prostitution services and the sale of illicit drugs, where these are not legalised, can fairly readily bring forth instances in which transwomen come into contact with police and other law-enforcement agents, with the harassment, intimidation, and violence this all too often entails.

Transwomen have even been victimised by police profiling, as was the case for sex worker advocate Monica Jones, who was charged and found guilty of "manifesting prostitution", or, as it has infamously become known, "walking while trans", during an anti-prostitution sting in Phoenix, Arizona. Incidentally, Ms Jones was deported from Australia, and subjected to sensationalist media coverage, on account of her Phoenix conviction, which was later overturned on appeal.

In many countries around the world, including the United States, political institutions continue to suppress diverse gender identities by refusing to enable individuals to easily alter gender markers on identity documents. Altering gender markers (that is, conventionally, male or female) on official documentation is, for the largest part, entirely conditional on people having undertaken invasive, typically irreversible and almost always expensive gender affirmation surgical processes, or at least hormone therapies with equally significant physiological implications.

The reality is that for many transwomen, at least at a given point in their lifetimes, gender markers on government identification documents are inconsistent with the lived gender under which they undertake their daily routines and responsibilities, and this can give rise to unwarranted economic and social discrimination and exclusion. For example, employers usually require job applicants to furnish government-provided documents as proof of identity, and there is much anecdotal evidence suggesting they are likely to turn away prospective transgender employees when the gender markers displayed on identity documents appear not to accord with the everyday lived experiences (including presentation) of the applicant. As discussed by Dean Spade, the refusal of governments to enable individuals to easily alter gender markers on identity documents rests on the myth that transgender people do not exist. When ID issuing agencies refuse to change the gender marker on an ID, they are operating on the idea that birth-assigned gender should be permanent and no accommodation is necessary for those for whom such an assignment does not match their lived experience of gender.

These and other policies enacted, and enforced, by governments malevolently fit together to violate the liberties and rights of transwomen and other gender-diverse individuals, just like many other regulatory and fiscal policies are prone to do. And it is naive to conceive that certain legislative edicts purportedly designed to defend the interests of transgender people (and gays, lesbians, bisexuals, or intersex people, for that matter), such as anti-discrimination or hate-crime laws, do much to foster greater acceptance, respect, and tolerance for minorities.

Statist prejudice against transwomen in particular reinforces, and is reinforced by, complex and widespread forms of non-state discrimination, harassment, and violence. Decentralised efforts to maintain gender conformism, perpetrated through vigilante acts by individuals or groups, and underlined by derogatory stereotypes of gender variance in popular film, literature, and music, induce among transwomen and gender-diverse people limitations of movement, social isolation, and the delaying or deterrence of gender self-expression, and, at their worst, can lead to vulnerable, often young, people ending their own lives.

From a philosophical standpoint which behoves interactions among individuals imbibing the spirit of "anything that's peaceful", libertarians can, and indeed ought to, play a very important role in rebuking the misguided and highly damaging acts by cisgender supremacists attempting to prevent individuals identifying and expressing their diverse gender identities. Redressing non-state sources of transphobia through instances of bottom-up social activism, including appealing to the common humanity that transwomen share with other people, would represent a befitting way to respond to Leelah Alcorn's plea, in her suicide note, to "fix society".

Indeed, we should fervently celebrate the emergent social order which arises when transwomen, transmen, genderqueer, and other gender-diverse people free themselves from conventions and norms most amenable to cisgender existences, as Nick Cowen explained:

Polycentric orders offer choice: whether identifying as straight, gay, male, female or anything else. In this context, queer individuals take the role of social entrepreneurs, combining ways of living in new ways. The more successful or aesthetically-engaging lifestyles are further developed by others. Popular identities remain common but are not enforced through violence or legislation. Alternatives to existing sexualities are allowed to flourish. People are not bound by one abstract order, statutorily enforced, but are allowed to develop new orders that use and display our personalities in different ways.

As amply illustrated through its long and distinguished history, the philosophy of libertarianism represents a broader cast of mind seeking to enhance the life of each individual person, and to extend to them maximum respect for their dignity, freedom, and individuality. Clearly, this must incorporate the dignity, freedom, and individuality inherent in the ways in which people identify with, and express, their gender identity, if libertarianism is to maintain relevance and meaning to the lives of each and every human being.

Mikayla Novak contributes to a group column on libertarian feminism along with Sharon Presley, Elizabeth Nolan Brown, and Helen Dale.

Novak is a Senior Fellow at the Institute of Public Affairs, an Australian free-market think tank, and has a PhD in economics. She is interested in how libertarian feminism concerns relate to how market processes and civil societal actions satisfactorily accommodate individual women's preferences, in a variety of ways.


Artificial Intelligence – Minds & Machines Home

Stanford Encyclopedia of Philosophy

Artificial intelligence (AI) is the field devoted to building artificial animals (or at least artificial creatures that -- in suitable contexts -- appear to be animals) and, for many, artificial persons (or at least artificial creatures that -- in suitable contexts -- appear to be persons). Such goals immediately ensure that AI is a discipline of considerable interest to many philosophers, and this has been confirmed (e.g.) by the energetic attempt, on the part of numerous philosophers, to show that these goals are in fact un/attainable. On the constructive side, many of the core formalisms and techniques used in AI come out of, and are indeed still much used and refined in, philosophy: first-order logic, intensional logics suitable for the modeling of doxastic attitudes and deontic reasoning, inductive logic, probability theory and probabilistic reasoning, practical reasoning and planning, and so on. In light of this, some philosophers conduct AI research and development as philosophy.

In the present entry, the history of AI is briefly recounted, proposed definitions of the field are discussed, and an overview of the field is provided. In addition, both philosophical AI (AI pursued as and out of philosophy) and philosophy of AI are discussed, via examples of both. The entry ends with some speculative commentary regarding the future of AI.

The field of artificial intelligence (AI) officially started in 1956, launched by a small but now-famous DARPA-sponsored summer conference at Dartmouth College, in Hanover, New Hampshire. (The 50-year celebration of this conference, AI@50, was held in July 2006 at Dartmouth, with five of the original participants making it back. What happened at this historic conference figures in the final section of this entry.) Ten thinkers attended, including John McCarthy (who was working at Dartmouth in 1956), Claude Shannon, Marvin Minsky, Arthur Samuel, Trenchard More (apparently the lone note-taker at the original conference), Ray Solomonoff, Oliver Selfridge, Allen Newell, and Herbert Simon. From where we stand now, at the start of the new millennium, the Dartmouth conference is memorable for many reasons, including this pair: one, the term artificial intelligence was coined there (and has long been firmly entrenched, despite being disliked by some of the attendees, e.g., More); two, Newell and Simon revealed a program -- Logic Theorist (LT) -- agreed by the attendees (and, indeed, by nearly all those who learned of and about it soon after the conference) to be a remarkable achievement. LT was capable of proving elementary theorems in the propositional calculus.[1]

Though the term artificial intelligence made its advent at the 1956 conference, certainly the field of AI was in operation well before 1956. For example, in a famous Mind paper of 1950, Alan Turing argues that the question Can a machine think? (and here Turing is talking about standard computing machines: machines capable of computing only functions from the natural numbers (or pairs, triples, ... thereof) to the natural numbers that a Turing machine or equivalent can handle) should be replaced with the question Can a machine be linguistically indistinguishable from a human?. Specifically, he proposes a test, the Turing Test (TT) as it's now known. In the TT, a woman and a computer are sequestered in sealed rooms, and a human judge, in the dark as to which of the two rooms contains which contestant, asks questions by email (actually, by teletype, to use the original term) of the two. If, on the strength of returned answers, the judge can do no better than 50/50 when delivering a verdict as to which room houses which player, we say that the computer in question has passed the TT. Passing in this sense operationalizes linguistic indistinguishability. Later, we shall discuss the role that TT has played, and indeed continues to play, in attempts to define AI. At the moment, though, the point is that in his paper, Turing explicitly lays down the call for building machines that would provide an existence proof of an affirmative answer to his question. The call even includes a suggestion for how such construction should proceed. (He suggests that child machines be built, and that these machines could then gradually grow up on their own to learn to communicate in natural language at the level of adult humans. This suggestion has arguably been followed by Rodney Brooks and the philosopher Daniel Dennett in the Cog Project: (Dennett 1994). In addition, the Spielberg/Kubrick movie A.I. is at least in part a cinematic exploration of Turing's suggestion.) The TT continues to be at the heart of AI and discussions of its foundations, as confirmed by the appearance of (Moor 2003). In fact, the TT continues to be used to define the field, as in Nilsson's (1998) position, expressed in his textbook for the field, that AI simply is the field devoted to building an artifact able to negotiate this test.
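To make the mechanics of the test concrete, here is a minimal sketch of the judging protocol in Python. Everything in it is illustrative rather than drawn from Turing: the contestant functions are placeholder stubs, and the judge simply guesses at random, which is exactly the 50/50 baseline that a passing machine must hold a real judge to.

    import random

    # Placeholder contestants -- purely illustrative stand-ins, not real systems.
    def human_reply(question: str) -> str:
        return "A human-typed answer to: " + question

    def machine_reply(question: str) -> str:
        return "A machine-generated answer to: " + question

    def judge(transcripts) -> str:
        """A toy judge that guesses at random which room holds the machine.

        A real judge would inspect the transcripts; random guessing is the
        50/50 baseline the machine must hold the judge to in order to 'pass'.
        """
        return random.choice(["room_a", "room_b"])

    def run_trial(questions):
        # Randomly assign the two contestants to sealed 'rooms'.
        rooms = {"room_a": human_reply, "room_b": machine_reply}
        if random.random() < 0.5:
            rooms = {"room_a": machine_reply, "room_b": human_reply}
        transcripts = {room: [(q, reply(q)) for q in questions]
                       for room, reply in rooms.items()}
        verdict = judge(transcripts)
        return rooms[verdict] is machine_reply  # did the judge find the machine?

    # Over many trials, a passing machine keeps the judge's accuracy near 50%.
    trials = 1000
    accuracy = sum(run_trial(["What is a sonnet?"]) for _ in range(trials)) / trials
    print(f"judge accuracy: {accuracy:.2f}")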

Returning to the issue of the historical record, even if one bolsters the claim that AI started at the 1956 conference by adding the proviso that artificial intelligence refers to a nuts-and-bolts engineering pursuit (in which case Turing's philosophical discussion, despite calls for a child machine, wouldn't exactly count as AI per se), one must confront the fact that Turing, and indeed many predecessors, did attempt to build intelligent artifacts. In Turing's case, such building was surprisingly well-understood before the advent of programmable computers: Turing wrote a program for playing chess before there were computers to run such programs on, by slavishly following the code himself. He did this well before 1950, and long before Newell (1973) gave thought in print to the possibility of a sustained, serious attempt at building a good chess-playing computer.[2]

From the standpoint of philosophy, neither the 1956 conference, nor Turing's Mind paper, come close to marking the start of AI. This is easy enough to see. For example, Descartes proposed TT (not the TT by name, of course) long before Turing was born.[3] Here's the relevant passage:

At the moment, Descartes is certainly carrying the day.[4] Turing predicted that his test would be passed by 2000, but the fireworks-across-the-globe start of the new millennium has long since died down, and the most articulate of computers still can't meaningfully debate a sharp toddler. Moreover, while in certain focussed areas machines out-perform minds (IBM's famous Deep Blue prevailed in chess over Garry Kasparov, e.g.), minds have a (Cartesian) capacity for cultivating their expertise in virtually any sphere. (If it were announced to Deep Blue, or any current successor, that chess was no longer to be the game of choice, but rather a heretofore unplayed variant of chess, the machine would be trounced by human children of average intelligence having no chess expertise.) AI simply hasn't managed to create general intelligence; it hasn't even managed to produce an artifact indicating that eventually it will create such a thing.

But what if we consider the history of AI not from the standpoint of philosophy, but rather from the standpoint of the field with which, today, it is most closely connected? The reference here is to computer science. From this standpoint, does AI run back to well before Turing? Interestingly enough, the results are the same: we find that AI runs deep into the past, and has always had philosophy in its veins. This is true for the simple reason that computer science grew out of logic and probability theory, which in turn grew out of (and is still intertwined with) philosophy. Computer science, today, is shot through and through with logic; the two fields cannot be separated. This phenomenon has become an object of study unto itself (Halpern et al. 2001). The situation is no different when we are talking not about traditional logic, but rather about probabilistic formalisms, also a significant component of modern-day AI: These formalisms also grew out of philosophy, as nicely chronicled, in part, by Glymour (1992). For example, in the one mind of Pascal was born a method of rigorously calculating probabilities, the notion of conditional probability that plays a large role in AI to this day, and such fertile philosophico-probabilistic arguments as Pascal's wager, according to which it is irrational not to become a Christian.

That modern-day AI has its roots in philosophy, and in fact that these historical roots are temporally deeper than even Descartes' distant day, can be seen by looking to the clever, revealing cover of the comprehensive textbook Artificial Intelligence: A Modern Approach (known in the AI community simply as AIMA; Russell & Norvig 2002).

What you see there is an eclectic collection of memorabilia that might be on and around the desk of some imaginary AI researcher. For example, if you look carefully, you will specifically see: a picture of Turing, a view of Big Ben through a window (perhaps R&N are aware of the fact that Turing famously held at one point that a physical machine with the power of a universal Turing machine is physically impossible: he quipped that it would have to be the size of Big Ben), a planning algorithm described in Aristotle's De Motu Animalium, Frege's fascinating notation for first-order logic, a glimpse of Lewis Carroll's (1958) pictorial representation of syllogistic reasoning, Ramon Lull's concept-generating wheel from his 13th-century Ars Magna, and a number of other pregnant items (including, in a clever, recursive, and bordering-on-self-congratulatory touch, a copy of AIMA itself). Though there is insufficient space here to make all the historical connections, we can safely infer from the appearance of these items that AI is indeed very, very old. Even those who insist that AI is at least in part an artifact-building enterprise must concede that, in light of these objects, AI is ancient, for it isn't just theorizing from the perspective that intelligence is at bottom computational that runs back into the remote past of human history: Lull's wheel, for example, marks an attempt to capture intelligence not only in computation, but in a physical artifact that embodies that computation.

One final point about the history of AI seems worth making.

It is generally assumed that the birth of modern-day AI in the 1950s came in large part because of and through the advent of the modern high-speed digital computer. This assumption accords with common-sense. After all, AI (and, for that matter, to some degree its cousin, cognitive science, particularly computational cognitive modeling, the sub-field of cognitive science devoted to producing computational simulations of human cognition) is aimed at implementing intelligence in a computer, and it stands to reason that such a goal would be inseparably linked with the advent of such devices. However, this is only part of the story: the part that reaches back but to Turing and others (e.g., von Neumann) responsible for the first electronic computers. The other part is that, as already mentioned, AI has a particularly strong tie, historically speaking, to reasoning (logic-based and, in the need to deal with uncertainty, probabilistic reasoning). In this story, nicely told by Glymour (1992), a search for an answer to the question What is a proof? eventually led to an answer based on Frege's version of first-order logic (FOL): a mathematical proof consists in a series of step-by-step inferences from one formula of first-order logic to the next. The obvious extension of this answer (and it isn't a complete answer, given that lots of classical mathematics, despite conventional wisdom, clearly can't be expressed in FOL; even the Peano Axioms require SOL) is to say that not only mathematical thinking, but thinking, period, can be expressed in FOL. (This extension was entertained by many logicians long before the start of information-processing psychology and cognitive science -- a fact some cognitive psychologists and cognitive scientists often seem to forget.) Today, logic-based AI is only part of AI, but the point is that this part still lives (with help from logics much more powerful, but much more complicated, than FOL), and it can be traced all the way back to Aristotle's theory of the syllogism. In the case of uncertain reasoning, the question isn't What is a proof?, but rather questions such as What is it rational to believe, in light of certain observations and probabilities? This is a question posed and tackled before the arrival of digital computers.
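As a small illustration of the "proof as step-by-step inference" picture, and of its roots in the syllogism, here is the classic Barbara-style argument rendered as a short FOL derivation. The example and the LaTeX rendering are mine, not taken from Glymour or Frege:

    \begin{align*}
    1.\;& \forall x\,(\mathit{Man}(x) \rightarrow \mathit{Mortal}(x)) && \text{premise} \\
    2.\;& \mathit{Man}(\mathit{socrates}) && \text{premise} \\
    3.\;& \mathit{Man}(\mathit{socrates}) \rightarrow \mathit{Mortal}(\mathit{socrates}) && \text{universal instantiation, 1} \\
    4.\;& \mathit{Mortal}(\mathit{socrates}) && \text{modus ponens, 2, 3}
    \end{align*}

Each line follows from earlier lines by a single, mechanically checkable rule, which is exactly the property that made FOL so attractive as a substrate for automated reasoning.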

So far we have been proceeding as if we have a firm grasp of AI. But what exactly is AI? Philosophers arguably know better than anyone that defining disciplines can be well nigh impossible. What is physics? What is biology? What, for that matter, is philosophy? These are remarkably difficult, maybe even eternally unanswerable, questions. Perhaps the most we can manage here under obvious space constraints is to present in encapsulated form some proposed definitions of AI. We do include a glimpse of recent attempts to define AI in detailed, rigorous fashion.

Russell and Norvig (1995, 2002), in their aforementioned AIMA text, provide a set of possible answers to the What is AI? question that has considerable currency in the field itself. These answers all assume that AI should be defined in terms of its goals: a candidate definition thus has the form AI is the field that aims at building ... The answers all fall under a quartet of types placed along two dimensions. One dimension is whether the goal is to match human performance, or, instead, ideal rationality. The other dimension is whether the goal is to build systems that reason/think, or rather systems that act. The situation is summed up in this table:

                          Human-Based                        Ideal Rationality
    Reasoning-based       Systems that think like humans     Systems that think rationally
    Behavior-based        Systems that act like humans       Systems that act rationally

Please note that this quartet of possibilities does reflect (at least a significant portion of) the relevant literature. For example, philosopher John Haugeland (1985) falls into the Human/Reasoning quadrant when he says that AI is "the exciting new effort to make computers think ... machines with minds, in the full and literal sense." Luger and Stubblefield (1993) seem to fall into the Ideal/Act quadrant when they write: "The branch of computer science that is concerned with the automation of intelligent behavior." The Human/Act position is occupied most prominently by Turing, whose test is passed only by those systems able to act sufficiently like a human. The "thinking rationally" position is defended (e.g.) by Winston (1992).

It's important to know that the contrast between the focus on systems that think/reason versus systems that act, while found, as we have seen, at the heart of AIMA, and at the heart of AI itself, should not be interpreted as implying that AI researchers view their work as falling all and only within one of these two compartments. Researchers who focus more or less exclusively on knowledge representation and reasoning are also quite prepared to acknowledge that they are working on (what they take to be) a central component or capability within any one of a family of larger systems spanning the reason/act distinction. The clearest case may come from the work on planning -- an AI area traditionally making central use of representation and reasoning. For good or ill, much of this research is done in abstraction (in vitro, as opposed to in vivo), but the researchers involved certainly intend or at least hope that the results of their work can be embedded into systems that actually do things, such as, for example, execute the plans.

What about Russell and Norvig themselves? What is their answer to the What is AI? question? They are firmly in the "acting rationally" camp. In fact, it's safe to say both that they are the chief proponents of this answer, and that they have been remarkably successful evangelists. Their extremely influential AIMA can be viewed as a book-length defense and specification of the Ideal/Act category. We will look a bit later at how Russell and Norvig lay out all of AI in terms of intelligent agents, which are systems that act in accordance with various ideal standards for rationality. But first let's look a bit closer at the view of intelligence underlying the AIMA text. We can do so by turning to (Russell 1997). Here Russell recasts the What is AI? question as the question What is intelligence? (presumably under the assumption that we have a good grasp of what an artifact is), and then he identifies intelligence with rationality. More specifically, Russell sees AI as the field devoted to building intelligent agents, which are functions taking as input tuples of percepts from the external environment, and producing behavior (actions) on the basis of these percepts. Russell's overall picture is this one:

Let's unpack this diagram a bit, and take a look, first, at the account of perfect rationality that can be derived from it. The behavior of the agent in the environment E (from a class E of environments) produces a sequence of states or snapshots of that environment. A performance measure U evaluates this sequence; notice the utility box in the previous figure. We let V(f,E,U) denote the expected utility according to U of the agent function f operating on E. Now we identify a perfectly rational agent with the agent function

    f_opt = argmax_f V(f, E, U)

Of course, as Russell points out, it's usually not possible to actually build perfectly rational agents. For example, though it's easy enough to specify an algorithm for playing invincible chess, it's not feasible to implement this algorithm. What traditionally happens in AI is that programs that are -- to use Russell's apt terminology -- calculatively rational are constructed instead: these are programs that, if executed infinitely fast, would result in perfectly rational behavior. In the case of chess, this would mean that we strive to write a program that runs an algorithm capable, in principle, of finding a flawless move, but we add features that truncate the search for this move in order to play within intervals of digestible duration.
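The gap between perfect and calculative rationality is easy to see in a standard depth-limited game-tree search. The sketch below is a generic minimax routine in Python, not Russell's own formulation; the game interface (is_terminal, evaluate, legal_moves, result) is assumed purely for illustration. With unbounded depth it would find a flawless move; the depth cutoff and heuristic evaluation are exactly the features that truncate the search so moves arrive within intervals of digestible duration.

    def minimax(state, game, depth, maximizing):
        """Depth-limited minimax: perfect play if the depth were unbounded,
        merely 'calculatively rational' play once the search is truncated."""
        if game.is_terminal(state) or depth == 0:
            # At the cutoff we fall back on a heuristic estimate of the state.
            return game.evaluate(state), None
        best_move = None
        if maximizing:
            best_value = float("-inf")
            for move in game.legal_moves(state):
                value, _ = minimax(game.result(state, move), game, depth - 1, False)
                if value > best_value:
                    best_value, best_move = value, move
        else:
            best_value = float("inf")
            for move in game.legal_moves(state):
                value, _ = minimax(game.result(state, move), game, depth - 1, True)
                if value < best_value:
                    best_value, best_move = value, move
        return best_value, best_move

    # Usage, assuming a `game` object exposing the four methods above:
    # value, move = minimax(start_state, game, depth=4, maximizing=True)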

Russell himself champions a new brand of intelligence/rationality for AI; he calls this brand bounded optimality. To understand Russell's view, first we follow him in introducing a distinction: we say that agents have two components: a program, and a machine upon which the program runs. We write Agent(P,M) to denote the agent function implemented by program P running on machine M. Now, let 𝒫(M) denote the set of all programs P that can run on machine M. The bounded optimal program P_opt then is:

    P_opt = argmax_{P ∈ 𝒫(M)} V(Agent(P,M), E, U)

You can understand this equation in terms of any of the mathematical idealizations for standard computation. For example, machines can be identified with Turing machines minus instructions (i.e., TMs are here viewed architecturally only: as having tapes divided into squares upon which symbols can be written, read/write heads capable of moving up and down the tape to write and erase, and control units which are in one of a finite number of states at any time), and programs can be identified with instructions in the Turing machine model (telling the machine to write and erase symbols, depending upon what state the machine is in). So, if you are told that you must program within the constraints of a 22-state Turing machine, you could search for the best program given those constraints. In other words, you could strive to find the optimal program within the bounds of the 22-state architecture. Russell's (1997) view is thus that AI is the field devoted to creating optimal programs for intelligent agents, under time and space constraints on the machines implementing these programs.[5]
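A toy rendering of bounded optimality may help, with the caveat that every detail here is my own stand-in: the "machine" is modeled as nothing more than a step budget, the "environments" are hidden digits from 0 to 9, and a "program" is a fixed sequence of guesses. The brute-force search below simply computes the argmax that the P_opt equation calls for, over the space of programs the constrained machine can run.

    from itertools import product

    TARGETS = range(10)          # the class of environments: a hidden number 0..9

    def expected_utility(program, step_budget):
        """V(Agent(P, M), E, U): fraction of targets the program guesses
        within the machine's step budget (utility 1 if found, else 0)."""
        wins = sum(1 for target in TARGETS if target in program[:step_budget])
        return wins / len(TARGETS)

    def bounded_optimal_program(step_budget):
        """Brute-force argmax over the space of programs runnable on the machine."""
        best_program, best_value = None, -1.0
        for program in product(range(10), repeat=step_budget):
            value = expected_utility(program, step_budget)
            if value > best_value:
                best_program, best_value = program, value
        return best_program, best_value

    program, value = bounded_optimal_program(step_budget=3)
    print(program, value)   # e.g. (0, 1, 2) with expected utility 0.3

Any program that makes three distinct guesses is bounded-optimal here; a larger machine (bigger step budget) admits better programs, which is the sense in which optimality is relative to the machine.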

It should be mentioned that there is a different, much more straightforward answer to the What is AI? question. This answer, which goes back to the days of the original Dartmouth conference, was expressed by, among others, Newell (1973), one of the grandfathers of modern-day AI (recall that he attended the 1956 conference); it is:

Though few are aware of this now, this answer was taken quite seriously for a while, and in fact underlay one of the most famous programs in the history of AI: the ANALOGY program of Evans (1968), which solved geometric analogy problems of a type seen in many intelligence tests. An attempt to rigorously define this forgotten form of AI (as what they dub Psychometric AI), and to resurrect it from the days of Newell and Evans, is provided by Bringsjord and Schimanski (2003). Recently, a sizable private investment has been made in the ongoing attempt, known as Project Halo, to build a digital Aristotle, in the form of a machine able to excel on standardized tests such as the AP exams tackled by US high school students (Friedland et al. 2004). In addition, researchers at Northwestern have forged a connection between AI and tests of mechanical ability (Klenk et al. 2005).

In the end, as is the case with any discipline, to really know precisely what that discipline is requires you to, at least to some degree, dive in and do, or at least dive in and read. Two decades ago such a dive was quite manageable. Today, because the content that has come to constitute AI has mushroomed, the dive (or at least the swim after it) is a bit more demanding. Before looking in more detail at the content that composes AI, we take a quick look at the explosive growth of AI.

First, a point of clarification. The growth of which we speak is not a shallow sort correlated with amount of funding provided for a given sub-field of AI. That kind of thing happens all the time in all fields, and can be triggered by entirely political and financial changes designed to grow certain areas, and diminish others. Rather, we are speaking of an explosion of deep content: new material which someone intending to be conversant with the field needs to know. Relative to other fields, the size of the explosion may or may not be unprecedented. (Though it should perhaps be noted that an analogous increase in philosophy would be marked by the development of entirely new formalisms for reasoning, reflected in the fact that, say, longstanding philosophy textbooks like Copi's (2004) Introduction to Logic are dramatically rewritten and enlarged to include these formalisms, rather than remaining anchored to essentially immutable core formalisms, with incremental refinement around the edges through the years.) But it certainly appears to be quite remarkable, and is worth taking note of here, if for no other reason than that AI's near-future will revolve in significant part around whether or not the new content in question forms a foundation for new long-lived research and development that would not otherwise obtain.

Were you to have begun formal coursework in AI in 1985, your textbook would likely have been Eugene Charniak's comprehensive-at-the-time Introduction to Artificial Intelligence (Charniak & McDermott 1985). This book gives a strikingly unified presentation of AI -- as of the early 1980s. This unification is achieved via first-order logic (FOL), which runs throughout the book and binds things together. For example: In the chapter on computer vision (3), everyday objects like bowling balls are represented in FOL. In the chapter on parsing language (4), the meaning of words, phrases, and sentences are identified with corresponding formulae in FOL (e.g., they reduce "the red block" to FOL on page 229). In Chapter 6, Logic and Deduction, everything revolves around FOL and proofs therein (with an advanced section on nonmonotonic reasoning couched in FOL as well). And Chapter 8 is devoted to abduction and uncertainty, where once again FOL, not probability theory, is the foundation. It's clear that FOL renders (Charniak & McDermott 1985) esemplastic. Today, due to the explosion of content in AI, this kind of unification is no longer possible.

Though there is no need to get carried away in trying to quantify the explosion of AI content, it isn't hard to begin to do so for the inevitable skeptics. (Charniak & McDermott 1985) has 710 pages. The first edition of AIMA, published ten years later in 1995, has 932 pages, each with about 20% more words per page than C&M's book. The second edition of AIMA weighs in at a backpack-straining 1023 pages, with new chapters on probabilistic language processing, and uncertain temporal reasoning.

The explosion of AI content can also be seen topically. C&M cover nine highest-level topics, each in some way tied firmly to FOL implemented in (a dialect of) the programming language Lisp, and each (with the exception of Deduction, whose additional space testifies further to the centrality of FOL) covered in one chapter:

In AIMA the expansion is obvious. For example, Search is given three full chapters, and Learning is given four chapters. AIMA also includes coverage of topics not present in C&M's book; one example is robotics, which is given its own chapter in AIMA. In the second edition, as mentioned, there are two new chapters: one on constraint satisfaction that constitutes a lead-in to logic, and one on uncertain temporal reasoning that covers hidden Markov models, Kalman filters, and dynamic Bayesian networks. A lot of other additional material appears in new sections introduced into chapters seen in the first edition. For example, the second edition includes coverage of propositional logic as a bona fide framework for building significant intelligent agents. In the first edition, such logic is introduced mainly to facilitate the reader's understanding of full FOL.
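Since the second edition treats propositional logic as a genuine framework for building agents, a minimal truth-table entailment checker gives a feel for what such an agent's knowledge base can be asked. This is a generic sketch in Python, not code from AIMA; the tuple-based sentence encoding is my own convention.

    from itertools import product

    def evaluate(sentence, model):
        """Evaluate a propositional sentence, given as a nested tuple, in a model.
        Sentences: 'P' | ('not', s) | ('and', s1, s2) | ('or', s1, s2) | ('implies', s1, s2)"""
        if isinstance(sentence, str):
            return model[sentence]
        op = sentence[0]
        if op == "not":
            return not evaluate(sentence[1], model)
        if op == "and":
            return evaluate(sentence[1], model) and evaluate(sentence[2], model)
        if op == "or":
            return evaluate(sentence[1], model) or evaluate(sentence[2], model)
        if op == "implies":
            return (not evaluate(sentence[1], model)) or evaluate(sentence[2], model)
        raise ValueError(f"unknown operator: {op}")

    def symbols(sentence, acc=None):
        # Collect every proposition symbol appearing in the sentence.
        acc = set() if acc is None else acc
        if isinstance(sentence, str):
            acc.add(sentence)
        else:
            for part in sentence[1:]:
                symbols(part, acc)
        return acc

    def entails(kb, query):
        """KB |= query iff the query holds in every model in which the KB holds."""
        syms = sorted(symbols(kb) | symbols(query))
        for values in product([True, False], repeat=len(syms)):
            model = dict(zip(syms, values))
            if evaluate(kb, model) and not evaluate(query, model):
                return False
        return True

    # Toy knowledge base: (P -> Q) and P; query: Q.
    kb = ("and", ("implies", "P", "Q"), "P")
    print(entails(kb, "Q"))   # True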

One of the remarkable aspects of (Charniak & McDermott 1985) is this: The authors say the central dogma of AI is that What the brain does may be thought of at some level as a kind of computation (p. 6). And yet nowhere in the book is brain-like computation discussed. In fact, you will search the index in vain for the term neural and its variants. Please note that the authors are not to blame for this. A large part of AI's growth has come from formalisms, tools, and techniques that are, in some sense, brain-based, not logic-based. A recent paper that conveys the importance and maturity of neurocomputation is (Litt et al. 2006). (Growth has also come from a return of probabilistic techniques that had withered by the mid-70s and 80s. More about that momentarily, in the next resurgence section.)

One very prominent class of non-logicist formalism does make an explicit nod in the direction of the brain: viz., artificial neural networks (or as they are often simply called, neural networks, or even just neural nets). (The structure of neural networks is discussed below). Because Minsky and Papert's (1969) Perceptrons led many (including, specifically, many sponsors of AI research and development) to conclude that neural networks didn't have sufficient information-processing power to model human cognition, the formalism was pretty much universally dropped from AI. However, Minsky and Papert had only considered very limited neural networks. Connectionism, the view that intelligence consists not in symbolic processing, but rather non-symbolic processing at least somewhat like what we find in the brain (at least at the cellular level), approximated specifically by artificial neural networks, came roaring back in the early 1980s on the strength of more sophisticated forms of such networks, and soon the situation was (to use a metaphor introduced by John McCarthy) that of two horses in a race toward building truly intelligent agents.

If one had to pick a year at which connectionism was resurrected, it would certainly be 1986, the year Parallel Distributed Processing (Rumelhart & McClelland 1986) appeared in print. The rebirth of connectionism was specifically fueled by the back-propagation algorithm over neural networks, nicely covered in Chapter 20 of AIMA. The symbolicist/connectionist race led to a spate of lively debate in the literature (e.g., Smolensky 1988, Bringsjord 1991), and some AI engineers have explicitly championed a methodology marked by a rejection of knowledge representation and reasoning. For example, Rodney Brooks was such an engineer; he wrote the well-known Intelligence Without Representation (1991), and his Cog Project, to which we referred above, is arguably an incarnation of the premeditatedly non-logicist approach. Increasingly, however, those in the business of building sophisticated systems find that both logicist and more neurocomputational techniques are required (Wermter & Sun 2001).[6] In addition, the neurocomputational paradigm today includes connectionism only as a proper part, in light of the fact that some of those working on building intelligent systems strive to do so by engineering brain-based computation outside the neural network-based approach (e.g., Granger 2004a, 2004b).
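The back-propagation algorithm that fueled the connectionist revival is compact enough to sketch in full. Below is a small two-layer network trained on XOR (the function Minsky and Papert showed a single-layer perceptron cannot represent) using plain NumPy; the network size, learning rate, and iteration count are arbitrary choices of mine, not anything taken from the PDP volumes.

    import numpy as np

    rng = np.random.default_rng(0)

    # XOR: four input pairs and their targets.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # One hidden layer of 4 units, randomly initialized.
    W1 = rng.normal(scale=1.0, size=(2, 4))
    b1 = np.zeros(4)
    W2 = rng.normal(scale=1.0, size=(4, 1))
    b2 = np.zeros(1)

    lr = 1.0
    for step in range(10000):
        # Forward pass.
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)

        # Backward pass: gradients of the squared error, propagated layer by layer.
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)

        W2 -= lr * h.T @ d_out
        b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * X.T @ d_h
        b1 -= lr * d_h.sum(axis=0)

    print(np.round(out, 2))   # should approach [[0], [1], [1], [0]] for most seeds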

There is a second dimension to the explosive growth of AI: the explosion in popularity of probabilistic methods that aren't neurocomputational in nature, used to formalize and mechanize a form of non-logicist reasoning in the face of uncertainty. Interestingly enough, it is Eugene Charniak himself who can safely be considered one of the leading proponents of an explicit, premeditated turn away from logic to statistical techniques. His area of specialization is natural language processing, and whereas his introductory textbook of 1985 gave an accurate sense of his approach to parsing at the time (as we have seen: write computer programs that, given English text as input, ultimately infer meaning expressed in FOL), this approach was later abandoned in favor of purely statistical approaches (Charniak 1993). At the recent AI@50 conference, Charniak boldly proclaimed, in a talk tellingly entitled "Why Natural Language Processing is Now Statistical Natural Language Processing," that logicist AI is moribund, and that the statistical approach is the only promising game in town -- for the next 50 years.[7] The chief source of energy and debate at the conference flowed from the clash between Charniak's probabilistic orientation and the original logicist orientation, upheld at the conference in question by John McCarthy and others.

AI's use of probability theory grows out of the standard form of this theory, which grew directly out of technical philosophy and logic. This form will be familiar to many philosophers, but let's review it quickly now, in order to set a firm stage for making points about the new probabilistic techniques that have energized AI.

Just as in the case of FOL, in probability theory we are concerned with declarative statements, or propositions, to which degrees of belief are applied; we can thus say that both logicist and probabilistic approaches are symbolic in nature. More specifically, the fundamental proposition in probability theory is a random variable, which can be conceived of as an aspect of the world whose status is initially unknown. We usually capitalize the names of random variables, though we reserve p, q, r, ... as such names as well. In a particular murder investigation centered on whether or not Mr. Black committed the crime, the random variable Guilty might be of concern. The detective may be interested as well in whether or not the murder weapon -- a particular knife, let us assume -- belongs to Black. In light of this, we might say that Weapon = true if it does, and Weapon = false if it doesn't. As a notational convenience, we can write weapon and ¬weapon for these two cases, respectively; and we can use this convention for other variables of this type.

The kind of variables we have described so far are Boolean, because their domain is simply {true, false}. But we can generalize and allow discrete random variables, whose values are from any countable domain. For example, PriceTChina might be a variable for the price of (a particular, presumably) tea in China, and its domain might be {1, 2, 3, 4, 5}, where each number here is in US dollars. A third type of variable is continuous; its domain is either the reals, or some subset thereof.

We say that an atomic event is an assignment of particular values from the appropriate domains to all the variables composing the (idealized) world. For example, in the simple murder investigation world introduced just above, we have two Boolean variables, Guilty and Weapon, and there are just four atomic events. Note that atomic events have some obvious properties. For example, they are mutually exclusive, exhaustive, and logically entail the truth or falsity of every proposition. Usually not obvious to beginning students is a fourth property, namely, any proposition is logically equivalent to the disjunction of all atomic events that entail that proposition.

Prior probabilities correspond to a degree of belief accorded a proposition in the complete absence of any other information. For example, if the prior probability of Black's guilt is .2, we write

P(Guilty = true) = .2

or simply P(guilty) = .2. It is often convenient to have a notation allowing one to refer economically to the probabilities of all the possible values for a random variable. For example, we can write

P(PriceTChina)

as an abbreviation for the five equations listing the probabilities of all the possible prices for tea in China. We can likewise write P(Guilty) for the pair of probabilities P(guilty) and P(¬guilty).

In addition, as further convenient notation, we can write P(Guilty, Weapon) to denote the probabilities of all combinations of values of the relevant set of random variables. This is referred to as the joint probability distribution of Guilty and Weapon. The full joint probability distribution covers the distribution for all the random variables used to describe a world. Given our simple murder world (the two Boolean variables together with the five-valued PriceTChina), we have 2 × 2 × 5 = 20 atomic events, summed up in the full joint distribution

P(Guilty, Weapon, PriceTChina)

The final piece of the basic language of probability theory corresponds to conditional probabilities. Where p and q are any propositions, the relevant expression is P(p|q), which can be interpreted as the probability of p, given that all we know is q. For example,

P(guilty | weapon) = .7
says that if the murder weapon belongs to Black, and no other information is available, the probability that Black is guilty is .7.

Andrei Kolmogorov showed how to construct probability theory from three axioms that make use of the machinery now introduced, viz.,

1. Every probability falls between 0 and 1: 0 ≤ P(p) ≤ 1.
2. Necessarily true (valid) propositions have probability 1, and necessarily false (unsatisfiable) propositions have probability 0.
3. P(p ∨ q) = P(p) + P(q) - P(p ∧ q).

Probabilistic inference consists in computing, from observed evidence expressed in terms of probability theory, posterior probabilities of propositions of interest. For a good long while, there have been algorithms for carrying out such computation. These algorithms precede the resurgence of probabilistic techniques in the 1990s. (Chapter 13 of AIMA presents a number of them.) For example, given the Kolmogorov axioms, here is a straightforward way of computing the probability of any proposition, using the full joint distribution giving the probabilities of all atomic events: Where p is some proposition, let α(p) be the set of atomic events in which p holds. Since the probability of a proposition (i.e., P(p)) is equal to the sum of the probabilities of the atomic events in which it holds, we have an equation that provides a method for computing the probability of any proposition p, viz.,

P(p) = Σ_{e ∈ α(p)} P(e)
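
To make this method concrete, here is a minimal Python sketch of the full-joint approach; it is not from the entry, the joint probabilities are invented for illustration, and the helper names (prob, conditional) are merely hypothetical.

```python
# A minimal sketch (not from the entry) of the naive full-joint method just
# described: P(p) is the sum of the probabilities of the atomic events in
# which p holds, and P(p|q) = P(p and q) / P(q). The joint probabilities
# below are made-up illustrative numbers.

variables = ["Guilty", "Weapon"]

# Full joint distribution: one probability per atomic event (they sum to 1).
joint = {
    (True,  True):  0.10,
    (True,  False): 0.10,
    (False, True):  0.05,
    (False, False): 0.75,
}

def prob(proposition):
    """P(p): sum the probabilities of the atomic events in which p holds."""
    return sum(pr for event, pr in joint.items()
               if proposition(dict(zip(variables, event))))

def conditional(p, q):
    """P(p|q) = P(p and q) / P(q)."""
    return prob(lambda e: p(e) and q(e)) / prob(q)

guilty = lambda e: e["Guilty"]
weapon = lambda e: e["Weapon"]

print(prob(guilty))                 # 0.20 under these illustrative numbers
print(conditional(guilty, weapon))  # roughly 0.67 here, not the .7 of the text
```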

Unfortunately, there were two serious problems infecting this original probabilistic approach: One, the processing in question needed to take place over paralyzingly large amounts of information (enumeration over the entire distribution is required). And two, the expressivity of the approach was merely propositional. (It was by the way the philosopher Hilary Putnam (1963) who pointed out that there was a price to pay in moving to the first-order level. The issue is not discussed herein.) Everything changed with the advent of a new formalism that marks the marriage of probabilism and graph theory: Bayesian networks (also called belief nets). The pivotal text was (Pearl 1988).

To explain Bayesian networks, and to provide a contrast between Bayesian probabilistic inference and argument-based approaches that are likely to be attractive to classically trained philosophers, let us build upon the example of Black introduced above. Suppose that we want to compute the posterior probability of the guilt of our murder suspect, Mr. Black, from observed evidence. We have three Boolean variables in play: Guilty, Weapon, and Intuition. Weapon is true or false based on whether or not a murder weapon (the knife, recall) belonging to Black is found at the scene of the bloody crime. The variable Intuition is true provided that the very experienced detective in charge of the case, Watson, has an intuition, without examining any physical evidence in the case, that Black is guilty; ¬intuition holds just in case Watson has no intuition either way. Here is a table that holds all the (eight) atomic events in the scenario so far:

Guilty   Weapon   Intuition
true     true     true
true     true     false
true     false    true
true     false    false
false    true     true
false    true     false
false    false    true
false    false    false

Were we to add the previously introduced discrete random variable PriceTChina, we would of course have 40 atomic events, corresponding in tabular form to the preceding table repeated for each of the five possible values of PriceTChina. That is, there are 40 events in the full joint distribution

P(Guilty, Weapon, Intuition, PriceTChina)

Bayesian networks provide an economical way to represent the situation. Such networks are directed, acyclic graphs in which nodes correspond to random variables. When there is a directed link from node Ni to node Nj, we say that Ni is a parent of Nj. With each node Ni there is an associated conditional probability distribution

P(Ni | Parents(Ni))
where, of course, Parents(Ni) denotes the parents of Ni. The following figure shows such a network for the case we have been considering. The specific probability information is omitted; readers should at this point be able to readily calculate it using the machinery provided above.

[Figure omitted: a Bayesian network over the nodes Guilty, Weapon, Intuition, and an isolated PriceTChina node, each with an attached conditional probability table.]

Notice the economy of the network, in striking contrast to the prospect, visited above, of listing all 40 possibilities. The price of tea in China is presumed to have no connection to the murder, and hence the relevant node is isolated. In addition, only local probability information is included, corresponding to the relevant tables shown in the figure (each typically termed a conditional probability table). And yet from a Bayesian network, every entry in the full joint distribution can be easily calculated, as follows. First, for each node/variable Ni we write Ni = ni to indicate an assignment to that node/variable. The probability of a conjunction of specific assignments to every variable, i.e., any entry in the full joint probability distribution, can then be written as

P(n1, ..., nk) = ∏_i P(ni | parents(Ni))

Earlier, we observed that the full joint distribution can be used to infer an answer to queries about the domain. Given this, it follows immediately that Bayesian networks have the same power. But in addition, there are much more efficient methods over such networks for answering queries. These methods, and techniques for increasing the expressivity of such networks toward the first-order case, are outside the scope of the present entry. Readers are directed to AIMA, or any of the other textbooks mentioned in this entry (see note 8).
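
A small, hedged sketch of the factorization just stated may help. The network structure assumed below (Guilty as sole parent of Weapon and of Intuition, with PriceTChina isolated) and all of the numbers are illustrative assumptions, since the entry's figure and tables are not reproduced here.

```python
# Sketch: one entry of the full joint distribution is the product, over all
# nodes, of P(node | its parents). Structure and numbers are assumptions.

parents = {"Guilty": (), "Weapon": ("Guilty",),
           "Intuition": ("Guilty",), "PriceTChina": ()}

# Conditional probability tables, keyed by the tuple of parent values.
cpts = {
    "Guilty":      {(): {True: 0.2, False: 0.8}},
    "Weapon":      {(True,):  {True: 0.7, False: 0.3},
                    (False,): {True: 0.1, False: 0.9}},
    "Intuition":   {(True,):  {True: 0.6, False: 0.4},
                    (False,): {True: 0.3, False: 0.7}},
    "PriceTChina": {(): {1: 0.2, 2: 0.2, 3: 0.2, 4: 0.2, 5: 0.2}},
}

def joint_entry(assignment):
    """P(n1, ..., nk) computed node by node from the local tables."""
    p = 1.0
    for node, value in assignment.items():
        parent_vals = tuple(assignment[q] for q in parents[node])
        p *= cpts[node][parent_vals][value]
    return p

print(joint_entry({"Guilty": True, "Weapon": True,
                   "Intuition": False, "PriceTChina": 3}))
# 0.2 * 0.7 * 0.4 * 0.2 = 0.0112
```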

Before concluding this section, it is probably worth noting that, from the standpoint of philosophy, a situation such as the murder investigation we have exploited above would often be analyzed into arguments and strength factors, not into numbers to be crunched by purely arithmetical procedures. For example, in the epistemology of Roderick Chisholm, as presented in his Theory of Knowledge (Chisholm 1966, 1977), Detective Watson might classify a proposition like "Black committed the murder" as counterbalanced if he was unable to find a compelling argument either way, or perhaps as probable if the murder weapon turned out to belong to Black. Such categories cannot be placed on a continuum from 0 to 1, and they are used in articulating arguments for or against Black's guilt. Argument-based approaches to uncertain and defeasible reasoning are virtually non-existent in AI. One exception is Pollock's approach, covered below; this approach is Chisholmian in nature.

There are a number of ways of carving up AI. By far the most prudent and productive way to summarize the field is to turn yet again to the AIMA text, by any metric a masterful, comprehensive overview of the field.[8]

As Russell and Norvig (2002) tell us in the Preface of AIMA:

The content of AIMA derives, essentially, from fleshing out this picture; that is, it corresponds to the different ways of representing the overall function that intelligent agents implement. And there is a progression from the least powerful agents up to the more powerful ones. The following figure gives a high-level view of a simple kind of agent discussed early in the book. (Though simple, this sort of agent corresponds to the architecture of the representation-free agents designed and implemented by Rodney Brooks 1991.)

As the book progresses, agents get increasingly sophisticated, and the implementation of the function they represent thus draws from more and more of what AI can currently muster. The following figure gives an overview of an agent that is a bit smarter than the simple reflex agent. This smarter agent has the ability to internally model the outside world, and is therefore not simply at the mercy of what can at the moment be directly sensed.

There are eight parts to AIMA. As the reader passes through these parts, she is introduced to agents that take on the powers discussed in each part. Part I is an introduction to the agent-based view. Part II is concerned with giving an intelligent agent the capacity to think ahead a few steps in clearly defined environments. Examples here include agents able to successfully play games of perfect information, such as chess. Part III deals with agents that have declarative knowledge and can reason in ways that will be quite familiar to most philosophers and logicians (e.g., knowledge-based agents deduce what actions should be taken to secure their goals). Part V of the book outfits agents with the power to handle uncertainty by reasoning in probabilistic fashion. In Part VI, agents are given a capacity to learn. The following figure shows the overall structure of a learning agent.

The final set of powers agents are given allow them to communicate. These powers are covered in Part VII.

Philosophers who patiently travel the entire progression of increasingly smart agents will no doubt ask, when reaching the end of Part VII, if anything is missing. Are we given enough, in general, to build an artificial person, or is there enough only to build a mere animal? This question is implicit in the following from Charniak and McDermott (1985):

To their credit, Russell & Norvig, in AIMA's Chapter 27, AI: Present and Future, consider this question, at least to some degree. They do so by considering some challenges to AI that have hitherto not been met. One of these challenges is described by R&N as follows:

This specific challenge is actually merely the foothill before a dizzyingly high mountain that AI must eventually somehow manage to climb. That mountain, put simply, is reading. Despite the fact that, as noted, Part VI of AIMA is devoted to machine learning, AI, as it stands, offers next to nothing in the way of a mechanization of learning by reading. Yet when you think about it, reading is probably the dominant way you learn at this stage in your life. Consider what you're doing at this very moment. It's a good bet that you are reading this sentence because, earlier, you set yourself the goal of learning about the field of AI. Yet the formal models of learning provided in AIMA's Part VI (which are all and only the models at play in AI) cannot be applied to learning by reading.[9] These models all start with a function-based view of learning. According to this view, to learn is almost invariably to produce an underlying function f on the basis of a restricted set of pairs (a1, f(a1)), (a2, f(a2)), ..., (an, f(an)). For example, consider receiving inputs consisting of 1, 2, 3, 4, and 5, and corresponding range values of 1, 4, 9, 16, and 25; the goal is to learn the underlying mapping from natural numbers to natural numbers. In this case, assume that the underlying function is n², and that you do learn it. While this narrow model of learning can be productively applied to a number of processes, the process of reading isn't one of them. Learning by reading cannot (at least for the foreseeable future) be modeled as divining a function that produces argument-value pairs. Instead, your reading about AI can pay dividends only if your knowledge has increased in the right way, and if that knowledge leaves you poised to be able to produce behavior taken to confirm sufficient mastery of the subject area in question. This behavior can range from correctly answering and justifying test questions regarding AI, to producing a robust, compelling presentation or paper that signals your achievement.
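
A toy sketch of this function-based picture follows, under the assumption of a small, invented hypothesis space; none of it is taken from AIMA itself.

```python
# Illustrative sketch of the function-based view of learning described above:
# from a handful of (argument, value) pairs, select a hypothesis that
# reproduces them. The hypothesis space and names are assumptions.

examples = [(1, 1), (2, 4), (3, 9), (4, 16), (5, 25)]

hypotheses = {
    "identity": lambda n: n,
    "double":   lambda n: 2 * n,
    "square":   lambda n: n ** 2,
    "cube":     lambda n: n ** 3,
}

def consistent(h):
    """A hypothesis is consistent if it reproduces every observed pair."""
    return all(h(x) == y for x, y in examples)

learned = [name for name, h in hypotheses.items() if consistent(h)]
print(learned)  # ['square'] -- the underlying function n^2 has been "learned"
```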

Two points deserve to be made about machine reading. First, it may not be clear to all readers that reading is an ability that is central to intelligence. Its centrality derives from the fact that intelligence requires vast knowledge, and we have no means of getting systematic knowledge into a system other than getting it in from text, whether text on the web, text in libraries, newspapers, and so on. You might even say that the big problem with AI has been that machines really don't know much compared to humans. That can only be because humans read (or hear: illiterate people can listen to text being uttered and learn that way). Machines gain knowledge either by humans manually encoding and inserting it, or by reading and listening. These are brute facts. (We leave aside supernatural techniques, of course. Oddly enough, Turing didn't: he seemed to think ESP should be discussed in connection with the powers of minds and machines. See Turing 1950.)

Now for the second point. Humans able to read have invariably also learned a language, and learning languages has been modeled in conformity to the function-based approach adumbrated just above (Osherson et al. 1986). However, this doesn't entail that an artificial agent able to read, at least to a significant degree, must have really and truly learned a natural language. AI is first and foremost concerned with engineering computational artifacts that measure up to some test (where, yes, sometimes that test is from the human sphere), not with whether these artifacts process information in ways that match those present in the human case. It may or may not be necessary, when engineering a machine that can read, to imbue that machine with human-level linguistic competence. The issue is empirical, and as time unfolds, and the engineering is pursued, we shall no doubt see the issue settled.

It would seem that the greatest challenges facing AI are ones the field apparently hasn't even come to grips with yet. Some mental phenomena of paramount importance to many philosophers of mind and neuroscience are simply missing from AIMA. Two examples are subjective consciousness and creativity. The former is only mentioned in passing in AIMA, yet subjective consciousness is the most important thing in our lives -- indeed, we only desire to go on living because we wish to go on enjoying subjective states of certain types. Moreover, if human minds are the product of evolution, then presumably phenomenal consciousness has great survival value, and would be of tremendous help to a robot intended to have at least the behavioral repertoire of the first creatures with brains that match our own (hunter-gatherers; see Pinker 1997). Of course, subjective consciousness is largely missing from the sister fields of cognitive psychology and computational cognitive modeling as well.[10]

To some readers, it might seem in the very least tendentious to point to subjective consciousness as a major challenge to AI that it has yet to address. These readers might be of the view that pointing to this problem is to look at AI through a distinctively philosophical prism, and indeed a controversial philosophical standpoint.

But as its literature makes clear, AI measures itself by looking to animals and humans and picking out in them remarkable mental powers, and by then seeing if these powers can be mechanized. Arguably the power most important to humans (the capacity to experience) is nowhere to be found on the target list of most AI researchers. There may be a good reason for this (no formalism is at hand, perhaps), but there is no denying that this state of affairs obtains, and that, in light of how AI measures itself, it's worrisome.

As to creativity, it's quite remarkable that the power we most praise in human minds is nowhere to be found in AIMA. Just as in (Charniak & McDermott 1985) one cannot find neural in the index, creativity can't be found in the index of AIMA. This is particularly odd because many AI researchers have in fact worked on creativity (especially those coming out of philosophy; e.g., Boden 1994, Bringsjord & Ferrucci 2000).

Although the focus has been on AIMA, any of its counterparts could have been used. As an example, consider Artificial Intelligence: A New Synthesis, by Nils Nilsson. (A synopsis and TOC are available at http://print.google.com/print?id=LIXBRwkibdEC&lpg=1&prev=.) As in the case of AIMA, everything here revolves around a gradual progression from the simplest of agents (in Nilsson's case, reactive agents), to ones having more and more of those powers that distinguish persons. Energetic readers can verify that there is a striking parallel between the main sections of Nilsson's book and AIMA. In addition, Nilsson, like Russell and Norvig, ignores phenomenal consciousness, reading, and creativity. None of the three are even mentioned.

A final point to wrap up this section. It seems quite plausible to hold that there is a certain inevitability to the structure of an AI textbook, and the apparent reason is perhaps rather interesting. In personal conversation, Jim Hendler, a well-known AI researcher who is one of the main innovators behind the Semantic Web (Berners-Lee, Hendler, Lassila 2001), an under-development AI-ready version of the World Wide Web, has said that this inevitability can be rather easily displayed when teaching Introduction to AI; here's how. Begin by asking students what they think AI is. Invariably, many students will volunteer that AI is the field devoted to building artificial creatures that are intelligent. Next, ask for examples of intelligent creatures. Students always respond by giving examples across a continuum: simple multi-cellular organisms, insects, rodents, lower mammals, higher mammals (culminating in the great apes), and finally human persons. When students are asked to describe the differences between the creatures they have cited, they end up essentially describing the progression from simple agents to ones having our (e.g.) communicative powers. This progression gives the skeleton of every comprehensive AI textbook. Why does this happen? The answer seems clear: it happens because we can't resist conceiving of AI in terms of the powers of extant creatures with which we are familiar. At least at present, persons, and the creatures who enjoy only bits and pieces of personhood, are -- to repeat -- the measure of AI.

SEP already contains a separate entry entitled Logic and Artificial Intelligence, written by Thomason. This entry is focused on non-monotonic reasoning and reasoning about time and change; the entry also provides a history of the early days of logic-based AI, making clear the contributions of those who founded the tradition (e.g., John McCarthy and Pat Hayes; see their seminal 1969 paper). Reasoning based on classical deductive logic is monotonic; that is, if Φ ⊢ φ, then for all ψ, Φ ∪ {ψ} ⊢ φ. Commonsense reasoning is not monotonic. While you may currently believe on the basis of reasoning that your house is still standing, if while at work you see on your computer screen that a vast tornado is moving through the location of your house, you will drop this belief. The addition of new information causes previous inferences to fail. In the simpler example that has become an AI staple, if I tell you that Tweety is a bird, you will infer that Tweety can fly, but if I then inform you that Tweety is a penguin, the inference evaporates, as well it should. Non-monotonic (or defeasible) logic includes formalisms designed to capture the mechanisms underlying these kinds of examples.
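
The Tweety pattern can be sketched in a few lines of Python; the sketch below is a hand-rolled illustration of defeasibility, not an implementation of any particular non-monotonic formalism (circumscription, default logic, etc.).

```python
# Toy sketch of the non-monotonic pattern in the Tweety example: an added
# fact retracts an earlier conclusion.

def flies(facts):
    """Default rule: birds fly, unless the exception 'penguin' is known."""
    return "bird" in facts and "penguin" not in facts

facts = {"bird"}
print(flies(facts))   # True: Tweety is a bird, so we conclude Tweety can fly

facts.add("penguin")  # new information arrives...
print(flies(facts))   # False: the earlier inference evaporates
```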

The formalisms and techniques discussed in Logic and Artificial Intelligence have now reached, as of 2006, a level of impressive maturity -- so much so that in various academic and corporate laboratories, implementations of these formalisms and techniques can be used to engineer robust, real-world software. It is strongly recommended that readers who have assimilated Thomason's entry and have an interest in learning where AI stands in these areas consult (Mueller 2006), which provides, in one volume, integrated coverage of non-monotonic reasoning (in the form, specifically, of circumscription, introduced in Thomason's entry) and reasoning about time and change in the situation and event calculi. (The former calculus is also introduced in Thomason's entry; in the latter, timepoints are included, among other things.) The other nice thing about (Mueller 2006) is that the logic used is multi-sorted first-order logic (MSL), which has unificatory power that will be known to and appreciated by many technical philosophers and logicians (Manzano 1996).

In the present entry, three topics of importance in AI that are not covered in Logic and Artificial Intelligence are mentioned. They are:

1. the agent-based view of logicist AI, framed in terms of arbitrary logical systems;
2. common logic and the quest for interoperability between logic-based systems using different logics; and
3. what can be called "encoding down."

Detailed accounts of logicist AI that fall under the agent-based scheme can be found in (Nilsson 1991, Bringsjord & Ferrucci 1998).[11] The core idea is that an intelligent agent receives percepts from the external world in the form of formulae in some logical system (e.g., first-order logic), and infers, on the basis of these percepts and its knowledge base, what actions should be performed to secure the agent's goals. (This is of course a barbaric simplification. Information from the external world is encoded in formulae, and transducers to accomplish this feat may be components of the agent.)

To clarify things a bit, we consider, briefly, the logicist view in connection with arbitrary logical systems ℒ_X.[12] We obtain a particular logical system by setting X in the appropriate way. Some examples: if X = I, then we have ℒ_I, a system at the level of FOL [following the standard notation from model theory; see e.g. (Ebbinghaus et al. 1984)]; ℒ_II is second-order logic; and ℒ_ω1ω is a small system of infinitary logic (countably infinite conjunctions and disjunctions are permitted). These logical systems are all extensional, but there are intensional ones as well. For example, we can have logical systems corresponding to those seen in standard propositional modal logic (Chellas 1980). One possibility, familiar to many philosophers, would be propositional KT45, that is, ℒ_KT45.[13] In each case, the system in question includes a relevant alphabet from which well-formed formulae are constructed by way of a formal grammar, a reasoning (or proof) theory, a formal semantics, and at least some meta-theoretical results (soundness, completeness, etc.). Adapting standard notation, we can thus say that a set of formulas in some particular logical system ℒ_X, call it Φ_X, can be used, in conjunction with some reasoning theory, to infer some particular formula φ_X. (The reasoning may be deductive, inductive, abductive, and so on. Logicist AI isn't in the least restricted to any particular mode of reasoning.) To say that such a situation holds, we write

Φ_X ⊢_X φ_X

When the logical system referred to is clear from context, or when we don't care about which logical system is involved, we can simply write

Φ ⊢ φ

Each logical system, in its formal semantics, will include objects designed to represent ways the world pointed to by formulae in this system can be. Let these ways be denoted by W_i (with the logical system indicated by a superscript when it matters). When we aren't concerned with which logical system is involved, we can simply write W_i. To say that such a way models a formula φ, we write

W_i ⊨ φ

We extend this to a set of formulas in the natural way: W_i ⊨ Φ means that all the elements of Φ are true on W_i. Now, using the simple machinery we've established, we can describe, in broad strokes, the life of an intelligent agent that conforms to the logicist point of view. This life conforms to the basic cycle that undergirds intelligent agents in the AIMA2e sense.

To begin, we assume that the human designer, after studying the world, uses the language of a particular logical system to give to our agent an initial set of beliefs Δ_0 about what this world is like. In doing so, the designer works with a formal model of this world, W, and ensures that W ⊨ Δ_0. Following tradition, we refer to Δ_0 as the agent's (starting) knowledge base. (This terminology, given that we are talking about the agent's beliefs, is known to be peculiar, but it persists.) Next, the agent ADJUSTS its knowledge base to produce a new one, Δ_1. We say that adjustment is carried out by way of an operation A; so A[Δ_0] = Δ_1. How does the adjustment process, A, work? There are many possibilities. Unfortunately, many believe that the simplest possibility (viz., A[Δ_i] equals the set of all formulas that can be deduced in some elementary manner from Δ_i) exhausts all the possibilities. The reality is that adjustment, as indicated above, can come by way of any mode of reasoning -- induction, abduction, and yes, various forms of deduction corresponding to the logical system in play. For present purposes, it's not important that we carefully enumerate all the options.

The cycle continues when the agent ACTS on the environment, in an attempt to secure its goals. Acting, of course, can cause changes to the environment. At this point, the agent SENSES the environment, and this new information Γ_1 factors into the process of adjustment, so that A[Δ_1 ∪ Γ_1] = Δ_2. The cycle of SENSES → ADJUSTS → ACTS continues to produce the life Δ_0, Δ_1, Δ_2, Δ_3, ... of our agent.
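
The cycle just described can be sketched schematically as follows; everything in the sketch (the toy adjustment rule, the stub action and percept functions) is an assumption made purely for illustration.

```python
# Schematic sketch of the SENSES -> ADJUSTS -> ACTS cycle: the knowledge base
# is a set of formula-like strings, the adjustment operator applies one toy
# rule, and acting/sensing are stubs standing in for a real environment.

def adjust(kb):
    """Toy adjustment A: add 'safe' whenever both preconditions are believed."""
    new = set(kb)
    if {"door_locked", "alarm_on"} <= new:
        new.add("safe")
    return new

def act(kb):
    """Pick an action on the basis of the current beliefs."""
    return "patrol" if "safe" in kb else "lock_door"

def sense(action):
    """Stub percepts: pretend that locking the door yields a new formula."""
    return {"door_locked"} if action == "lock_door" else set()

kb = {"alarm_on"}          # Delta_0, supplied by the designer
for step in range(3):
    kb = adjust(kb)        # ADJUSTS: the next belief set from the current one
    action = act(kb)       # ACTS on the environment
    kb |= sense(action)    # SENSES: new information joins the knowledge base
    print(step, sorted(kb), action)
```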

It may strike you as preposterous that logicist AI be touted as an approach taken to replicate all of cognition. Reasoning over formulae in some logical system might be appropriate for computationally capturing high-level tasks like trying to solve a math problem (or devising an outline for an entry in the Stanford Encyclopedia of Philosophy), but how could such reasoning apply to tasks like those a hawk tackles when swooping down to capture scurrying prey? In the human sphere, the task successfully negotiated by athletes would seem to be in the same category. Surely, some will declare, an outfielder chasing down a fly ball doesn't prove theorems to figure out how to pull off a diving catch to save the game!

Needless to say, such a declaration has been carefully considered by logicists. For example, Rosenschein and Kaelbling (1986) describe a method in which logic is used to specify finite state machines. These machines are used at run time for rapid, reactive processing. In this approach, though the finite state machines contain no logic in the traditional sense, they are produced by logic and inference. More recently, real robot control via first-order theorem proving has been demonstrated by Amir and Maynard-Reid (1999, 2000, 2001). In fact, you can download version 2.0 of the software that makes this approach real for a Nomad 200 mobile robot in an office environment. Of course, negotiating an office environment is a far cry from the rapid adjustments an outfielder for the Yankees routinely puts on display, but certainly it's an open question as to whether future machines will be able to mimic such feats through rapid reasoning. The question is open if for no other reason than that all must concede that the constant increase in reasoning speed of first-order theorem provers is breathtaking. (For up-to-date news on this increase, visit and monitor the TPTP site.) There is no known reason why the software engineering in question cannot continue to produce speed gains that would eventually allow an artificial creature to catch a fly ball by processing information in purely logicist fashion.

Now we come to the second topic related to logicist AI that warrants mention herein: common logic and the intensifying quest for interoperability between logic-based systems using different logics. Only a few brief comments are offered. Readers wanting more can explore the links provided in the course of the summary.

To begin, please understand that AI has always been very much at the mercy of the vicissitudes of funding provided to researchers in the field by the United States Department of Defense (DoD). (The inaugural 1956 workshop was funded by DARPA, and many representatives from this organization attended AI@50.) It's this fundamental fact that causally contributed to the temporary hibernation of AI carried out on the basis of artificial neural networks: when Minsky and Papert (1969) bemoaned the limitations of neural networks, it was the funding agencies that held back money for research based upon them. Since the late 1950s it's safe to say that the DoD has sponsored the development of many logics intended to advance AI and lead to helpful applications. Recently, it has occurred to many in the DoD that this sponsorship has led to a plethora of logics between which no translation can occur. In short, the situation is a mess, and now real money is being spent to try to fix it, through standardization and machine translation (between logical, not natural, languages).

The standardization is coming chiefly through what is known as Common Logic (CL), and variants thereof. (CL is soon to be an ISO standard. ISO is the International Standards Organization.) Philosophers interested in logic, and of course logicians, will find CL to be quite fascinating. (From an historical perspective, the advent of CL is interesting in no small part because the person spearheading it is none other than Pat Hayes, the same Hayes who, as we have seen, worked with McCarthy to establish logicist AI in the 1960s. Though Hayes was not at the original 1956 Dartmouth conference, he certainly must be regarded as one of the founders of contemporary AI.) One of the interesting things about CL, at least as I see it, is that it signifies a trend toward the marriage of logics, and programming languages and environments. Another system that is a logic/programming hybrid is Athena, which can be used as a programming language, and is at the same time a form of MSL. Athena is known as a denotational proof language (Arkoudas 2000).

How is interoperability between two systems to be enabled by CL? Suppose one of these systems is based on logic L, and the other on L'. (To ease exposition, assume that both logics are first-order.) The idea is that a theory Φ_L, that is, a set of formulae in L, can be translated into CL, producing Φ_CL, and that this theory can then be translated into L', producing Φ_L'. CL thus becomes an interlingua. Note that what counts as a well-formed formula in L can be different from what counts as one in L'. The two logics might also have different proof theories. For example, inference in L might be based on resolution, while inference in L' is of the natural deduction variety. Finally, the symbol sets will be different. Despite these differences, courtesy of the translations, desired behavior can be produced across the systems. That, at any rate, is the hope. The technical challenges here are immense, but federal monies are increasingly available for attacks on the problem of interoperability.
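
The interlingua idea can be illustrated with a toy translation through a shared intermediate form; the two surface syntaxes and all function names below are invented, and none of this is Common Logic itself.

```python
# Toy illustration of translation via an interlingua: a formula of "logic L"
# is parsed into a shared intermediate form, which is then printed back out
# in the syntax of "logic L-prime". Everything here is illustrative.

def parse_L(s):
    """Logic L writes conjunction infix, e.g. 'raining & cold'."""
    left, right = s.split("&")
    return ("and", left.strip(), right.strip())

def emit_L_prime(tree):
    """Logic L' writes conjunction prefix, e.g. '(and raining cold)'."""
    op, left, right = tree
    return f"({op} {left} {right})"

interlingua = parse_L("raining & cold")   # shared intermediate representation
print(emit_L_prime(interlingua))          # (and raining cold)
```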

Now for the third topic in this section: what can be called encoding down. The technique is easy to understand. Suppose that we have on hand a set Φ of first-order axioms. As is well-known, the problem of deciding, for an arbitrary formula φ, whether or not φ is deducible from Φ is Turing-undecidable: there is no Turing machine or equivalent that can correctly return Yes or No in the general case. However, if the domain in question is finite, we can encode this problem down to the propositional calculus. An assertion that all things have F is of course equivalent to the conjunction Fa ∧ Fb ∧ Fc, as long as the domain contains only the three objects a, b, and c. So here a first-order quantified formula becomes a conjunction in the propositional calculus. Determining whether such conjunctions are provable from axioms themselves expressed in the propositional calculus is Turing-decidable, and, in addition, in certain clusters of cases the check can be done very quickly in the propositional case. Readers interested in encoding down to the propositional calculus should consult recent DARPA-sponsored work by Bart Selman. Please note that the target of encoding down doesn't need to be the propositional calculus. Because it's generally harder for machines to find proofs in an intensional logic than in straight first-order logic, it is often expedient to encode down the former to the latter. For example, propositional modal logic can be encoded in multi-sorted logic (a variant of FOL); see (Arkoudas & Bringsjord 2005).
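
Here is an illustrative sketch of encoding down over a three-element domain, with an invented predicate and axiom set; the brute-force enumeration stands in for the far more sophisticated propositional reasoners alluded to above.

```python
# Sketch of "encoding down": over the finite domain {a, b, c}, the
# first-order assertion that everything has F becomes the propositional
# conjunction Fa and Fb and Fc, which truth-table enumeration can decide.

from itertools import product

domain = ["a", "b", "c"]
atoms = [f"F{obj}" for obj in domain]   # Fa, Fb, Fc

def forall_F(v):
    """Propositional encoding of 'for all x, F(x)' over the finite domain."""
    return all(v[atom] for atom in atoms)

def axioms(v):
    """Example axiom set: Fa and Fb are asserted, Fc is left open."""
    return v["Fa"] and v["Fb"]

# Decide whether the axioms entail the encoded formula by enumeration:
entailed = all(forall_F(v)
               for vals in product([True, False], repeat=len(atoms))
               for v in [dict(zip(atoms, vals))]
               if axioms(v))
print(entailed)   # False: Fc is not forced by these axioms
```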

It's tempting to define non-logicist AI by negation: an approach to building intelligent agents that rejects the distinguishing features of logicist AI. Such a shortcut would imply that the agents engineered by non-logicist AI researchers and developers, whatever the virtues of such agents might be, cannot be said to know that φ -- for the simple reason that, by negation, the non-logicist paradigm would have not even a single declarative proposition that is a candidate for φ. However, this isn't a particularly enlightening way to define non-symbolic AI. A more productive approach is to say that non-symbolic AI is AI carried out on the basis of particular formalisms other than logical systems, and to then enumerate those formalisms. It will turn out, of course, that these formalisms fail to include knowledge in the normal sense. (In philosophy, as is well-known, the normal sense is one according to which if p is known, p is a declarative statement.)

From the standpoint of formalisms other than logical systems, non-logicist AI can be partitioned into symbolic but non-logicist approaches, and connectionist/neurocomputational approaches. (AI carried out on the basis of symbolic, declarative structures that, for readability and ease of use, are not treated directly by researchers as elements of formal logics, does not count. In this category fall traditional semantic networks, Schank's (1972) conceptual dependency scheme, and other schemes.) The former approaches, today, are probabilistic, and are based on the formalisms (Bayesian networks) covered above. The latter approaches are based, as we have noted, on formalisms that can be broadly termed neurocomputational. Given our space constraints, only one of the formalisms in this category is described here (and briefly at that): the aforementioned artificial neural networks.[14]

Neural nets are composed of units or nodes designed to represent neurons, which are connected by links (loosely modeling the connections between neurons), each of which has a numeric weight.

It is usually assumed that some of the units work in symbiosis with the external environment; these units form the sets of input and output units. Each unit has a current activation level, which is its output, and can compute, based on its inputs and the weights on those inputs, its activation level at the next moment in time. This computation is entirely local: a unit takes account of only its neighbors in the net. The local computation proceeds in two stages. First, the input function, in_i, gives the weighted sum of the unit's input values, that is, the sum of the input activations multiplied by their weights:

in_i = Σ_j w_j,i a_j

Second, an activation function g takes this weighted sum and converts it into the unit's activation level, a_i = g(in_i); g is typically either a hard threshold or a smooth, sigmoid-like function.
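
The two-stage computation can be sketched for a single unit as follows, with invented weights and inputs, and a simple step function standing in for g.

```python
# Minimal sketch of the local computation for one unit: a weighted sum of
# input activations followed by an activation function. All values here are
# illustrative assumptions.

def unit_activation(inputs, weights, threshold=0.5):
    """Stage 1: in_i = sum of w_j,i * a_j.  Stage 2: a_i = g(in_i)."""
    in_i = sum(w * a for w, a in zip(weights, inputs))
    return 1.0 if in_i >= threshold else 0.0   # g: a simple step function

print(unit_activation(inputs=[1.0, 0.0, 1.0], weights=[0.4, 0.9, 0.2]))  # 1.0
```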

As you might imagine, there are many different kinds of neural networks. The main distinction is between feed-forward and recurrent networks. In feed-forward networks, as their name suggests, links move information in one direction and there are no cycles; recurrent networks allow for cycling back, and can become rather complicated. In general, though, it now seems safe to say that neural networks face a basic trade-off: when they are simple (say, single-layered), efficient learning algorithms are available, but when they are multi-layered, and thus expressive enough to represent non-linear functions, they are very hard to train.

Perhaps the best technique for teaching students about neural networks in the context of other statistical learning formalisms and methods is to focus on a specific problem, preferably one that seems unnatural to tackle using logicist techniques. The task is then to seek to engineer a solution to the problem, using any and all techniques available. One nice problem is handwriting recognition (which also happens to have a rich philosophical dimension; see e.g. Hofstadter & McGraw 1995). For example, consider the problem of assigning, given as input a handwritten digit d, the correct label, 0 through 9. Because there is a database of 60,000 labeled digits available to researchers (from the National Institute of Standards and Technology), this problem has evolved into a benchmark problem for comparing learning algorithms. It turns out that kernel machines currently reign as the best approach to the problem -- despite the fact that, unlike neural networks, they require hardly any prior iteration. A nice summary of fairly recent results in this competition can be found in Chapter 20 of AIMA.

Readers interested in AI (and computational cognitive science) pursued from an overtly brain-based orientation are encouraged to explore the work of Rick Granger (2004a, 2004b) and researchers in his Brain Engineering Laboratory and W.H. Neukom Institute for Computational Sciences. The contrast between the dry, logicist AI started at the original 1956 conference, and the approach taken here by Granger and associates (in which brain circuitry is directly modeled) is remarkable.

What, though, about deep, theoretical integration of the main paradigms in AI? Such integration is at present only a possibility for the future, but readers are directed to the research of some striving for such integration. For example: Sun (1994, 2002) has been working to demonstrate that human cognition that is on its face symbolic in nature (e.g., professional philosophizing in the analytic tradition, which deals explicitly with arguments and definitions carefully symbolized) can arise from cognition that is neurocomputational in nature. Koller (1997) has investigated the marriage between probability theory and logic. And, in general, the very recent arrival of so-called human-level AI is being led by theorists seeking to genuinely integrate the three paradigms set out above (e.g., Cassimatis 2006).

Notice that the heading for this section isn't Philosophy of AI. We'll get to that category momentarily. Philosophical AI is AI, not philosophy; but it's AI rooted in, and flowing from, philosophy. Before we ostensively characterize Philosophical AI courtesy of a particular research program, let us consider the view that AI is in fact simply philosophy, or a part thereof.

Daniel Dennett (1979) has famously claimed not just that there are parts of AI intimately bound up with philosophy, but that AI is philosophy (and psychology, at least of the cognitive sort). (He has made a parallel claim about Artificial Life (Dennett 1998).) This view will turn out to be incorrect, but the reasons why it's wrong will prove illuminating, and our discussion will pave the way for a discussion of Philosophical AI.

What does Dennett say, exactly? This:

Elsewhere he says his view is that AI should be viewed as "a most abstract inquiry into the possibility of intelligence or knowledge" (Dennett 1979, 64).

Read more from the original source:

Artificial Intelligence - Minds & Machines Home

Artificial Intelligence News & Articles – IEEE Spectrum


Georgia Tech researchers want to build humanity and personality into human-robot dialogues (29 Oct)

Can computers be creative? (23 Oct)


Watson goes west looking to make some new friends. It can start in its neighborhood (9 Oct)

The former DARPA program manager discusses what he's going to do next (11 Sep)

First step is a $50 million collaboration with MIT and Stanford, led by ex-DARPA program manager Gill Pratt (4 Sep)

Japanese researchers show that children can act like horrible little brats towards robots (6 Aug)

If autonomous weapons are capable of reducing casualties, there may exist a moral imperative for their use (5 Aug)

Autonomous weapons could lead to low-cost micro-robots that can be deployed to anonymously kill thousands. That's just one reason why they should be banned (3 Aug)

What we really need is a way of making autonomous armed robots ethical, because we're not going to be able to prevent them from existing (29 Jul)

Physical robots mutate and crossbreed to evolve towards the most efficient mobility genome (21 Jul)

Select research aimed at keeping AI from destroying humanity has received millions from the Silicon Valley pioneer (1 Jul)

The mathematician and cryptanalyst explained his famous test of computer intelligence during two BBC radio broadcasts in the early 1950s (30 Jun)

It's time to have a global conversation about how AI should be developed (17 Jun)


A deep learning system works 60 times faster than previous methods (28 May)

Computer scientists take valuable lessons from a human vs. AI competition of no-limit Texas hold'em (13 May)

A fleet of little robot submarines is learning to cooperatively perform tasks underwater (5 May)

Your robot butler is now closer than ever (21 Apr)

Google's patent for generating robot personalities from cloud data is superfluous, and could make it more difficult for social robotics companies to innovate (8 Apr)

What happens when a computer vision guy thinks someone is trying to rob him? He uses autonomous vehicle technology to watch his house (1 Apr)

What will we call driving, when we no longer drive? (18 Mar)

With a new robot in the works called Tega, MIT's Personal Robots Group wants to get social robots out into the world (16 Mar)

Researchers have proposed a Visual Turing Test in which computers would answer increasingly complex questions about a scene (10 Mar)

The AI expert says autonomous robots can help us with tasks and decisions but they need not do everything (6 Mar)

A project to train remote workers to teleoperate robot servants looked promising. So why was it abandoned? (4 Mar)

A quantum computing team hired by Google has built the first system capable of correcting its own errors (4 Mar)

Deep learning artificial intelligence that plays Space Invaders could inspire better search, translation, and mobile apps (25 Feb)

Making computers unbeatable at Texas Hold 'em could lead to big breakthroughs in artificial intelligence (25 Feb)

A new breed of AI could be smart enough to adapt to any set of rules in games and real life (23 Feb)

IBM's supercomputer unleashes an army of cuddly green dinosaurs with the intelligence of the cloud (20 Feb)

The Deep Learning expert explains how convolutional nets work, why Facebook needs AI, what he dislikes about the Singularity, and more (18 Feb)

Google engineers explain the technology behind their autonomous vehicle and show videos of the road tests (18 Oct 2011)

Big-data boondoggles and brain-inspired chips are just two of the things we're really getting wrong (20 Oct 2014)

See the original post:

Artificial Intelligence News & Articles - IEEE Spectrum

Artificial Intelligence Definition – Tech Terms


Artificial Intelligence, or AI, is the ability of a computer to act like a human being. It has several applications, including software simulations and robotics. However, artificial intelligence is most commonly used in video games, where the computer is made to act as another player.

Nearly all video games include some level of artificial intelligence. The most basic type of AI produces characters that move in standard formations and perform predictable actions. More advanced artificial intelligence enables computer characters to act unpredictably and make different decisions based on a player's actions. For example, in a first-person shooter (FPS), an AI opponent may hide behind a wall while the player is facing him. When the player turns away, the AI opponent may attack. In modern video games, multiple AI opponents can even work together, making the gameplay even more challenging.

Artificial intelligence is used in a wide range of video games, including board games, side-scrollers, and 3D action games. AI also plays a large role in sports games, such as football, soccer, and basketball games. Since the competition is only as good as the computer's artificial intelligence, the AI is a crucial aspect of a game's playability. Games that lack a sophisticated and dynamic AI are easy to beat and therefore are less fun to play. If the artificial intelligence is too good, a game might be impossible to beat, which would be discouraging for players. Therefore, video game developers often spend a long time creating the perfect balance of artificial intelligence to make the games both challenging and fun to play. Most games also include different difficulty levels, such as Easy, Medium, and Hard, which allows players to select an appropriate level of artificial intelligence to play against.

Updated: December 1, 2010

http://techterms.com/definition/artificial_intelligence

Go here to read the rest:

Artificial Intelligence Definition - Tech Terms

Urban Dictionary: artificial intelligence

Natural blonde who dyed his/her hair in a dark(er) color.

That girl went to a hairdresser to get some artificial intelligence.

A form of life which can think, decide, and have "feelings" that was created by another species.

"The 'matrix' is an artificial intelligence"

artifician-intelligence: This idea of AI was practiced when I was 4 or 5 years old, in 1953, when the concept of AI had not even been thought of, since the computer was in its infancy. AI in fact mimics the experience of human beings by putting human knowledge and experiences into a computer, so that nurses can access a doctor's mind via computer when prescribing something as simple as paracetamol as a painkiller for flu, for instance. What I did as a 5-year-old was this: I got fed up walking from my neighbour's house 100 yards away, so one day I decided to walk with my eyes closed, using the experiences recorded in my memory.

Artificial-intelligence: Nurses in hospitals uses AI

Is in fact an extended degree of responsiveness on the part of a machine to fulfill its purpose. It is a toaster that notes that the toast is darkening very quickly but the setting is on light brown. It is a car's service light coming on because an unexpected condition that might otherwise be missed by the owner could very well cost him thousands in repairs.

This new line of car has an artificial intelligence built in designed to anticipate how a passenger could be harmed and counter in such a way as to make nearly any accident on the road not just survivable but nearly harmless to the passengers.

A new found intelligence possessed by somebody with a smartphone who can now do a Wikipedia lookup and then spout the Wiki info.

Now that Idiot Jim has a smart phone he has Artificial Intelligence.

Read the original post:

Urban Dictionary: artificial intelligence

What is Psoriasis? STELARA (ustekinumab)

STELARA is a prescription medicine approved to treat adults 18 years and older with moderate or severe plaque psoriasis that involves large areas or many areas of their body, who may benefit from taking injections or pills (systemic therapy) or phototherapy (treatment using ultraviolet light alone or with pills).

STELARA is a prescription medicine approved to treat adults 18 years and older with active psoriatic arthritis, either alone or with methotrexate.

STELARA is a 45 mg or 90 mg injection given under the skin as directed by your doctor at weeks 0, 4, and every 12 weeks thereafter. It is administered by a healthcare provider or self-injected only after proper training.

STELARA can make you more likely to get infections or make an infection that you have worse. People who have a genetic problem where the body does not make any of the proteins interleukin 12 (IL-12) and interleukin 23 (IL-23) (proteins that increase the growth and function of white blood cells, which are found in your immune system) are at a higher risk for certain serious infections that can spread throughout the body and cause death. It is not known if people who take STELARA will get any of these infections because of the effects of STELARA on these proteins.

Cancers

STELARA may decrease the activity of your immune system (a system inside the body that protects against germs and infections) and increase your risk for certain types of cancer. Tell your doctor if you have ever had any type of cancer. Some people who had risk factors for skin cancer developed certain types of skin cancers while receiving STELARA. Tell your doctor if you have any new skin growths.

Reversible posterior leukoencephalopathy syndrome (RPLS)

RPLS is a rare condition that affects the brain and can cause death. The cause of RPLS is not known. If RPLS is found early and treated, most people recover. Tell your doctor right away if you have any new or worsening medical problems including: headache, seizures, confusion, and vision problems.

Serious Allergic Reactions

Serious allergic reactions can occur. Get medical help right away if you have any symptoms such as: feeling faint, swelling of your face, eyelids, tongue, or throat, trouble breathing, throat or chest tightness, or skin rash.

Before receiving STELARA, tell your doctor if you:

When prescribed STELARA:

You are encouraged to report negative side effects of prescription drugs to the FDA. Visit http://www.fda.gov/medwatch or call 1-800-FDA-1088.

Please read the Full Prescribing Information, including the Medication Guide for STELARA, and discuss any questions you have with your doctor.



Excerpt from:

What is Psoriasis? STELARA (ustekinumab)

Psoriasis Condition Center – Health.com



By Maureen Salamon, HealthDay Reporter. FRIDAY, Oct. 9, 2015 (HealthDay News) Psoriasis and cold sores top the list of stigmatized skin conditions, a new survey indicates, but experts say much of the ill will directed at sufferers is misguided. Surveying 56 people, Boston researchers found that nearly 61 percent wrongly thought psoriasis which produces widespread, scaly [...]

By Steven Reinberg, HealthDay Reporter. THURSDAY, Oct. 8, 2015 (HealthDay News) The skin disorder psoriasis appears linked with artery inflammation, raising the odds for heart disease, a new study says. As the amount of psoriasis increases, the amount of blood vessel inflammation increases, said senior investigator Dr. Nehal Mehta, a clinical investigator with the U.S. National Heart, [...]

Regardless of severity, patients with the often disfiguring skin condition psoriasis face an elevated risk for depression, new research suggests.

By Steven Reinberg, HealthDay Reporter. WEDNESDAY, Sept. 30, 2015 (HealthDay News) Two experimental drugs show promise in treating psoriasis and a related condition, psoriatic arthritis, new studies report. The drugs, brodalumab and secukinumab (Cosentyx), represent a new approach to treatment, said Michael Siegel, director of research programs at the National Psoriasis Foundation. These studies show how targeting [...]

People with psoriasis may be twice as likely to experience depression as those without the common skin condition, regardless of its severity, a new study suggests.

More:

Psoriasis Condition Center - Health.com

Psoriasis: MedlinePlus enciclopedia médica (medical encyclopedia)

Menter A, Gottlieb A, Feldman SR, Voorhees ASV, Leonardi CL, Gordon KB, et al. Guidelines for the management of psoriasis and psoriatic arthritis. Section 1. Overview of psoriasis and guidelines of care for the treatment of psoriasis with biologics.Menter A, Gottlieb A, Feldman SR, Voorhees ASV, Leonardi CL, Gordon KB, et al. Guidelines for the management of psoriasis and psoriatic arthritis. Section 1. Overview of psoriasis and guidelines of care for the treatment of psoriasis with biologics. J Am Acad Dermatol. 2008;5:826-850.

Menter A, Korman NJ, Elmets CA, Feldman SR, Gelfand JM, Gordon KB, et al. American Academy of Dermatology guidelines of care for the management of psoriasis and psoriatic arthritis. Section 3. Guidelines of care for the management and treatment of psoriasis with topical therapies.Menter A, Korman NJ, Elmets CA, Feldman SR, Gelfand JM, Gordon KB, et al. American Academy of Dermatology guidelines of care for the management of psoriasis and psoriatic arthritis. Section 3. Guidelines of care for the management and treatment of psoriasis with topical therapies. J Am Acad Dermatol. 2009;60:643-659.

Menter A, Korman NJ, Elments CA, et al. Guidelines of care for the management of psoriasis and psoriatic arthritis.Section 5. Guidelines of care for the treatment of psoriasis with phototherapy and photochemotherapy.Menter A, Korman NJ, Elments CA, et al. Guidelines of care for the management of psoriasis and psoriatic arthritis.Section 5. Guidelines of care for the treatment of psoriasis with phototherapy and photochemotherapy. J Am Acad Dermatol. 2011;1:114-135. Available at: http://www.ncbi.nlm.nih.gov/pubmed/19811850

Psoriasis. Alvero R, Ferri FF, Fort GG, et al, eds. In: Ferri's Clinical Advisor 2015. 1st ed. Philadelphia, PA: Elsevier Mosby; 2014: section I.

Stern RS. Psoralen and ultraviolet A light therapy for psoriasis. N Engl J Med. 2007;357(7):682-690. Available at: http://www.ncbi.nlm.nih.gov/pubmed/17699818

Weigle N, McBane S. Psoriasis. Am Fam Physician. 2013;87(9):626-633.

Read the original post:

Psoriasis: MedlinePlus enciclopedia médica

Psoriasis. DermNet NZ

Psoriasis is a chronic inflammatory skin condition characterised by clearly defined, red and scaly plaques (thickened skin). It is classified into several subtypes.

Psoriasis affects 2-4% of males and females. It can start at any age including childhood, with peaks of onset at 15-25 years and 50-60 years. It tends to persist lifelong, fluctuating in extent and severity. It is particularly common in Caucasians, but may affect people of any race. About one third of patients with psoriasis have family members with psoriasis.

Psoriasis is multifactorial. It is classified as an immune-mediated inflammatory disease (IMID).

Genetic factors are important. An individual's genetic profile influences their type of psoriasis and its response to treatment.

Genome-wide association studies report that HLA-Cw6 is associated with early onset psoriasis and guttate psoriasis. This major histocompatibility complex is not associated with arthritis, nail dystrophy or late onset psoriasis.

Theories about the causes of psoriasis need to explain why the skin is red, inflamed and thickened. It is clear that immune factors and inflammatory cytokines (messenger proteins) such as IL-1 and TNF are responsible for the clinical features of psoriasis. Current theories are exploring the Th17 pathway and release of the cytokine IL-17A.

Psoriasis usually presents with symmetrically distributed, red, scaly plaques with well-defined edges. The scale is typically silvery white, except in skin folds where the plaques often appear shiny and they may have a moist peeling surface. The most common sites are scalp, elbows and knees, but any part of the skin can be involved. The plaques are usually very persistent without treatment.

Itch is mostly mild but may be severe in some patients, leading to scratching and lichenification (thickened leathery skin with increased skin markings). Painful skin cracks or fissures may occur.

When psoriatic plaques clear up, they may leave brown or pale marks that can be expected to fade over several months.

Certain features of psoriasis can be categorised to help determine appropriate investigations and treatment pathways. Overlap may occur.

Generalised pustulosis and localised palmoplantar pustulosis are no longer classified within the psoriasis spectrum.

Patients with psoriasis are more likely than other people to have other health conditions listed here.

Psoriasis is diagnosed by its clinical features. If necessary, diagnosis is supported by typical skin biopsy findings.

Medical assessment entails a careful history, examination, questioning about effect of psoriasis on daily life, and evaluation of comorbid factors.

Validated tools used to evaluate psoriasis include:

The severity of psoriasis is classified as mild in 60% of patients, moderate in 30% and severe in 10%.

Evaluation of comorbidities may include:

Patients with psoriasis should ensure they are well informed about their skin condition and its treatment. There are benefits from not smoking, avoiding excessive alcohol and maintaining optimal weight.

Mild psoriasis is generally treated with topical agents alone. Which treatment is selected may depend on body site, extent and severity of the psoriasis.

Most psoriasis centres offer phototherapy with ultraviolet (UV) radiation, often in combination with topical or systemic agents. Types of phototherapy include

Moderate to severe psoriasis warrants treatment with a systemic agent and/or phototherapy. The most common treatments are:

Other medicines occasionally used for psoriasis include:

Systemic corticosteroids are best avoided due to risk of severe withdrawal flare of psoriasis and adverse effects.

Biologics or targeted therapies are reserved for conventional treatment-resistant severe psoriasis, mainly because of expense, as side effects compare favourably with other systemic agents. These include:

See the DermNet NZ bookstore

Author: Hon A/Prof Amanda Oakley, Hamilton, New Zealand. Revised and updated, August 2014.

Read the original:

Psoriasis. DermNet NZ

Psoriasis: Healthwise Medical Information on eMedicineHealth

Psoriasis (say "suh-RY-uh-sus") is a long-term (chronic) skin problem that causes skin cells to grow too quickly, resulting in thick, white, silvery, or red patches of skin.

Normally, skin cells grow gradually and flake off about every 4 weeks. New skin cells grow to replace the outer layers of the skin as they shed.

But in psoriasis, new skin cells move rapidly to the surface of the skin in days rather than weeks. They build up and form thick patches called plaques (say "plax"). The patches range in size from small to large. They most often appear on the knees, elbows, scalp, hands, feet, or lower back. Psoriasis is most common in adults. But children and teens can get it too.

Having psoriasis can be embarrassing, and many people, especially teens, avoid swimming and other situations where patches can show. But there are many types of treatment that can help keep psoriasis under control.

Experts believe that psoriasis occurs when the immune system overreacts, causing inflammation and flaking of skin. In some cases, psoriasis runs in families.

People with psoriasis often notice times when their skin gets worse. Things that can cause these flare-ups include a cold and dry climate, infections, stress, dry skin, and taking certain medicines.

Psoriasis isn't contagious. It can't be spread by touch from person to person.

Symptoms of psoriasis appear in different ways. Psoriasis can be mild, with small areas of rash. When psoriasis is moderate or severe, the skin gets inflamed with raised red areas topped with loose, silvery, scaling skin. If psoriasis is severe, the skin becomes itchy and tender. And sometimes large patches form and may be uncomfortable. The patches can join together and cover large areas of skin, such as the entire back.

In some people, psoriasis causes joints to become swollen, tender, and painful. This is called psoriatic arthritis (say "sor-ee-AT-ik ar-THRY-tus"). This arthritis can also affect the fingernails and toenails, causing the nails to pit, change color, and separate from the nail bed. Dead skin may build up under the nails.

Symptoms often disappear (go into remission), even without treatment, and then return (flare up).

A doctor can usually diagnose psoriasis by looking at the patches on your skin, scalp, or nails. Special tests aren't usually needed.

Most cases of psoriasis are mild, and treatment begins with skin care. This includes keeping your skin moist with creams and lotions. These are often used with other treatments including shampoos, ultraviolet light, and medicines your doctor prescribes.

In some cases, psoriasis can be hard to treat. You may need to try different combinations of treatments to find what works for you. Treatment for psoriasis may continue for a lifetime.

Skin care at home can help control psoriasis. Follow these tips to care for psoriasis:

It's also important to avoid those things that can cause psoriasis symptoms to flare up or make the condition worse. Things to avoid include:

Studies have not found that specific diets can cure or improve the condition, even though some advertisements claim to. For some people, not eating certain foods helps their psoriasis. Most doctors recommend that you eat a balanced diet to be healthy and stay at a healthy weight.

Continued here:

Psoriasis: Healthwise Medical Information on eMedicineHealth

Psoriasis – Symptoms, Causes, Treatments – Healthgrades

Psoriasis is a chronic skin disorder marked by raised areas of thickened skin and lesions made up of dead skin cells. Psoriasis results from an abnormal process in which new skin cells are made faster than old skin cells are cast off. Psoriasis is linked to an abnormal response of the immune system that causes inflammation. Psoriasis is not contagious.

Symptoms of psoriasis occur in outbreaks and include itchy, red or pink patches of thickened skin that are covered with whitish scales. Psoriasis most often affects the knees, elbows, lower back, and scalp.

There currently is no cure for psoriasis, but the condition can be controlled to minimize outbreaks with an individualized treatment plan that includes lifestyle changes and medications.

Complications of psoriasis can be serious. Complications include psoriatic arthritis and a secondary bacterial infection or fungal infection of the psoriasis rash. Psoriasis is also associated with atherosclerosis, diabetes, and inflammatory bowel disease. Seek prompt medical care if you have symptoms of psoriasis. Early diagnosis and treatment can help reduce the risk for complications of psoriasis and associated conditions.

See the article here:

Psoriasis - Symptoms, Causes, Treatments - Healthgrades

CDC – Psoriasis Home Page – Psoriasis

What is psoriasis?

Psoriasis is a chronic autoimmune skin disease that speeds up the growth cycle of skin cells.

Psoriasis causes patches of thick red skin and silvery scales. Patches are typically found on the elbows, knees, scalp, lower back, face, palms, and soles of feet, but can affect other places (fingernails, toenails, and mouth). The most common type of psoriasis is called plaque psoriasis. Psoriatic arthritis is an inflammatory type of arthritis that eventually occurs in 10% to 20% of people with psoriasis. It is different from more common types of arthritis (such as osteoarthritis or rheumatoid arthritis) and is thought to be related to the underlying problem of psoriasis. Psoriasis and psoriatic arthritis are sometimes considered together as psoriatic disease.

Anyone can get psoriasis. It occurs mostly in adults, but children can also get it. Men and women seem to have equal risk.

Psoriasis is not contagious. This means you cannot get psoriasis from contact (e.g., touching skin patches) with someone who has it.

Psoriasis is an autoimmune disease, meaning that part of the body's own immune system becomes overactive and attacks normal tissues in the body.

Psoriasis often has a typical appearance that a primary care doctor can recognize, but it can be confused with other skin diseases (like eczema), so a dermatologist (skin doctor) is often the best doctor to diagnose it. The treatment of psoriasis usually depends on how much skin is affected, how bad the disease is (e.g., having many or painful skin patches), or the location (especially the face). Treatments range from creams and ointments applied to the affected areas to ultraviolet light therapy to drugs (such as methotrexate). Many people who have psoriasis also have serious health conditions such as diabetes, heart disease, and depression.

Psoriatic arthritis has many of the same symptoms as other types of arthritis, so a rheumatologist (arthritis doctor) is often the best doctor to diagnose it. The treatment of psoriatic arthritis usually involves the use of drugs (such as methotrexate).

Psoriatic disease (when a person has psoriasis or psoriatic arthritis) may be treated with drugs (such as methotrexate) or a combination of drugs and creams or ointments.

Efforts to address psoriasis and psoriatic arthritis have typically focused on studying and treating individual patients and on clinical and biomedical research. In 2010, CDC worked with experts in psoriasis, psoriatic arthritis, and public health to develop a public health perspective that considers how these conditions affect the entire population. The resulting report is Developing and Addressing the Public Health Agenda for Psoriasis and Psoriatic Arthritis (Agenda)[PDF - 380.44KB]. You can read a short article about the agenda in The American Journal of Preventive Medicine.

CDC's National Health and Nutrition Examination Survey (NHANES) has also included questions about psoriasis to learn more about psoriasis in the United States, which can help in public health research, especially in providing national estimates of how many people have psoriasis (prevalence).

What are other sources of information on psoriasis and psoriatic arthritis?

Continue reading here:

CDC - Psoriasis Home Page - Psoriasis

Raspberry Pi Supercomputer Guide Steps

Return to http://www.soton.ac.uk/~sjc/raspberrypi

View video at: http://www.youtube.com/watch?v=Jq5nrHz9I94

Prof Simon Cox

Computational Engineering and Design Research Group

Faculty of Engineering and the Environment

University of Southampton, SO17 1BJ, UK.

V0.2: 8th September 2012

V0.3: 30th November 2012 [Updated with less direct linking to MPICH2 downloads]

V0.4: 9th January 2013 [Updated step 33]

First steps to get machine up

1. Get image from

http://www.raspberrypi.org/downloads

I originally used: 2012-08-16-wheezy-raspbian.zip

Updated 30/11/12: 2012-10-28-wheezy-raspbian.zip

My advice is to check the downloads page on raspberrypi.org and use the latest version.

2. Use Win32 Disk Imager to put the image onto an SD card (or, on a Mac, use e.g. Disk Utility or dd)

http://www.softpedia.com/get/CD-DVD-Tools/Data-CD-DVD-Burning/Win32-Disk-Imager.shtml

You will use the Write option to put the image from the disk to your card
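
For the Mac route, something along these lines should work (this is not from the original guide; the disk number 2 is only an example, so check diskutil list first, and the .img name assumes the zip above has been unpacked to a file of the same name):

$ diskutil list

$ diskutil unmountDisk /dev/disk2

$ sudo dd if=2012-10-28-wheezy-raspbian.img of=/dev/rdisk2 bs=1m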

3. Boot on Pi

4. Expand image to fill card using the option on screen when you first boot. If you don't do this on first boot, then you need to use

$ sudo raspi-config

http://elinux.org/RPi_raspi-config

5. Log in and change the password

http://www.simonthepiman.com/beginners_guide_change_my_default_password.php

$ passwd

6. Log out and check that you typed it all OK (!)

$ exit

7. Log back in again with your new password

Building MPI so we can run code on multiple nodes

8. Refresh your list of packages in your cache

$ sudo apt-get update

9. I do this out of habit; note that it does no more than refresh the list of packages (an actual upgrade would be via sudo apt-get upgrade).

10. Get Fortran. After all, what is scientific programming without Fortran being a possibility?

$ sudo apt-get install gfortran

11. Read about MPI on the Pi. This is an excellent post to read just to show you are going to make it by the end, but don't type or get anything just yet; we are going to build everything ourselves:

http://westcoastlabs.blogspot.co.uk/2012/06/parallel-processing-on-pi-bramble.html

Note: there are a few things to be aware of here

a) Since we put Fortran in, we are good to go without excluding anything

b) The packages here are for armel and we need armhf in this case, so we are going to build MPI ourselves

12. Read a bit more before you begin:

http://www.mcs.anl.gov/research/projects/mpich2/documentation/files/mpich2-1.4.1-installguide.pdf

Note: As the version of MPICH2 updates, you are better off going to:

Guides

and getting the latest installer's guide.

We are going to follow the steps from 2.2 (from the Quick Start Section) in the guide.

13. Make a directory to put the sources in

$ mkdir /home/pi/mpich2

$ cd ~/mpich2

14. Get MPI sources from Argonne.

$ wget http://www.mcs.anl.gov/research/projects/mpich2/downloads/tarballs/1.4.1p1/mpich2-1.4.1p1.tar.gz

[Note that as the MPI source updates, you can navigate to:

http://www.mpich.org/downloads/ to get the latest stable release version for MPICH2]

15. Unpack them.

$ tar xfz mpich2-1.4.1p1.tar.gz

[Note: You will need to update this as the version of MPICH2 increments]

16. Make yourself a place to put the compiled stuff; this will also make it easier to figure out what you have newly put on your system. Also, you may end up building this a few times.

$ sudo mkdir /home/rpimpi/

$ sudo mkdir /home/rpimpi/mpich2-install

[I just chose "rpimpi" to replace the username used in the Argonne guide, and I did the directory creation in two steps]

17. Make a build directory (so we keep the source directory clean of build things)

$ mkdir /home/pi/mpich_build

18. Change to the BUILD directory

$ cd /home/pi/mpich_build

19. Now we are going to configure the build

$ sudo /home/pi/mpich2/mpich2-1.4.1p1/configure --prefix=/home/rpimpi/mpich2-install

[Note: You will need to update this as the version of MPICH2 increments]

Make a cup of tea

20. Make the files

$ sudo make

Make another cup of tea

21. Install the files

$ sudo make install

Make another cup of tea; it will finish.

22. Add the place that you put the install to your path

$ export PATH=$PATH:/home/rpimpi/mpich2-install/bin

Note: to permanently put this on the path, you will need to edit .profile

$ nano ~/.profile

and add at the bottom these two lines:

# Add MPI to path

PATH="$PATH:/home/rpimpi/mpich2-install/bin"

23. Check whether things did install or not

$ which mpicc

$ which mpiexec
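
If the build and install worked, both commands should print paths under the install prefix chosen earlier, something like:

/home/rpimpi/mpich2-install/bin/mpicc

/home/rpimpi/mpich2-install/bin/mpiexec

If nothing comes back, the PATH change from step 22 has probably not taken effect in the current shell.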

24. Change directory back to home and create somewhere to do your tests

$ cd ~

$ mkdir mpi_testing

$ cd mpi_testing

25. Now we can test whether MPI works for you on a single node

$ mpiexec -f machinefile -n <number> hostname

where machinefile contains a list of IP addresses (in this case just one) for the machines

a) Get your IP address

$ ifconfig

b) Put this into a single file called machinefile

26. $ nano machinefile

Add this line:

192.168.1.161

[or whatever your IP address was]
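
As an optional extra check before going further (not part of the original guide; the file name hello_mpi.c and its contents are just an illustrative sketch), you can confirm that compiling against MPI works too. Create a small C program

$ nano hello_mpi.c

and add these lines:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, name_len;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);                  /* start up MPI */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* rank of this process */
    MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total number of processes */
    MPI_Get_processor_name(name, &name_len); /* host this process is running on */

    printf("Hello from rank %d of %d on %s\n", rank, size, name);

    MPI_Finalize();
    return 0;
}

Then compile and run it against the same machinefile (the -n value is up to you):

$ mpicc hello_mpi.c -o hello_mpi

$ mpiexec -f machinefile -n 2 ./hello_mpi

Each process should print its rank, the total process count and the host it ran on.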

27. If you use

Follow this link:

Raspberry Pi Supercomputer Guide Steps

Arthritic Dogs Healed With New Stem Cell Therapy – ABC News

A couple of years ago, Brad Perry's dogs started having joint problems. Cowboy, the golden retriever, developed a severe case of arthritis, while Mr. Jones, the mutt, tore the ligaments in both of his knees during some overenthusiastic play.

"It was so sad. They wouldn't even come to the door to greet me they were in so much pain. It just broke my heart," recalled Perry, a tractor-trailer driver from Alexandria, Ky.

Perry gave the dogs all sorts of medications, but nothing worked, and he knew such medications could result in kidney and liver damage. The dogs' suffering became so great, Perry considered putting the pets down. But late last year he heard about a veterinarian in his area who performed stem cell therapy on dogs to regenerate and repair their joints and figured it was worth a try.

Cowboy underwent the procedure first. Mr. Jones followed a few months later. Perry said that within 10 days of receiving treatment the dogs were like puppies again, chasing his kids, running around in the park and swimming in the lake.

The treatment Perry's dogs received was developed by MediVet America of Lexington, Ky., one of several companies that sell equipment and training to veterinary clinics around the world. MediVet has more than a thousand clinics. Participating vets have performed more than 10,000 stem cell procedures, about 7,000 of them in the past 12 months.

An operation like the one Cowboy and Mr. Jones underwent takes several hours. To start, the vet harvests a few tablespoons of fat cells from the pet's abdomen or shoulder, then spins the cells in a centrifuge to separate out the stem cells that are naturally present in fat. Next, the cells are mixed with special enzymes to "digest" any residual fat and connective tissue, and are then "activated" by mixing them with "plasma rich platelets" extracted from the animal's blood cells. The mixture is stimulated under a LED light for 20 minutes or so to further concentrate the stem cells. Finally, the newly awakened cells are injected back into the damaged joint.

Jeremy Delk, MediVet's chief executive officer, said that the therapy works because stem cells are the only cells in the body that have the ability to transform themselves into other types of specialized cells -- such as cartilage -- making them a potent tool for repairing damaged and deteriorating joints. There are 50 to 1,000 times more stem cells in fat than in bone marrow, a source that was more consistently used in animal and human stem cell therapy until the fat method started becoming more popular.

"As we age, humans and animals alike, our stem cells are starting to die off so we have fewer. What we are able to do with these techniques is isolate the cells in very large numbers, wake them up and put them back into the area that needs help," he explained.

While still largely unavailable to their owners, stem cell therapy from fat cells has been offered to our furry friends for several years. With fewer regulatory hoops to jump through in veterinary medicine and no contentious religious debates, experimental procedures are often tested and perfected on animals decades before they're green-lighted for use on humans.

One of the things veterinarians and owners alike praise about the procedure is it can be completed in one day, and all at the vet's office. Stem cells can also be banked for future injection so the animal does not have to endure extraction again.

John Sector, the owner of Shelby St. Veterinarian Hospital in Florence, who performed the surgery on Cowboy and Mr. Jones, had high praise for the therapy.

"This is potentially a game changer. We're seeing incredible results in the joints. We also see some unexpected improvements in other things, like skin conditions," he said.

Stem cell therapy is not just for pets who curl up on couches or ride in the backseat either. Delk said horses, donkeys, zebras and lions are also regular stem cell patients. He and his team recently traveled to the Middle East to perform the therapy on some prized racing camels.

However, stem cell remedies, even for animals, are still considered experimental. Shila Nordone, the chief scientific officer at the AKC Canine Health Foundation, a nonprofit group that funds health research for dogs, said that its use for joint regenerative purposes is exciting, but that the lower regulatory bar in animal medicine is both good and bad.

"It's good because we can do things sooner for our patients without 10 years of expensive clinical trials, but bad because we are still in the process of establishing best practices to ensure the procedures are the safest and most effective possible," she said.

Studies funded by the Health Foundation and others have been promising. One study of more than 150 dogs found improvements in joint stiffness, mobility and other joint health indicators in nearly 95 percent of arthritic cases. In some patients, improvements were seen in as little as a week while others took up to 90 days and required multiple injections.

The cost of a single procedure is $1,800 to $3,000, depending on the area of the country, the species of animal and severity of joint damage. Even those with pet insurance can expect to pay out of pocket.

Owners like Perry believe it is worth every penny.

"They are completely different dogs. It absolutely changed their lives," he said of Cowboy and Mr. Jones. "It changed mine too -- I got my dogs back."

More:

Arthritic Dogs Healed With New Stem Cell Therapy - ABC News

Knee Stem Cell Therapy – Surgery & Replacement Alternative

Regenexx Knee Stem Cell Therapy for Injuries and Arthritis - Chris Centeno, 2015-10-23

The Regenexx family of non-surgical stem cell and blood platelet procedures offers next-generation injection treatments for those who are suffering from knee pain or may be facing knee surgery or knee replacement due to common tendon, ligament and bone injuries, arthritis and other degenerative conditions.

As an alternative to knee surgery or knee replacement, Regenexx procedures may help alleviate knee pain and the conditions that cause it with a same-day office injection procedure. Unlike traditional surgery, Regenexx patients are typically encouraged to walk the same day, and most patients experience very little down time from the procedure.

This is not a complete list of conditions treated, but the most common knee conditions we have treated throughout the years. If you are experiencing knee pain, injury, or arthritis, please contact us or complete the candidacy form below to learn more about whether the Regenexx Procedures are right for you.

This Regenexx-SD (same-day) bone marrow derived stem cell treatment outcome data analysis is part of the Regenexx data download of patients who were tracked in the Regenexx advanced patient registry.

This Regenexx-SD (same-day) bone marrow derived stem cell treatment outcome data analysis is part of the Regenexx data download of patients who were tracked in the Regenexx advanced patient registry following treatment for Meniscus Tears.

This data utilizes LEFS (Lower Extremity Functional Scale) data from our knee arthritis patients treated with stem cell injections. Functional questionnaires ask the patients questions such as how well they can walk, run, climb stairs, etc. The improvements following the Regenexx-SD procedure are highly statistically significant.

If you are considering a knee replacement, watch the video in the sidebar of this page and read about how stem cells stack up against knee replacements.

Centeno CJ. BioMed Research International. Volume 2014, Article ID 370621.

Introduction. We investigated the use of autologous bone marrow concentrate (BMC) with and without an adipose graft, for treatment of knee osteoarthritis (OA). Methods. Treatment registry data for patients who underwent BMC procedures with and without an adipose graft were analyzed. Pre- and posttreatment outcomes of interest included the lower extremity functional scale (LEFS), the numerical pain scale (NPS), and a subjective percentage improvement rating. Multivariate analyses were performed to examine the effects of treatment type adjusting for potential confounding factors. The frequency and type of adverse events (AE) were also examined. Results. 840 procedures were performed, 616 without and 224 with adipose graft. The mean LEFS score increased by 7.9 and 9.8 in the two groups (out of 80), respectively, and the mean NPS score decreased from 4 to 2.6 and from 4.3 to 3 in the two groups, respectively. AE rates were 6% and 8.9% in the two groups, respectively. Although pre- and posttreatment improvements were statistically significant, the differences between the groups were not. Conclusion. BMC injections for knee OA showed encouraging outcomes and a low rate of AEs. Addition of an adipose graft to the BMC did not provide a detectable benefit over BMC alone.

Two-time Super Bowl champ Jarvis Green's story. From a young boy struggling to get through a football practice, to a 2X Super Bowl Champion, Jarvis tells his story of pain and struggle following knee surgeries, and his return to form following a Regenexx Stem Cell Procedure.

If you are interested in learning whether you are a good candidate for the Regenexx Procedure, please complete the Regenexx Procedure Candidate Form below or call us at 888-525-3005.

Originally posted here:

Knee Stem Cell Therapy - Surgery & Replacement Alternative

Gene Therapy TV: the Human Genetic Revolution

Cystic fibrosis (CF) is the most common, classic mendelian autosomal recessive, life-limiting disease among the white population.1,2 It is a multisystem disease that results from loss of function in the CF transmembrane conductance regulator (CFTR) gene, classically leading to respiratory tract, gastrointestinal (GI), pancreatic, and reproductive abnormalities.2 CF was recognized as a distinct clinical entity in 1938 and was believed to be invariably fatal during infancy.3

Since the 1970s, the life spans of CF patients have been prolonged, with advances in early diagnosis, care, and disease therapy. Early diagnosis has been improved by newborn screening. Advances in care include management of meconium ileus and improved methods of sputum clearance and managing respiratory failure. Improvements in disease therapy include better antibiotics, especially macrolides, and better pancreatic enzymes. With current management, almost 80% of patients with CF will reach adulthood; thus, CF is no longer a purely pediatric disease.4-6 For patients born in the 1990s, the median survival is predicted to be greater than 40 years.5 As more CF patients are surviving longer, adult issues including careers, relationships, and family are becoming important.6 A range of comorbid conditions that are more prevalent in adult CF patients are also being encountered with increasing frequency as this population matures, including osteoporosis, diabetes, joint diseases, malnutrition, severe lung disease with bronchiectasis, colonization by resistant pathogens, severe gastric reflux, chronic sinusitis, and periportal fibrosis.7

Delivery of health care to the CF patient is now relevant to the nonpediatric physician. In fact, the multifaceted needs of the adult CF patient have led to the development of a nationwide network of more than 83 adult CF care programs in conjunction with the Cystic Fibrosis Foundation.8 These comprehensive CF centers provide patients with a multidisciplinary approach based on the original pediatric CF centers. The aims of adult CF care include delivery of optimum care, access to pertinent medical resources, coordination of care among specialists and primary care providers, and a strong emphasis on independence and improving the quality of life of the patient who has CF.5 The physician is also faced with another challenge, in which the adult CF patient presents with atypical features that might have gone unrecognized. In this chapter, we cover the salient features of CF, including prevalence and the issues surrounding neonatal screening, pathophysiology, diagnosis, and new and emerging therapies for this complex multisystem disease.

CF is a genetic disease affecting approximately 30,000 children and adults in the United States. A defective gene causes the body to produce an abnormally thick, sticky mucus that leads to airway obstruction, subsequent life-threatening lung infections, end-stage lung disease, and bronchiectasis. These thick secretions also obstruct the pancreas, preventing digestive enzymes from reaching the intestines, leading to pancreatic insufficiency, malabsorption, and, in extreme cases, malnutrition.

CF is a disease that occurs predominantly in the white population, with a rate of one in 2500 live births. Two percent to 5% of whites are carriers of the CFTR gene mutation (having one normal and one abnormal gene) but have no overt clinical signs of disease. CF is not rare in African American populations, but it occurs at the much lower frequency of approximately one in 17,000 live births.9 In general, mutations of the CF gene are most prevalent in persons of northern and central European ancestries or of Ashkenazi Jewish descent, and they are rarely found in Native Americans, Asians, or native Africans.10 Although the prevalence of CF is lower in the African American population, the mean age at diagnosis is younger in black patients than in white patients. Overall, the clinical manifestations are similar in both racial groups except that black patients tend to have more severe GI issues, including poor nutritional status.10 There are more than 23,000 patients with CF in the United States.6
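
As a rough consistency check on those two figures (this calculation is not part of the original text, just standard Hardy-Weinberg arithmetic for an autosomal recessive disease): an incidence of one affected birth in 2500 implies a mutant allele frequency of about one in 50, which predicts a carrier frequency of roughly 4%, in line with the 2% to 5% quoted above.

q^2 = \frac{1}{2500} \;\Rightarrow\; q = \frac{1}{50}, \qquad \text{carrier frequency} \approx 2pq = 2 \times \frac{49}{50} \times \frac{1}{50} \approx 3.9\%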

CF occurs equally often in male and female patients. In general, female patients with CF fare significantly worse than male patients. Female patients become infected with Pseudomonas aeruginosa earlier and have worse pulmonary function, worse nutritional status, and earlier mortality.11-13 A Cystic Fibrosis Registry analysis from the University of Wisconsin14 demonstrated that CF is diagnosed in girls at a later age than boys by at least 4 months, or even later when the analysis was limited to children presenting with only respiratory symptoms (40.7 months for diagnosis in girls vs. 22.3 months for diagnosis in boys). Implications for disease outcomes caused by delayed diagnosis of CF in girls may be present based on this recent analysis, but the reason for this delay is not clear or obvious.15

CF is an autosomal recessive trait caused by mutations at a single gene locus on the long arm of chromosome 7. The gene product cystic fibrosis transmembrane conductance regulator (CFTR) is a 1480-amino acid polypeptide.16,17 CF reflects the loss of function of the CFTR protein. The CFTR protein normally regulates the transport of electrolytes and chloride across epithelial cell membranes.18

More than 1000 mutations of the CFTR gene have been described.19 The most common mutations of CFTR can be classified into six groups based on their known functional consequences.20 This classification allows categorization of CFTR mutations based on molecular mechanisms, but phenotypic appearance depends on the type of mutation (class), location of the gene, molecular mechanism, and interaction with other mutations, as well as genetic and environmental influences.21

The most common mutation of the CFTR gene is caused by deletion of phenylalanine at position 508 (ΔF508) and occurs with varying frequency in different ethnic groups.22 Worldwide, this allele is responsible for approximately 66% of all CF chromosomes.23

About 1000 infants are born with CF every year. CF is diagnosed in most of these children at a mean age of 3 to 4 years.24 Nearly 10% of CF patients receive their diagnosis when they are older than 18 years.

Newborn screening for CF has been instituted in eight states, but national screening plans have not been mandated. In all, CF is diagnosed in 10% of infants in the United States either by prenatal diagnosis (3%) or by newborn screening (7%).25 Newborn CF screening has been advocated by clinicians and CF groups as an early means of identifying asymptomatic patients so as to initiate early therapy to prevent long-term sequelae of the disease.26 The currently available genetic screening tools for CF include the Guthrie test, in which measurements of the immunoreactive trypsinogen in dried blood are taken, and measurement of the most common CF mutations, including ΔF508.26 ΔF508 is the most commonly reported gene mutation and is responsible for 70% of the mutated alleles in white patients. It is caused by a 3-bp deletion in the CFTR gene, resulting in the loss of the amino acid at position 508 of the CFTR protein. Homozygosity of this mutation is severe, resulting in both pulmonary and pancreatic disease.27

Recommendations for carrier screening or population screening have been proposed by the American College of Obstetricians and Gynecologists, the National Institutes of Health, and the American College of Medical Genetics; they are designed to identify at-risk couples before the birth of a child with CF.28 Screening should be offered to adults with a family history of CF, reproductive partners of persons with CF, and white (including Ashkenazi Jewish) patients who are planning pregnancy. Screening should be made available to persons of color.

The efficacy of a CF screening program is based on a multitude of factors. One factor is identification of the CF carrier status of each partner, which helps to determine the risk to the fetus. Issues to keep in mind include the gestational age at which the couple presents for prenatal care and the feasibility of pregnancy termination. These factors should be included in the CF screening discussion with parents. The screening of couples can follow two approaches: The female partner is screened first, and if she tests positive for CF carrier status, then the male partner is tested; or both partners are screened concurrently to use time efficiently for decision making, especially if more than one recessive disorder is being considered. Important information to discuss with patients before screening includes the aim of screening, the voluntary nature of screening, medical and genetic issues surrounding CF, the prevalence of CF, the interpretation of the test results, and individual values.29

Carrier screening neither detects all mutations that could be present nor estimates the residual risk (the chance that the patient still carries a copy of a CFTR mutation despite negative testing). CF is an autosomal recessive disorder, and persons with CF typically have inherited one mutated allele from each parent. It is very rare to inherit two mutated alleles from one parent and none from the other.29,30

For couples who have one child with CF or who are known to be carriers, prenatal diagnosis of CF is available through chorionic villus sampling in the first trimester or by amniocentesis in the second or third trimester. Some patients undergo prenatal testing to help in deciding to terminate or continue the pregnancy.

Signs and symptoms of CF are listed in Box 1.

Adapted from Welsh MJ, Tsui L-C, Boat TF, et al: Cystic fibrosis. In Scriver CR, Beaudet AL, Sly WS, et al (eds): The Metabolic and Molecular Basis of Inherited Disease. New York: McGraw-Hill, 1995, p 3801. © 2005 The Cleveland Clinic Foundation.

Because the epithelial cells of an organ are affected by a variety of CFTR mutations, the consequences of the mutation vary depending on the organ involved. The pathologic changes differ in the secretory cells, sinuses, lungs, pancreas, liver, or reproductive tract. The hallmark of CF and the cause of death in more than 90% of patients is chronic pulmonary disease caused by bacterial and viral pathogens and leading to a host inflammatory response. The most profound changes occur in the lungs and airways, where chronic infections involve a limited number of organisms including P. aeruginosa, which is implicated most often, followed by Staphylococcus aureus, Haemophilus influenzae, and Stenotrophomonas maltophilia.6 Children with CF are first infected with Staphylococcus and Haemophilus species and later with Pseudomonas species.

Several theories have been proposed to explain the limited number of organisms involved in CF pulmonary infections, including the inflammation-first hypothesis,31 the infection-first hypothesis,32 the cell-receptor hypothesis,17 and the salt defensins hypothesis.33 The salt defensins hypothesis proposes that CF airway cells have properties similar to those of sweat glands that inactivate substances called defensins, leading to bacterial multiplication and infections. These theories, however, do not explain the presence of mucoid S. aureus or mucoid-type P. aeruginosa.

The isotonic fluid depletion and anoxic mucus theory proposes that water- and volume-depleted airway fluid leads to mucus viscosity, subsequent defective ciliary clearance, and a cough that is inadequate to clear the airways. Thus, bacteria in the CF lung are trapped within this viscous airway fluid and multiply within anaerobic growth conditions by changing from a nonmucoid to a mucoid type of organism.34-36 The transformation of these bacteria to a biofilm-encased form is a means of protection from normal host defenses and antibiotics, making eradication difficult.37 A neutrophil-dominated airway inflammation is certainly present in CF lung disease, even in clinically stable patients.31,38

It seems that early pediatric colonization with either P. aeruginosa or S. aureus has a significant impact on CF lung disease in adulthood. Another organism unique to CF with a significant impact on adult CF lung disease is Burkholderia cepacia. Earlier, this organism was uniformly associated with poor clinical outcomes, but now it is recognized that outcomes might depend on the actual genotype of the organism.39

Clinically, CF pulmonary exacerbations are manifested as an increase in respiratory symptoms including cough and sputum production, with associated systemic symptoms that include malaise and anorexia.40 Patients rarely have fever and leukocytosis, and in most cases radiographic changes are minimal during an exacerbation.9 An exacerbation can be documented by a decrease in pulmonary function, which usually returns to normal after the acute exacerbation resolves. As the lung disease progresses, bronchiolitis and bronchitis become evident, with bronchiectasis as a consequence of the persistent obstruction-infection insult. Overall, bronchiectasis in CF is more severe in the upper lobes than in the lower lobes. Pathologic examinations have demonstrated bronchiectatic cysts in more than 50% of end-stage CF lung on autopsy studies.41 Subpleural cysts often occur in the upper lobes and can contribute to the frequent occurrence of pneumothorax in patients with late-stage CF. The reported incidence of spontaneous pneumothorax in CF ranges between 2.8% and 18.9%.42 The patient with spontaneous pneumothorax usually presents with acute onset of chest pain or dyspnea. In one study, chest pain was the manifesting symptom in more than 50% of patients. Dyspnea occurred in more than 65% of patients.43 In the same study, hemoptysis was present in 19% of patients, probably as a result of bronchial artery enlargement, and subsequent tortuosity within ectatic airways made vessels delicate and more prone to bleed.44

Children without a prior, established diagnosis of CF often present with cough and upper respiratory tract infections that persist longer than expected. Patients whose CF is diagnosed when they are older often do not have the underlying pancreatic insufficiency that is typical of the younger patient with classic CF. Patients with CF diagnosed in adulthood usually present with chronic respiratory infections, but these are usually milder and less likely to be pseudomonal.42

Several interstitial lung diseases have been described during autopsy of the CF lung, including the usual interstitial pneumonitis, bronchiolitis obliterans organizing pneumonia, and diffuse alveolar damage.45 The upper respiratory tract is also involved in CF; most patients suffer from acute and chronic sinusitis caused by hypertrophy and hyperplasia of the secretory components of the sinus tract.46 Another common feature is the presence of pedunculated nasal polyps.47 Sleep-disordered breathing and nocturnal hypoxia, mainly during rapid-eye-movement (REM) sleep and hypoventilation, have also been described in CF patients.48

GI symptoms in CF manifest early and continue throughout the life span of a CF patient. Because of defects in CFTR, meconium ileus can occur at birth, and distal intestinal obstruction syndrome (the meconium ileus equivalent) occurs in 40% of older CF patients. The distal intestinal obstruction syndrome has been associated with inadequate use of pancreatic enzyme and dietary indiscretion without appropriate use of pancreatic enzyme.9 CF patients with obstruction can present with abdominal pain and often a palpable mass in the right lower quadrant on physical examination. Associated symptoms include anorexia, nausea, vomiting, and obstipation. With more frequent events, adhesions can develop due to inflammation, leading to a mechanically dysfunctional intestine that can eventually require surgical resection.

As a result of the CFTR defect, the biliary ducts can become plugged and clogged, leading to liver involvement and biliary cirrhosis in 25% of patients with CF. Hepatic steatosis can result from malnutrition, and congestion can result from hypoxia-induced cor pulmonale.2 Symptomatic liver disease with the sequelae of cirrhosis, including esophageal varices, is uncommon. Fecal loss of bile acids is increased in CF, leading to a reduction in the bile salt pool and a propensity for cholelithiasis. Approximately 30% of adult CF patients present with a hypoplastic, poorly functioning gallbladder, and about one third of that population develops gallstones.49,50

About 90% of patients with CF have pancreatic insufficiency. It is believed to be related to reduced volumes of pancreatic secretions and reduced concentrations of bicarbonate excretion. As a result, digestive proenzymes are retained when the pancreatic duct is blocked, leading to organ tissue destruction and fibrosis. Lipids and fat-soluble vitamins (D, E, K, and A) are therefore malabsorbed, and the malabsorption can eventually lead to a hypermetabolic state and increased endobronchial infections because of an inverse relation between metabolic states and lung function in CF patients.51 Patients with no evidence of pancreatic insufficiency usually manifest milder disease and are less likely to have the ΔF508 mutation.9

CF-related diabetes usually develops after the second decade of life and rarely before the age of 10 years, due to sparing of the islets of Langerhans. Over time, pancreatic destruction and fibrosis occur, caused by obstruction of the pancreatic ducts and later leading to amyloid deposition, and diabetes ensues.52,53 Patients with CF-related diabetes experience more severe lung disease and nutritional deficiencies than CF patients without diabetes. Bone disease, including osteoporosis and osteopenia, is multifactorial in CF because of malnutrition, cytokines, and hormonal disorders in androgen (hypogonadism) and estrogen production and because of glucocorticoid therapy.54

Now that many more CF patients are surviving into their 40s, issues of family and children have gained more attention. Most male CF patients are infertile because of aspermia secondary to atretic or bilateral absence of the vas deferens or seminal vesicle abnormalities.55 It is believed that during fetal life, the vas deferens becomes plugged with mucoid secretions and subsequently gets reabsorbed. Libido and sexual performance are not affected. Artificial insemination may be used for couples desiring offspring by obtaining microscopic epididymal sperm sampling. Female CF patients usually have normal reproductive tracts, although the cervical mucus may be tenacious as a result of CFTR mutation, thus blocking the cervical canal and possibly interfering with fertility. Overall, women with CF are not as infertile as their male counterparts, and birth control must be discussed with female patients reaching sexual maturity.56 The endometrium and fallopian tubes contain very small amounts of CFTR and usually remain normal.57 Onset of menarche is usually normal except in girls who are severely ill and undernourished.

Since the 1960s, the prognosis for CF and pregnancy has improved greatly. Maternal deaths usually occur in women with the most severe lung disease. It appears from multiple case studies that the decline of lung function and the absolute value of the FEV1 may be more important in determining fetal outcome.58,59 One study, by Canny and colleagues, recommended an FEV1 of greater than 70% as a requirement for a successful pregnancy outcome.59 Normal lung function leads to a normal pregnancy. Pulmonary status can worsen in women with poor lung function during pregnancy, but this is still debated. Termination of pregnancy has been recommended if the FEV1 is less than 50%; however, reports do exist of successful pregnancies with low FEV1.60 Extremes of low body weight have resulted in terminations and premature deliveries and may be a relative contraindication.61 In terms of infant health, it should be kept in mind that all infants will be carriers of a maternal gene for CF. Case reports have reported fetal anomalies caused either by treatment, by maternal complications, or by chance itself.57

Vaginal yeast infections and urinary incontinence have now become major issues in female CF patients as they mature. Many patients have persistent yeast infections as a result of frequent antibiotic therapy. Suppression of cough in an attempt to prevent urinary leak can prevent women from aggressively continuing chest physiotherapy.62,63

During the great summer heat wave of 1939 it was discovered that patients with CF were especially susceptible to heat prostration and associated cardiovascular collapse and death after initial symptoms. This sweat defect was discovered by di Sant'Agnese and eventually led to the modern-day sweat test used in the diagnosis of CF. In the sweat duct, CFTR is the only channel by which chloride can be reabsorbed from sweat.63,64

In 1998, the Cystic Fibrosis Foundation issued a consensus statement regarding the diagnosis of CF.1 According to the panel, the diagnosis of CF should be made on the basis of one or more characteristic phenotypic features: history of a CF sibling, presence of a positive newborn screening test, and laboratory confirmation of a CFTR abnormality by an abnormal sweat chloride test, identification of mutations in a gene known to cause CF, or in vivo demonstration of an ion transport abnormality across the nasal epithelium (Figure 1). However, if these classic criteria as described by the committee are not present, CF still cannot be ruled out in its entirety. In patients who present later in childhood or in early adulthood, these classic criteria might not be present. In these patients, typical pulmonary symptoms or GI symptoms may be absent, and instead pancreatitis, male infertility, or sinusitis or nasal polyps may be present.18

Sweat testing requires that a minimally acceptable volume or weight of sweat (50 mg) be collected during a 30-minute period to ensure an average sweat rate of 1 g/m2 per minute, using the Gibson and Cooke method.63,64 A sweat chloride reading of more than 60 mmol/L on repeated analysis is consistent with a diagnosis of CF but must be interpreted in the context of the patient's history, clinical presentation, and age.1 Approximately 5% of patients with CF have normal sweat test results.7 A negative sweat test does not rule out the possibility of CF in the presence of appropriate symptoms and clinical signs (pancreatitis, sinus disease, and azoospermia) and should be repeated. False positives can result for many reasons, but poor technique and patient nutritional status, including anorexia, can yield false results.

Nasal potential measurements measure the voltage difference and correlate with the movement of sodium across the cell membrane. In CF, the CFTR mutation renders this physiologic function abnormal, leading to a large drop in the potential in patients with CF. The presence of nasal polyps or irritated nasal mucosa can yield a false-negative result. Overall, testing using this method is complicated and time consuming.65

Because of the more than 1000 CFTR mutations associated with CF, commercially available probes test only for a limited number of mutations, which constitute more than 90% of the most common mutations known to cause CF but which can vary from region to region. A mutation can be found in most symptomatic patients, but in a small percentage the mutation can be absent.66 Therefore, clinical manifestations or family history are important to the diagnosis. If an abnormality does exist, the combination of two CF mutations plus an abnormal sweat chloride test is accepted for diagnosis. Mutation analysis can be used not only to confirm diagnosis but also to provide genetic information for family members, predict certain phenotypic features, and possibly help in allocating patients for research trials.

In patients with atypical features, a number of clinical and radiologic tests may be performed to assess for a CF phenotype, including assessment of respiratory tract microbiology, chest radiographs, computed tomography of the chest, sinus evaluation, genital tract evaluation, semen analysis, and pancreatic functional assessment. The hallmarks of CF are pancreatic insufficiency and malabsorption, which can lend themselves to laboratory examination such as measurement of serum trypsinogen or pancreas-specific elastase, and fecal fat analysis or reduced fecal concentration of chymotrypsin.67,68 In addition, pansinusitis is so common in CF patients and generally uncommon in non-CF children that the presence of this entity on examination and sinus radiographs should prompt a suspicion of CF.69 In a male patient with obstructive azoospermia confirmed with testicular biopsy, CF should be strongly considered, although other diseases, such as Young's syndrome, can cause pulmonary disease and azoospermia.70

Airway inflammation, even in the absence of active infection, is present in young and older patients with CF. Therefore, bronchoalveolar lavage (BAL) can show a predominance of neutrophils in patients with CF. In atypical presentations, with no evidence of pulmonary disease, a BAL with evidence of a high neutrophil count can provide further support for the diagnosis of CF in the presence of azoospermia or pancreatic disease.47 Isolation of the mucoid type of P. aeruginosa by BAL or sputum analysis, oropharyngeal swab, or sinus culture is highly suggestive of CF.1

The cure for CF is to restore the function of CFTR. This has been attempted with in vivo gene therapy in CF patients using adenoviral vectors and cationic liposome transfer, although lasting physiologic effects have not been noted.71,72 Although it is still far from being a standard treatment, gene therapy for CF has been making significant strides.

Protein modification is based on the concept that the abnormal CFTR protein can be taught to transport water and electrolytes. The CFTR ΔF508 protein mutation is the most common mutation responsible for CF. This abnormal mutation is recognized by the endoplasmic reticulum and degraded rather than glycosylated and transported to the cell surface. Aminoglycosides, including gentamicin, allow a few of the mutant CFTR proteins to reach the respiratory epithelial cells in patients with CF. Other compounds, including phenylbutyrate and genistein, have been tested to act as similar chaperones for the CFTR mutation.73-76

Another ongoing approach includes gene transfer, in which both endogenous stem cells in the lung and mouse-derived cells have been noted to transform into airway and epithelial cells after systemic administration.77

Since the early 1990s, the Cystic Fibrosis Foundation has developed guidelines to help guide the care of patients with this complex disease (Table 1).1

Adapted from Cystic Fibrosis Foundation: Cystic Fibrosis Foundation Patient Registry Annual Data Report 2002. Bethesda, Md, Cystic Fibrosis Foundation, 2003.

Respiratory disease is the major cause of mortality and morbidity in CF. All patients with CF should be monitored for changes in respiratory disease. A persistent cough in a CF patient is not normal, and the cause should be aggressively pursued.

Spirometry is a useful tool for monitoring pulmonary status. Initial lung function in most CF patients is normal. Later, the small peripheral airways become obstructed, leading to changes on spirometry at low lung volumes. Later still, decreased flow occurs at larger lung volumes. CF usually produces an obstructive pattern on spirometry, but a restrictive pattern can indicate substantial gas trapping. In general, a 10% decrease in FEV1 is considered a sign of worsening lung function and possibly a sign of a respiratory infection.78 Patients with an FEV1 of less than 30% of predicted are at higher risks for nocturnal hypoxia and hypercapnia and should be evaluated for nocturnal desaturation.

Oxygen saturation should be monitored routinely to assess the need for supplemental oxygen in patients with moderate to severe disease. Structural changes can also be noted using radiographic studies. Annual chest radiographs are recommended for unstable CF patients and may be useful in documenting the progression of disease or response to treatment. In patients with stable clinical states, chest radiographs should be performed every 2 to 4 years instead of annually. If bronchiectasis is suspected, high-resolution computed tomography is indicated (see Fig. 1).78

Inhaled bronchodilators, specifically beta-agonists, can be administered by nebulizer, metered-dose inhaler, or oral inhaler in CF patients with a documented drop in FEV1 by 12% or 200 mL, indicating bronchodilator response, in the effort to treat airway hyperreactivity.79 Few studies show significant improvement in clinical pulmonary function with routine use of bronchodilator therapy. Long-term use of beta-agonists should be approached with caution, because animal studies have shown submucosal gland hypertrophy and a possible hypersecretory state with prolonged use, although no human studies have duplicated this finding.80 Salmeterol, a long-acting beta-agonist, is effective in decreasing nocturnal hypoxia in patients with CF.81 Hypertonic saline, either a 6% or a 3% solution, has been shown to reduce sputum viscoelasticity and to increase cough clearance in CF patients.82

Dornase alfa (recombinant human deoxyribonuclease I; Pulmozyme), in addition to hypertonic saline, is believed to improve mucociliary clearance by hydrolyzing the extracellular DNA that is present at high levels in the sputum of CF patients. Improved lung function has been noted with the use of this drug. In a multicenter placebo-controlled study, patients treated with dornase alfa had a 12.4% improvement in FEV1 above baseline and a 2.1% increase compared with those receiving placebo (P

Airway clearance techniques should be performed daily by all CF patients86 before eating, and bronchodilators are usually used during or before airway clearance treatment. Inhaled corticosteroids and antibiotics should usually be reserved until the airway clearance technique is completed so that the airways have fewer secretions, allowing greater penetration of the medications. In selecting a particular treatment, the patient's age, preference, and lifestyle should be taken into account, because no one technique is superior.

Chest physiotherapy consisting of chest percussion and postural drainage (chest clapping) is the primary method of secretion clearance. The patient is usually positioned so that gravity assists in draining mucus from areas of the lung while avoiding the head-down position. Using cupped hands or a clapping device, the chest wall is vibrated or percussed to clear mucus. The therapy can be used on patients of all ages and can be concentrated on certain areas of the lungs that need more attention. Usually, an additional caregiver is needed to provide this treatment, but patients who are independent may be able to perform their own percussion on the front and sides of the chest.87 Assisting the cough of a CF patient through external application of pressure to the epigastric area or thoracic cage can aid clearance.87

A forced exhalation, or huff, during mid or low lung volumes can improve mucus clearance. A technique called forced expiration consists of two huffs followed by relaxed breathing. Unlike postural drainage, the active cycle of breathing treatment improves lung function without decreasing oxygenation and does not need an assistant.88 This airway clearance technique is a combination of breathing control, thoracic expansion, and the forced expiration technique. It improves oxygen delivery to the alveoli and distal airways and promotes clearance of mucus to the proximal airways, to be cleared by huffing.89

Autogenic drainage is a method of breathing performed at three different lung volumes to augment airflow in the different divisions of the airways. Air needs to be moved in rapidly to unstick mucus and avoid airway collapse. No desaturations occur during this technique, but it does require concentration and might not be appropriate for young CF patients.88

The application of positive expiratory pressure (PEP) by mechanical ventilation or by intermittent positive pressure breathing devices can help prevent airway collapse in CF. Bronchiectasis resulting in wall weakness can lead to collapse and retained secretions. Low-pressure PEP, high-pressure PEP, and oscillating PEP are three methods to help reduce airway collapse, all using a device that provides expiratory lengthening and manometric measurements at the mouth.87 Oscillating PEP can enhance clearance of secretions in a way that is relatively easy for the patient; it is low in cost and easily portable.90

High-frequency chest wall compression is performed using a compression vest that allows therapy to large chest-wall areas simultaneously. No assistance is needed with this therapy, and it may be ideal for the independent CF patient.91

Intrapulmonary percussive ventilation provides frequent, small, low-pressure breaths to the airways in an oscillatory manner. This method is limited by its high cost and lack of portability, but unlike some other devices it can be used to deliver medications.78

The effect of exercise in CF is not clear. Whether it enhances mucus clearance is debatable, but quality of life improves and there is a lower mortality rate among CF patients who exercise regularly.78 Regular exercise enhances cardiovascular fitness, improves functional capacity, and improves quality of life; therefore, exercise should be advocated strongly in the adult CF patient.5

Some of the contraindications to airway therapy include poorly controlled reflux disease, massive hemoptysis, and the presence of an untreated pneumothorax.

Improved antibiotics against bacterial infections, especially P. aeruginosa, have resulted in an increased life span for the CF patient. The aim of CF therapy should be prevention of bacterial lung infections. Environmental hygiene measures, including cohorting patients according to infection status, can limit cross-infection.92 The most important bacterial organisms in CF are S. aureus, P. aeruginosa, and B. cepacia, but others have also emerged, including S. maltophilia, Achromobacter xylosoxidans, and nontuberculous mycobacteria.93 Intravenous antibiotics are the mainstay of therapy for acute exacerbations. The choice of antibiotic is difficult in CF because of resistance patterns; therefore, the choice should be based on the most recent sensitivities of surveillance sputum cultures. If a recent culture is not available, antibiotic coverage should include treatment for both Staphylococcus and Pseudomonas species. Most centers typically choose a third-generation cephalosporin and an aminoglycoside, given intravenously for 2 to 3 weeks at higher doses because of the larger volume of distribution in CF patients.

Inhaled antibiotic aerosols can effectively minimize toxicity and allow certain aminoglycosides to be administered at home. Limiting factors include cost, taste, and drug distribution in severe disease and acute exacerbations.9 Many CF centers have adopted the Copenhagen protocol for dealing with infection: with the first isolation of Pseudomonas species, oral ciprofloxacin and inhaled colistin are started, with intravenous antibiotics given every 4 months to prevent reinfection. Cohorting and environmental and nutritional issues are monitored as well, leading to a significant reduction of chronic infection with Pseudomonas species and better pulmonary function.76

Several large randomized studies have demonstrated a benefit of macrolides in CF patients. The results of these investigations seem to indicate that the immunomodulatory effect of these medications, rather than their antibacterial effect, is responsible for the observed benefit. Experts have suggested using macrolides (azithromycin or clarithromycin) for 6 months in CF children or in adults not improving on conventional therapy.94 Azithromycin has been shown to be highly effective in improving pulmonary function over a 6-month period in CF patients homozygous for ΔF508 and not receiving dornase alfa.95

In patients with allergic bronchopulmonary aspergillosis or asthma, oral corticosteroids can be used. Although alternate-day steroids have been used in the past for CF exacerbations to reduce airway inflammation, experts agree that this method should be used more cautiously. Ibuprofen has been used as an anti-inflammatory agent, and in one trial lung function declined more slowly in ibuprofen users.96 Other therapies currently undergoing trials include surfactant to reduce sputum adhesiveness, gelsolin to sever F-actin bonds in sputum (thus reducing the tenacity of sputum), and thymosin β4 to improve sputum transport.76

In advanced lung disease resulting from CF, the options for treatment are limited. Lung transplantation is the only effective therapeutic option, not only to prolong survival (1-year survival >80%; 5-year survival about 60%)97 but also to improve quality of life. The International Lung Transplant Committee issued guidelines in 1998 for the selection of lung transplantation candidates.98 Based on these criteria, CF patients should be referred for transplantation when the FEV1 is less than 30% of predicted, if hypoxia or hypercapnia is present, if hospitalizations increase in frequency, or if hemoptysis or cachexia is an issue (Box 2). Early in the history of lung transplantation, CF patients colonized with B. cepacia were not candidates for transplantation, but recent advances in careful, specific taxonomic testing of B. cepacia have made this patient population eligible for transplantation at many centers, including our own.99

Note: Young female patients should be referred earlier because of their overall poorer prognosis. Adapted from Boehler A: Update on cystic fibrosis: Selected aspects related to lung transplantation. Swiss Med Wkly 2003;133:111-117.
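
For illustration only, the referral triggers summarized above (and in Box 2) can be written as a simple checklist. The data structure and names below are hypothetical and are not part of the cited guidelines.

```python
# Hedged sketch of the referral triggers described in the text: FEV1 < 30%
# of predicted, hypoxia, hypercapnia, increasingly frequent hospitalizations,
# hemoptysis, or cachexia. Structure and names are illustrative only.

from dataclasses import dataclass

@dataclass
class CFStatus:
    fev1_percent_predicted: float
    hypoxia: bool = False
    hypercapnia: bool = False
    increasing_hospitalizations: bool = False
    hemoptysis: bool = False
    cachexia: bool = False

def referral_reasons(s: CFStatus) -> list[str]:
    reasons = []
    if s.fev1_percent_predicted < 30:
        reasons.append("FEV1 < 30% predicted")
    for flag, label in [
        (s.hypoxia, "hypoxia"),
        (s.hypercapnia, "hypercapnia"),
        (s.increasing_hospitalizations, "increasing hospitalization frequency"),
        (s.hemoptysis, "hemoptysis"),
        (s.cachexia, "cachexia"),
    ]:
        if flag:
            reasons.append(label)
    return reasons

print(referral_reasons(CFStatus(fev1_percent_predicted=28, hemoptysis=True)))
```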

Severe liver disease, including portal hypertension, is present in 3% of the CF population. In this population, combined liver and lung transplantation should be considered. Overall survival in combined liver and lung transplantation is 64% at 1 year and 56% after 5 years.100 Patients with severe cachexia and a low body mass index (

Pleural adhesion and previous pleurodesis are not contraindications to transplantation. If pleurodesis is indicated, we recommend that it be performed in conjunction with a transplantation center to minimize any complications that can occur at the time of transplantation.

Unstable CF patients requiring mechanical ventilation are not candidates for lung transplantation at any transplant center. Meyers and colleagues reported 1-year outcomes in stable, mechanically ventilated patients who underwent transplantation.102 Currently, only a limited number of centers perform lung transplantation in ventilator-dependent patients.

Recent attention has focused on living lobar transplantation, which involves the removal of a lower lobe from each of two donors and subsequent transplantation into a child or small adult.103 Short-term outcomes have been comparable with those of cadaveric transplants. This procedure involves three patients rather than one, and thus potentially greater overall morbidity and mortality, although no donor deaths have been reported.104

For more information on identifying which patients are more likely to benefit from receiving a lung transplant, contact the Cleveland Clinic Foundation Lung Transplant Center or the Cystic Fibrosis Foundation's website. More than 1400 people have received lung transplants since 1988.6

CF patients should eat a well-balanced diet (a standard North American diet with 35%-40% of calories from fat) without fat restriction, always taken with enteric-coated pancreatic enzymes. Anthropometric measurements should be made every 3 to 4 months, and CF patients should be educated regarding their ideal body weight range. Annual complete blood cell count, albumin, retinol, and tocopherol measurements are recommended. Pancreatic enzymes should be given with each meal and snack, along with vitamin A 10,000 IU/day, vitamin E 200-400 IU/day, vitamin D 400-800 IU/day with adequate sunlight exposure, and vitamin K 2.5 to 5.0 mg/week. If the body mass index decreases, enteral feeding through a gastrostomy or jejunostomy tube should be considered.
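
As a back-of-the-envelope illustration (not dietary or dosing advice), the sketch below computes body mass index from weight and height and restates the vitamin supplementation ranges quoted above; the example measurements and names are hypothetical.

```python
# Illustrative sketch only: computes body mass index from anthropometric
# measurements and lists the supplementation ranges quoted in the text.

def bmi(weight_kg: float, height_m: float) -> float:
    return weight_kg / (height_m ** 2)

VITAMIN_REGIMEN = {
    "vitamin A": "10,000 IU/day",
    "vitamin E": "200-400 IU/day",
    "vitamin D": "400-800 IU/day (with adequate sunlight exposure)",
    "vitamin K": "2.5-5.0 mg/week",
}

current = bmi(52.0, 1.65)   # hypothetical example measurements
print(f"BMI: {current:.1f}")
# Per the text, a falling BMI should prompt consideration of enteral
# feeding through a gastrostomy or jejunostomy tube.
```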

For CF patients with partial obstructions or distal intestinal obstructive syndrome, early recognition is vital to avoid surgical intervention. In addition, aggressive hydration, addition of pancreatic enzymes, H2 blockers, and agents to thin bowel contents (including the radiographic contrast solution diatrizoate) may be used. Complete obstructions should be treated with enemas, oral mineral oil, and oral polyethylene glycol-3350 solutions.9

Overall, life expectancy in CF has risen since the 1980s. Recent figures show that the median survival age increased by 14 years in 2000 compared with figures from 1980; the predicted median survival age was 31.6 years in 2000.6 In 1990, 30% of patients in the CF Registry were older than 18 years. This has continued to rise: 40.2% of patients in 2002 were older than 18 years. Although overall survival rates have improved, female patients have had consistently poorer survival rates than male CF patients in the age range of 2 to 20 years. It is not clear why this is the case.105

Lung function predictions over time are difficult to estimate, but CF patients often have extended periods of stabilized lung function that can last for 5 years or more. Most patients have full-time or part-time jobs, and many are married and have children. In the patient registry,6 more than 185 women who had CF were pregnant in 2002.8

Many patients do not have normal life spans, and end-of-life options need to be addressed with patients and their families. Advance-care planning should be done early in the disease course. The goal of advance-care planning is to respect the patient's wishes.5

See more here: Cystic Fibrosis Cleveland Clinic

Read more here:

Gene Therapy TV the Human Genetic Revolution

What Are the Uses of a Supercomputer? | eHow

Today's supercomputers not only perform calculation after calculation with blazing speed; they also process vast amounts of data in parallel by distributing computing chores to thousands of CPUs. Supercomputers are found at work in research facilities, government agencies, and businesses, performing mathematical calculations as well as collecting, collating, categorizing, and analyzing data.
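
As a toy illustration of that parallel approach, the sketch below splits a large calculation into independent chunks and spreads them across the CPU cores of a single machine. A real supercomputer distributes work across thousands of nodes using frameworks such as MPI; this is only a scaled-down analogy.

```python
# Toy illustration of parallel work distribution: split a big sum into
# independent chunks and compute them on separate local processes.

from multiprocessing import Pool

def chunk_sum(bounds):
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

if __name__ == "__main__":
    n, workers = 10_000_000, 8
    step = n // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    with Pool(workers) as pool:
        total = sum(pool.map(chunk_sum, chunks))   # chunks computed in parallel
    print(total)
```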

Your local weatherman bases his forecasts on data supplied by supercomputers run by NOAA, the National Oceanic and Atmospheric Administration. NOAA's systems perform database operations and mathematical and statistical calculations on huge amounts of data gathered from across the nation and around the world. The processing power of supercomputers helps climatologists predict not only the likelihood of rain in your neighborhood but also the paths of hurricanes and the probability of tornado strikes.

Like weather forecasting, scientific research depends on the number-crunching ability of supercomputers. For example, astronomers at NASA analyze data streaming from satellites orbiting Earth, from ground-based optical and radio telescopes, and from probes exploring the solar system. Researchers at the European Organization for Nuclear Research, or CERN, found the Higgs boson by analyzing the massive amounts of data generated by the Large Hadron Collider.

The National Security Agency and similar government intelligence agencies around the world use supercomputers to monitor communications between private citizens and from suspected terrorist organizations and potentially hostile governments. The NSA needs the numerical processing power of supercomputers to keep ahead of increasingly sophisticated encryption of Internet, cell phone, email, and satellite transmissions, as well as old-fashioned radio communications. In addition, the NSA uses supercomputers to find patterns in both written and spoken communication that might alert officials to potential threats or suspicious activity.

Supercomputers are also used to extract information from raw data gathered in server farms on the ground or in the cloud. For example, businesses can analyze data collected from their cash registers to help control inventory or spot market trends. Life insurance companies use supercomputers to minimize their actuarial risks. Likewise, companies that provide health insurance reduce costs and customer premiums by using supercomputers to statistically analyze the benefits of different treatment options.

Go here to read the rest:

What Are the Uses of a Supercomputer? | eHow

Best Places to Live in Secaucus, New Jersey

New York is the largest metro area in the United States. It includes the island of Manhattan, an eight-county area immediately north, western Long Island, and Staten Island. It is the fourth largest in the world behind Tokyo, Mexico City, and Sao Paulo, Brazil. Regardless of how the area is defined, New York is among the richest and most complex places to live in America.

Boroughs, districts, and neighborhoods define the city. The borough of Manhattan, a 10-mile-long, 2-mile-wide island, is the financial, commercial, and entertainment core. Much of Lower Manhattan consists of narrow, haphazard streets, dating back to the city's earliest days as a Dutch colony. With the exception of older areas, such as Greenwich Village, the rest of the city follows an orderly grid pattern of avenues and streets laid out in 1811. (Broadway, another exception, moves at a gentle diagonal across the city.)

Filling out the island are distinct districts. Lower Manhattan contains the Financial District. Midtown is the commercial center, with corporate headquarters, various media businesses, and world-class shopping along Fifth Avenue. Large skyscrapers dominate Lower Manhattan, then thin out where the hard bedrock needed to support them lies deeper, and reemerge in Midtown. The in-between area is dominated by older ethnic enclaves like Chinatown and Koreatown and the more famous artsy areas of Greenwich Village and SoHo.

Hip residential areas lie east and west, mainly popular with young single professionals. To the north and west, Hell's Kitchen, in the 40s (most Manhattan locations are approximated by their east-west numbered streets), is an old ethnic area and warehouse district enjoying a residential renaissance, soon to be aided by an elevated bikeway and commercial corridor along an old rail line. Times Square and the Theater District just west of Midtown contain the world-famous theaters and numerous restaurants. Surrounding Central Park, the Upper West and Upper East Sides are predominantly residential, although both contain ample dining and shopping. The Upper East Side also contains posh enclaves unaffordable for most, outstanding museums, and the designer boutiques of Madison Avenue. The Upper West Side is dotted with large apartment buildings and is a favorite for working professionals and families. Farther north, above Central Park, neighborhoods start to decline, although Harlem is undergoing a rebirth.

The boroughs of Brooklyn, Queens, and the Bronx are a patchwork of residential and commercial areas and parks. They have large industrial areas with a predominantly blue-collar feel, containing manufacturing and freight distribution centers for the region. All are close to the city and offer relatively more living space, and all are experiencing varying degrees of economic and residential revival. Ethnic diversity is strong in all boroughs, and Queens is reputedly the most ethnically diverse area in the country.

Brooklyn is large and diverse enough to function as a standalone city, with extensive and in places upscale residential areas, a modern downtown, and substantial commercial and retail offerings. Brooklyn is known for its large Prospect Park, designed by Olmsted (of Central Park fame). Brooklyn shares the western end of Long Island with Queens, with excellent rail and subway service into the city and numerous beaches, parks, and residential neighborhoods extending south and east toward the large JFK airport. Brooklyn is socioeconomically very diverse, with a mix of upscale, middle-class, and poorer areas, while Queens is more clearly identifiable as middle class.

The Bronx, on the mainland to the north of Manhattan, is the grittiest of the three areas, although its strategic location between the city and the better areas to the north is starting to attract some interest. Staten Island, a mainly residential borough to the south, is connected to Manhattan by ferries and the Verrazano-Narrows Bridge.

Finally, the New York metro area includes northern suburbs stretching up into Westchester County between the east bank of the Hudson River and the Connecticut border. Westchester is generally upscale and expensive, with spread-out towns and a country setting. White Plains is the largest city and a modern corporate center, with large facilities for IBM and a number of companies relocating north from Manhattan. Smaller but very upscale areas lie east along the Long Island Sound (Rye being an example) and north along the Hudson in the smaller towns of Tarrytown, Ossining, and Croton-on-Hudson.

Rockland County is more middle class with some working-class areas. West Nyack is a large family-oriented middle class area. Other suburbs give workers access to New York by freeway or by rail lines across the Hudson or to northern New Jersey.

The New York area offers a rich assortment of amenities, with world-class dining, shopping, and performing arts including theater, symphony, opera, and live music. Museums and architectural attractions, large and small, draw global audiences. Numerous major-league teams play in the area, including the MLB Yankees and Mets, NBA Knicks, NFL Giants and Jets, and NHL Islanders and Rangers. An extensive public transit system with subways and buses serves the urban core and links the boroughs.

A suburban rail and ferry network serves surrounding communities in Connecticut, Long Island, and New Jersey. Rail lines on the Northeast Corridor make cities such as Boston and Washington, D.C. easily accessible. Many residents don't own cars and choose to depend on public transit or an occasional car rental. Three major airports (La Guardia, Kennedy, and nearby Newark) provide air service domestically and abroad. Surrounding the city are numerous recreation areas: Long Island beaches, the Poconos, the Hudson Valley, and the Jersey Shore, to name only a few.

The downsides are significant. The city is crowded and stressful, and some neighborhoods are run down. Violent crime rates are high, although not as bad as the stereotype. Cost of living is high in all categories and is rising. Median home prices of half a million dollars or more don't buy much, especially in Manhattan, where prices can be five to six times higher than for comparable properties in the surrounding boroughs. Income differentials between wealthy workers and others are high, and overall the Buying Power Index is usually the worst in the country, suggesting that incomes don't keep up with costs. New York is a great place if you like the lifestyle and can make ends meet.

The New York City area exceeds 300 square miles and is located mostly on islands. Elevations range from less than 50 feet over most of Manhattan, Brooklyn, and Queens to several hundred feet in northern Manhattan, the Bronx, and Staten Island. The area is close to storm tracks, and most weather approaches from the west, producing higher summer and lower winter temperatures than would otherwise be expected in a coastal area. Summers are hot and humid, with occasional long periods of discomfort. Sea breezes occasionally moderate summer heat and winter cold in Lower Manhattan. Manhattan and the inner boroughs are more likely to receive rain in winter, while outlying areas get snow. Precipitation is distributed fairly evenly throughout the year. Summer rainfall comes mainly from thunderstorms, usually of brief duration. Late summer and fall rains associated with tropical storms may occur, and coastal nor'easter storms can produce significant snow. The first freeze is in mid-November, the last in early April.

See the article here:

Best Places to Live in Secaucus, New Jersey