About Cystic Fibrosis | CF Foundation

People with cystic fibrosis are at greater risk of getting lung infections because thick, sticky mucus builds up in their lungs, allowing germs to thrive and multiply. Lung infections, caused mostly by bacteria, are a serious and chronic problem for many people living with the disease. Minimizing contact with germs is a top concern for people with CF.

The buildup of mucus in the pancreas can also block the release of digestive enzymes, interfering with the absorption of food and key nutrients and resulting in malnutrition and poor growth. In the liver, the thick mucus can block the bile duct, causing liver disease. In men, CF can affect the ability to have children.

Breakthrough treatments have added years to the lives of people with cystic fibrosis. Today the median predicted survival age is close to 40. This is a dramatic improvement from the 1950s, when a child with CF rarely lived long enough to attend elementary school.

Because of tremendous advancements in research and care, many people with CF are living long enough to realize their dreams of attending college, pursuing careers, getting married, and having kids.

While there has been significant progress in treating this disease, there is still no cure and too many lives are cut far too short.

Cystic fibrosis – Wikipedia, the free encyclopedia

Cystic fibrosis (CF) is a genetic disorder that affects mostly the lungs, but also the pancreas, liver, kidneys, and intestine.[1][5] Long-term issues include difficulty breathing and coughing up mucus as a result of frequent lung infections.[1] Other signs and symptoms may include sinus infections, poor growth, fatty stool, clubbing of the fingers and toes, and infertility in most males.[1] Different people may have different degrees of symptoms.[1]

CF is inherited in an autosomal recessive manner.[1] It is caused by the presence of mutations in both copies of the gene for the cystic fibrosis transmembrane conductance regulator (CFTR) protein.[1] Those with a single working copy are carriers and otherwise mostly normal.[3] CFTR is involved in production of sweat, digestive fluids, and mucus.[6] When CFTR is not functional, secretions which are usually thin instead become thick.[7] The condition is diagnosed by a sweat test and genetic testing.[1] Screening of infants at birth takes place in some areas of the world.[1]

There is no known cure for cystic fibrosis.[3] Lung infections are treated with antibiotics which may be given intravenously, inhaled, or by mouth.[1] Sometimes, the antibiotic azithromycin is used long term.[1] Inhaled hypertonic saline and salbutamol may also be useful.[1] Lung transplantation may be an option if lung function continues to worsen.[1] Pancreatic enzyme replacement and fat-soluble vitamin supplementation are important, especially in the young.[1] Airway clearance techniques such as chest physiotherapy have some short-term benefit, but long-term effects are unclear.[8] The average life expectancy is between 42 and 50 years in the developed world.[4][9] Lung problems are responsible for death in 80% of people with cystic fibrosis.[1]

CF is most common among people of Northern European ancestry and affects about one out of every 3,000 newborns.[1] About one in 25 people is a carrier.[3] It is least common in Africans and Asians.[1] It was first recognized as a specific disease by Dorothy Andersen in 1938, with descriptions that fit the condition occurring at least as far back as 1595.[5] The name “cystic fibrosis” refers to the characteristic fibrosis and cysts that form within the pancreas.[5][10]

The main signs and symptoms of cystic fibrosis are salty-tasting skin,[11] poor growth, and poor weight gain despite normal food intake,[12] accumulation of thick, sticky mucus,[13] frequent chest infections, and coughing or shortness of breath.[14] Males can be infertile due to congenital absence of the vas deferens.[15] Symptoms often appear in infancy and childhood, such as bowel obstruction due to meconium ileus in newborn babies.[16] As children grow, they must exercise to release mucus in the alveoli.[17] Ciliated epithelial cells in affected individuals carry a mutated protein that leads to abnormally viscous mucus production.[13] The poor growth in children typically presents as an inability to gain weight or height at the same rate as their peers, and is occasionally not diagnosed until investigation is initiated for poor growth. The causes of growth failure are multifactorial and include chronic lung infection, poor absorption of nutrients through the gastrointestinal tract, and increased metabolic demand due to chronic illness.[12]

In rare cases, cystic fibrosis can manifest itself as a coagulation disorder. Vitamin K is normally absorbed from breast milk, formula, and later, solid foods. This absorption is impaired in some cystic fibrosis patients. Young children are especially sensitive to vitamin K malabsorptive disorders because only a very small amount of vitamin K crosses the placenta, leaving the child with very low reserves and limited ability to absorb vitamin K from dietary sources after birth. Because factors II, VII, IX, and X (clotting factors) are vitamin K-dependent, low levels of vitamin K can result in coagulation problems. Consequently, when a child presents with unexplained bruising, a coagulation evaluation may be warranted to determine whether an underlying disease is present.[18]

Lung disease results from clogging of the airways due to mucus build-up, decreased mucociliary clearance, and resulting inflammation.[19][20] Inflammation and infection cause injury and structural changes to the lungs, leading to a variety of symptoms. In the early stages, incessant coughing, copious phlegm production, and decreased ability to exercise are common. Many of these symptoms occur when bacteria that normally inhabit the thick mucus grow out of control and cause pneumonia. In later stages, changes in the architecture of the lung, such as pathology in the major airways (bronchiectasis), further exacerbate difficulties in breathing. Other signs include coughing up blood (hemoptysis), high blood pressure in the lung (pulmonary hypertension), heart failure, difficulties getting enough oxygen to the body (hypoxia), and respiratory failure requiring support with breathing masks, such as bilevel positive airway pressure machines or ventilators.[21] Staphylococcus aureus, Haemophilus influenzae, and Pseudomonas aeruginosa are the three most common organisms causing lung infections in CF patients.[20] In addition to typical bacterial infections, people with CF more commonly develop other types of lung disease. Among these is allergic bronchopulmonary aspergillosis, in which the body’s response to the common fungus Aspergillus fumigatus causes worsening of breathing problems. Another is infection with Mycobacterium avium complex, a group of bacteria related to tuberculosis, which can cause lung damage and does not respond to common antibiotics.[22] People with CF are susceptible to getting a pneumothorax.[23]

Mucus in the paranasal sinuses is equally thick and may also cause blockage of the sinus passages, leading to infection. This may cause facial pain, fever, nasal drainage, and headaches. Individuals with CF may develop overgrowth of the nasal tissue (nasal polyps) due to inflammation from chronic sinus infections.[24] Recurrent sinonasal polyps can occur in 10% to 25% of CF patients.[20] These polyps can block the nasal passages and increase breathing difficulties.[25][26]

Cardiorespiratory complications are the most common cause of death (about 80%) in patients at most CF centers in the United States.[20]

Prior to prenatal and newborn screening, cystic fibrosis was often diagnosed when a newborn infant failed to pass feces (meconium). Meconium may completely block the intestines and cause serious illness. This condition, called meconium ileus, occurs in 5–10%[20] of newborns with CF. In addition, protrusion of internal rectal membranes (rectal prolapse) is more common, occurring in as many as 10% of children with CF,[20] and it is caused by increased fecal volume, malnutrition, and increased intra-abdominal pressure due to coughing.[27]

The thick mucus seen in the lungs has a counterpart in thickened secretions from the pancreas, an organ responsible for providing digestive juices that help break down food. These secretions block the exocrine movement of the digestive enzymes into the duodenum and result in irreversible damage to the pancreas, often with painful inflammation (pancreatitis).[28] The pancreatic ducts are totally plugged in more advanced cases, usually seen in older children or adolescents.[20] This causes atrophy of the exocrine glands and progressive fibrosis.[20]

The lack of digestive enzymes leads to difficulty absorbing nutrients with their subsequent excretion in the feces, a disorder known as malabsorption. Malabsorption leads to malnutrition and poor growth and development because of calorie loss. Resultant hypoproteinemia may be severe enough to cause generalized edema.[20] Individuals with CF also have difficulties absorbing the fat-soluble vitamins A, D, E, and K.[29]

In addition to the pancreas problems, people with cystic fibrosis experience more heartburn,[29] intestinal blockage by intussusception, and constipation.[30] Older individuals with CF may develop distal intestinal obstruction syndrome when thickened feces cause intestinal blockage.[29]

Exocrine pancreatic insufficiency occurs in the majority (85% to 90%) of patients with CF.[20] It is mainly associated with “severe” CFTR mutations, where both alleles are completely nonfunctional (e.g. ΔF508/ΔF508).[20] It occurs in 10% to 15% of patients with one “severe” and one “mild” CFTR mutation where little CFTR activity still occurs, or where two “mild” CFTR mutations exist.[20] In these milder cases, sufficient pancreatic exocrine function is still present so that enzyme supplementation is not required.[20] Usually, no other GI complications occur in pancreas-sufficient phenotypes, and in general, such individuals usually have excellent growth and development.[20] Despite this, idiopathic chronic pancreatitis can occur in a subset of pancreas-sufficient individuals with CF, and is associated with recurrent abdominal pain and life-threatening complications.[20]

Thickened secretions also may cause liver problems in patients with CF. Bile secreted by the liver to aid in digestion may block the bile ducts, leading to liver damage. Over time, this can lead to scarring and nodularity (cirrhosis). The liver fails to rid the blood of toxins and does not make important proteins, such as those responsible for blood clotting.[31][32] Liver disease is the third-most common cause of death associated with CF.[20]

The pancreas contains the islets of Langerhans, which are responsible for making insulin, a hormone that helps regulate blood glucose. Damage of the pancreas can lead to loss of the islet cells, leading to a type of diabetes unique to those with the disease.[33] This cystic fibrosis-related diabetes shares characteristics that can be found in type 1 and type 2 diabetics, and is one of the principal nonpulmonary complications of CF.[34]

Vitamin D is involved in calcium and phosphate regulation. Poor uptake of vitamin D from the diet because of malabsorption can lead to the bone disease osteoporosis in which weakened bones are more susceptible to fractures.[35] In addition, people with CF often develop clubbing of their fingers and toes due to the effects of chronic illness and low oxygen in their tissues.[36][37]

Infertility affects both men and women. At least 97% of men with cystic fibrosis are infertile but not sterile, and can have children with assisted reproductive techniques.[38] The main cause of infertility in men with CF is congenital absence of the vas deferens (which normally connects the testes to the ejaculatory ducts of the penis), but infertility can also result from other mechanisms, such as an absence of sperm, abnormally shaped sperm, or few sperm with poor motility.[39] Many men found to have congenital absence of the vas deferens during evaluation for infertility have a mild, previously undiagnosed form of CF.[40] Around 20% of women with CF have fertility difficulties due to thickened cervical mucus or malnutrition. In severe cases, malnutrition disrupts ovulation and causes a lack of menstruation.[41]

CF is caused by a mutation in the gene cystic fibrosis transmembrane conductance regulator (CFTR). The most common mutation, ΔF508, is a deletion (Δ signifying deletion) of three nucleotides[42] that results in a loss of the amino acid phenylalanine (F) at the 508th position on the protein. This mutation accounts for two-thirds (66–70%[20]) of CF cases worldwide and 90% of cases in the United States; however, over 1500 other mutations can produce CF.[43] Although most people have two working copies (alleles) of the CFTR gene, only one is needed to prevent cystic fibrosis. CF develops when neither allele can produce a functional CFTR protein. Thus, CF is considered an autosomal recessive disease.
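
As an illustrative aside (not from the original article), the short Python sketch below shows why an in-frame deletion of three nucleotides, like ΔF508, removes exactly one amino acid while leaving the downstream protein sequence intact. The mini codon table and sequences are invented for the example and are not the real CFTR gene.

```python
# Toy illustration (hypothetical mini-sequence, not the real CFTR gene) of an
# in-frame three-nucleotide deletion: the reading frame is preserved, so every
# codon downstream of the deletion still encodes the same amino acid.

CODON_TABLE = {"ATT": "I", "ATC": "I", "TTT": "F", "GGT": "G", "GTT": "V"}

def translate(dna: str) -> str:
    """Translate a coding sequence codon by codon using the toy table above."""
    return "".join(CODON_TABLE[dna[i:i + 3]] for i in range(0, len(dna), 3))

normal  = "ATCATTTTTGGTGTT"   # codons: ATC ATT TTT GGT GTT -> I I F G V
deleted = "ATCATTGGTGTT"      # the TTT codon is removed    -> I I G V

print(translate(normal))      # IIFGV
print(translate(deleted))     # IIGV: one phenylalanine (F) lost, frame unchanged
```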

The CFTR gene, found at the q31.2 locus of chromosome 7, is 230,000 base pairs long, and creates a protein that is 1,480 amino acids long. More specifically, the location is between base pair 117,120,016 and 117,308,718 on the long arm of chromosome 7, region 3, band 1, subband 2, represented as 7q31.2. Structurally, CFTR is a type of gene known as an ABC gene. The product of this gene (the CFTR protein) is a chloride ion channel important in creating sweat, digestive juices, and mucus. This protein possesses two ATP-hydrolyzing domains, which allows the protein to use energy in the form of ATP. It also contains two domains comprising six alpha helices apiece, which allow the protein to cross the cell membrane. A regulatory binding site on the protein allows activation by phosphorylation, mainly by cAMP-dependent protein kinase.[21] The carboxyl terminal of the protein is anchored to the cytoskeleton by a PDZ domain interaction.[44]
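
A quick arithmetic check (illustrative only) relates the two figures above: a 1,480-amino-acid protein requires only about 4.4 kb of coding sequence, so the great majority of the roughly 230,000 bp gene is non-coding (introns and untranslated regions).

```python
# Rough back-of-the-envelope check of the figures quoted above.
PROTEIN_LENGTH_AA = 1480        # length of the CFTR protein
GENE_LENGTH_BP = 230_000        # approximate genomic span of the CFTR gene

coding_bp = PROTEIN_LENGTH_AA * 3 + 3   # three bases per codon, plus a stop codon
print(f"Coding sequence: about {coding_bp:,} bp")                              # ~4,443 bp
print(f"Coding fraction of the gene: about {coding_bp / GENE_LENGTH_BP:.1%}")  # ~1.9%
```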

In addition, the evidence is increasing that genetic modifiers besides CFTR modulate the frequency and severity of the disease. One example is mannan-binding lectin, which is involved in innate immunity by facilitating phagocytosis of microorganisms. Polymorphisms in one or both mannan-binding lectin alleles that result in lower circulating levels of the protein are associated with a threefold higher risk of end-stage lung disease, as well as an increased burden of chronic bacterial infections.[20]

Several mutations in the CFTR gene can occur, and different mutations cause different defects in the CFTR protein, sometimes causing a milder or more severe disease. These protein defects are also targets for drugs which can sometimes restore their function. ΔF508-CFTR, which occurs in >90% of patients in the U.S., creates a protein that does not fold normally and is not appropriately transported to the cell membrane, resulting in its degradation. Other mutations result in proteins that are too short (truncated) because production is ended prematurely. Other mutations produce proteins that do not use energy (in the form of ATP) normally, do not allow chloride, iodide, and thiocyanate to cross the membrane appropriately,[45] and degrade at a faster rate than normal. Mutations may also lead to fewer copies of the CFTR protein being produced.[21]

The protein created by this gene is anchored to the outer membrane of cells in the sweat glands, lungs, pancreas, and all other remaining exocrine glands in the body. The protein spans this membrane and acts as a channel connecting the inner part of the cell (cytoplasm) to the surrounding fluid. This channel is primarily responsible for controlling the movement of halogens from inside to outside of the cell; however, in the sweat ducts, it facilitates the movement of chloride from the sweat duct into the cytoplasm. When the CFTR protein does not resorb ions in sweat ducts, chloride and thiocyanate[46] released from sweat glands are trapped inside the ducts and pumped to the skin. Additionally, hypothiocyanite (OSCN−) cannot be produced by the immune defense system.[47][48] Because chloride is negatively charged, this modifies the electrical potential inside and outside the cell that normally causes cations to cross into the cell. Sodium is the most common cation in the extracellular space. The excess chloride within sweat ducts prevents sodium resorption by epithelial sodium channels and the combination of sodium and chloride creates the salt, which is lost in high amounts in the sweat of individuals with CF. This lost salt forms the basis for the sweat test.[21]

Most of the damage in CF is due to blockage of the narrow passages of affected organs with thickened secretions. These blockages lead to remodeling and infection in the lung, damage by accumulated digestive enzymes in the pancreas, blockage of the intestines by thick feces, etc. Several theories have been posited on how the defects in the protein and cellular function cause the clinical effects. The most current theory suggests that defective ion transport leads to dehydration in the airway epithelia, thickening mucus. In airway epithelial cells, the cilia exist in between the cell’s apical surface and mucus in a layer known as airway surface liquid (ASL). The flow of ions from the cell and into this layer is determined by ion channels such as CFTR. CFTR not only allows chloride ions to be drawn from the cell and into the ASL, but it also regulates another channel called ENaC, which allows sodium ions to leave the ASL and enter the respiratory epithelium. CFTR normally inhibits this channel, but if the CFTR is defective, then sodium flows freely from the ASL and into the cell. As water follows sodium, the depth of ASL will be depleted and the cilia will be left in the mucous layer.[49] As cilia cannot effectively move in a thick, viscous environment, mucociliary clearance is deficient and a buildup of mucus occurs, clogging small airways.[50] The accumulation of more viscous, nutrient-rich mucus in the lungs allows bacteria to hide from the body’s immune system, causing repeated respiratory infections. The presence of the same CFTR proteins in the pancreatic duct and sweat glands in the skin also causes symptoms in these systems.

The lungs of individuals with cystic fibrosis are colonized and infected by bacteria from an early age. These bacteria, which often spread among individuals with CF, thrive in the altered mucus, which collects in the small airways of the lungs. This mucus leads to the formation of bacterial microenvironments known as biofilms that are difficult for immune cells and antibiotics to penetrate. Viscous secretions and persistent respiratory infections repeatedly damage the lung by gradually remodeling the airways, which makes infection even more difficult to eradicate.[51]

Over time, both the types of bacteria and their individual characteristics change in individuals with CF. In the initial stage, common bacteria such as S. aureus and H. influenzae colonize and infect the lungs.[20] Eventually, Pseudomonas aeruginosa (and sometimes Burkholderia cepacia) dominates. By 18 years of age, 80% of patients with classic CF harbor P. aeruginosa, and 3.5% harbor B. cepacia.[20] Once within the lungs, these bacteria adapt to the environment and develop resistance to commonly used antibiotics. Pseudomonas can develop special characteristics that allow the formation of large colonies, known as “mucoid” Pseudomonas, which are rarely seen in people who do not have CF.[51] Scientific evidence suggests the interleukin 17 pathway plays a key role in resistance and modulation of the inflammatory response during P. aeruginosa infection in CF.[52] In particular, interleukin 17-mediated immunity plays a double-edged role during chronic airway infection; on one side, it contributes to the control of P. aeruginosa burden, while on the other, it propagates exacerbated pulmonary neutrophilia and tissue remodeling.[52]

Infection can spread by passing between different individuals with CF.[53] In the past, people with CF often participated in summer “CF camps” and other recreational gatherings.[54][55] Hospitals grouped patients with CF into common areas and routine equipment (such as nebulizers)[56] was not sterilized between individual patients.[57] This led to transmission of more dangerous strains of bacteria among groups of patients. As a result, individuals with CF are now routinely isolated from one another in the healthcare setting, and healthcare providers are encouraged to wear gowns and gloves when examining patients with CF to limit the spread of virulent bacterial strains.[58]

CF patients may also have their airways chronically colonized by filamentous fungi (such as Aspergillus fumigatus, Scedosporium apiospermum, Aspergillus terreus) and/or yeasts (such as Candida albicans); other filamentous fungi less commonly isolated include Aspergillus flavus and Aspergillus nidulans (which occur transiently in CF respiratory secretions) and Exophiala dermatitidis and Scedosporium prolificans (chronic airway colonizers); some filamentous fungi, such as Penicillium emersonii and Acrophialophora fusispora, are encountered almost exclusively in the context of CF.[59] The defective mucociliary clearance characterizing CF is associated with local immunological disorders. In addition, prolonged therapy with antibiotics and the use of corticosteroid treatments may also facilitate fungal growth. Although the clinical relevance of fungal airway colonization is still a matter of debate, filamentous fungi may contribute to the local inflammatory response and therefore to the progressive deterioration of lung function, as happens with allergic bronchopulmonary aspergillosis, the most common fungal disease in the context of CF, which involves a Th2-driven immune response to Aspergillus species.[59][60]

Cystic fibrosis may be diagnosed by many different methods, including newborn screening, sweat testing, and genetic testing.[61] As of 2006 in the United States, 10% of cases were diagnosed shortly after birth as part of newborn screening programs. The newborn screen initially measures for a raised blood concentration of immunoreactive trypsinogen.[62] Infants with an abnormal newborn screen need a sweat test to confirm the CF diagnosis. In many cases, a parent makes the diagnosis because the infant tastes salty.[20] Immunoreactive trypsinogen levels can be increased in individuals who have a single mutated copy of the CFTR gene (carriers) or, in rare instances, in individuals with two normal copies of the CFTR gene. Due to these false positives, CF screening in newborns can be controversial.[63][64] At the time, most U.S. states and countries did not screen for CF routinely at birth, so most individuals were diagnosed after symptoms (e.g. sinopulmonary disease and GI manifestations[20]) prompted an evaluation for cystic fibrosis.

The most commonly used form of testing is the sweat test. Sweat testing involves application of a medication that stimulates sweating (pilocarpine). To deliver the medication through the skin, iontophoresis is used, whereby one electrode is placed onto the applied medication and an electric current is passed to a separate electrode on the skin. The resultant sweat is then collected on filter paper or in a capillary tube and analyzed for abnormal amounts of sodium and chloride. People with CF have increased amounts of both in their sweat. In contrast, people with CF have less thiocyanate and hypothiocyanite in their saliva[65] and mucus (Banfi et al.). In the case of milder forms of CF, transepithelial potential difference measurements can be helpful. CF can also be diagnosed by identification of mutations in the CFTR gene.[66]
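
For illustration, a minimal sketch of how a sweat chloride result might be categorized is shown below. The cut-off values used (roughly 60 mmol/L and 30 mmol/L) are commonly cited guideline thresholds and are assumptions added here, not figures from the text above; real interpretation depends on age, the guideline in use, and clinical context.

```python
# Minimal sketch (assumed, commonly cited thresholds; not from the article) of
# interpreting a sweat chloride concentration in mmol/L.

def interpret_sweat_chloride(chloride_mmol_per_l: float) -> str:
    if chloride_mmol_per_l >= 60:
        return "consistent with cystic fibrosis; confirm with genetic testing"
    if chloride_mmol_per_l >= 30:
        return "intermediate; further testing needed"
    return "cystic fibrosis unlikely"

print(interpret_sweat_chloride(95))   # consistent with cystic fibrosis; ...
print(interpret_sweat_chloride(42))   # intermediate; further testing needed
print(interpret_sweat_chloride(15))   # cystic fibrosis unlikely
```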

People with CF may be listed in a disease registry that allows researchers and doctors to track health results and identify candidates for clinical trials.[67]

Women who are pregnant or couples planning a pregnancy can have themselves tested for the CFTR gene mutations to determine the risk that their child will be born with CF. Testing is typically performed first on one or both parents and, if the risk of CF is high, testing on the fetus is performed. The American College of Obstetricians and Gynecologists recommends all people thinking of becoming pregnant be tested to see if they are a carrier.[68]

Because development of CF in the fetus requires each parent to pass on a mutated copy of the CFTR gene and because CF testing is expensive, testing is often performed initially on one parent. If testing shows that parent is a CFTR gene mutation carrier, the other parent is tested to calculate the risk that their children will have CF. CF can result from more than a thousand different mutations.[69] As of 2016, typically only the most common mutations are tested for, such as ΔF508.[69] Most commercially available tests look for 32 or fewer different mutations. If a family has a known uncommon mutation, specific screening for that mutation can be performed. Because not all known mutations are found on current tests, a negative screen does not guarantee that a child will not have CF.[70]
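
To illustrate why a negative screen does not rule out carrier status, the sketch below applies Bayes' rule to the 1-in-25 carrier frequency quoted earlier in this article, together with an assumed panel detection rate of 90% (a hypothetical figure chosen for the example; real panels vary by ancestry and panel size).

```python
# Illustrative sketch: residual risk of being a CF carrier after a negative
# mutation-panel screen, via Bayes' rule. The prior (1 in 25, from the article)
# and the panel detection rate (90%, an assumed figure) are the only inputs.

def residual_carrier_risk(prior_carrier_freq: float, detection_rate: float) -> float:
    """P(carrier | negative panel result)."""
    p_neg_given_carrier = 1.0 - detection_rate      # the panel misses the mutation
    p_neg_given_noncarrier = 1.0                    # non-carriers always screen negative
    p_carrier = prior_carrier_freq
    p_negative = p_carrier * p_neg_given_carrier + (1.0 - p_carrier) * p_neg_given_noncarrier
    return p_carrier * p_neg_given_carrier / p_negative

risk = residual_carrier_risk(prior_carrier_freq=1 / 25, detection_rate=0.90)
print(f"Residual carrier risk after a negative screen: about 1 in {round(1 / risk)}")
# With these assumptions, the risk falls from 1 in 25 to roughly 1 in 241.
```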

During pregnancy, testing can be performed on the placenta (chorionic villus sampling) or the fluid around the fetus (amniocentesis). However, chorionic villus sampling has a risk of fetal death of one in 100 and amniocentesis of one in 200;[71] a recent study has indicated this may be much lower, about one in 1,600.[72]

Economically, for carrier couples of cystic fibrosis, when comparing preimplantation genetic diagnosis (PGD) with natural conception (NC) followed by prenatal testing and abortion of affected pregnancies, PGD provides net economic benefits up to a maternal age around 40 years, after which NC, prenatal testing, and abortion have higher economic benefit.[73]

While no cures for CF are known, several treatment methods are used. The management of CF has improved significantly over the past 70 years. While infants born with it 70 years ago would have been unlikely to live beyond their first year, infants today are likely to live well into adulthood. Recent advances in the treatment of cystic fibrosis have meant that individuals with cystic fibrosis can live a fuller life less encumbered by their condition. The cornerstones of management are the proactive treatment of airway infection, and encouragement of good nutrition and an active lifestyle. Pulmonary rehabilitation as a management of CF continues throughout a person’s life, and is aimed at maximizing organ function, and therefore the quality of life. At best, current treatments delay the decline in organ function. Because of the wide variation in disease symptoms, treatment typically occurs at specialist multidisciplinary centers and is tailored to the individual. Targets for therapy are the lungs, gastrointestinal tract (including pancreatic enzyme supplements), the reproductive organs (including assisted reproductive technology), and psychological support.[62]

The most consistent aspect of therapy in CF is limiting and treating the lung damage caused by thick mucus and infection, with the goal of maintaining quality of life. Intravenous, inhaled, and oral antibiotics are used to treat chronic and acute infections. Mechanical devices and inhalation medications are used to alter and clear the thickened mucus. These therapies, while effective, can be extremely time-consuming.

Many people with CF are on one or more antibiotics at all times, even when healthy, to prophylactically suppress infection. Antibiotics are absolutely necessary whenever pneumonia is suspected or a noticeable decline in lung function is seen, and are usually chosen based on the results of a sputum analysis and the person’s past response. This prolonged therapy often necessitates hospitalization and insertion of a more permanent IV such as a peripherally inserted central catheter or Port-a-Cath. Inhaled therapy with antibiotics such as tobramycin, colistin, and aztreonam is often given for months at a time to improve lung function by impeding the growth of colonized bacteria.[74][75][76] Inhaled antibiotic therapy helps lung function by fighting infection, but also has significant drawbacks such as development of antibiotic resistance, tinnitus, and changes in the voice.[77] Inhaled levofloxacin may be used to treat Pseudomonas aeruginosa in people with cystic fibrosis who are infected.[78] Early management of Pseudomonas aeruginosa infection is easier and more effective; nebulised antibiotics, with or without oral antibiotics, may sustain its eradication for up to two years.[79]

Antibiotics by mouth such as ciprofloxacin or azithromycin are given to help prevent infection or to control ongoing infection.[80] The aminoglycoside antibiotics (e.g. tobramycin) used can cause hearing loss, damage to the balance system in the inner ear or kidney failure with long-term use.[81] To prevent these side-effects, the amount of antibiotics in the blood is routinely measured and adjusted accordingly.

The extent of antibiotic use, the chronicity of the disease, and the emergence of resistant bacteria demand further exploration of different strategies, such as antibiotic adjuvant therapy.[82]

Aerosolized medications that help loosen secretions include dornase alfa and hypertonic saline.[83] Dornase is a recombinant human deoxyribonuclease, which breaks down DNA in the sputum, thus decreasing its viscosity.[84] Denufosol, an investigational drug, opens an alternative chloride channel, helping to liquefy mucus.[85] Whether inhaled corticosteroids are useful is unclear, but stopping inhaled corticosteroid therapy is safe.[86] There is weak evidence that corticosteroid treatment may cause harm by interfering with growth.[86] Pneumococcal vaccination has not been studied as of 2014.[87] As of 2014, there is no clear evidence from randomized controlled trials that the influenza vaccine is beneficial for people with cystic fibrosis.[88]

Ivacaftor is a medication taken by mouth for the treatment of CF due to a number of specific mutations.[89][90] It improves lung function by about 10%; however, as of 2014 it is expensive.[89] The first year it was on the market, the list price was over $300,000 per year in the United States.[89] In July 2015, the U.S. Food and Drug Administration approved lumacaftor, a chaperone for protein folding, for use in combination with ivacaftor.

In 2018, the FDA approved the combination ivacaftor/tezacaftor; the manufacturer announced a list price of $292,000 per year.[91] Tezacaftor helps move the CFTR protein to the correct position on the cell surface, and is designed to treat people with the F508del mutation.[92]

Several mechanical techniques are used to dislodge sputum and encourage its expectoration. One technique is chest physiotherapy, where a respiratory therapist percusses an individual’s chest by hand several times a day to loosen up secretions. This “percussive effect” can also be administered through devices that deliver chest wall oscillation or an intrapulmonary percussive ventilator. Other methods, such as biphasic cuirass ventilation and the associated clearance mode available in such devices, integrate a cough-assistance phase as well as a vibration phase for dislodging secretions. These devices are portable and adapted for home use. Chest physiotherapy is beneficial for short-term airway clearance.[8]

Another technique is positive expiratory pressure physiotherapy, which consists of providing a back pressure to the airways during expiration. This effect is provided by devices consisting of a mask or a mouthpiece in which resistance is applied only during the expiration phase.[93] This technique is thought to work by increasing gas pressure behind mucus through collateral ventilation, along with a temporary increase in functional residual capacity that prevents the early collapse of small airways during exhalation.[94][95]

As lung disease worsens, mechanical breathing support may become necessary. Individuals with CF may need to wear special masks at night to help push air into their lungs. These machines, known as bilevel positive airway pressure (BiPAP) ventilators, help prevent low blood oxygen levels during sleep. Non-invasive ventilators may be used during physical therapy to improve sputum clearance.[96] It is not known if this type of therapy has an impact on pulmonary exacerbations or disease progression.[96] It is not known what role non-invasive ventilation therapy has for improving exercise capacity in people with cystic fibrosis.[96] During severe illness, a tube may be placed in the throat (a procedure known as a tracheostomy) to enable breathing supported by a ventilator.

For children, preliminary studies show massage therapy may help improve quality of life for them and their families.[97]

Some lung infections require surgical removal of the infected part of the lung. If this is necessary many times, lung function is severely reduced.[98] It is not clear which treatment options are most effective for people with CF who have spontaneous or recurrent pneumothoraces.[23]

Lung transplantation often becomes necessary for individuals with CF as lung function and exercise tolerance decline. Although single lung transplantation is possible in other diseases, individuals with CF must have both lungs replaced because the remaining lung might contain bacteria that could infect the transplanted lung. A pancreatic or liver transplant may be performed at the same time to alleviate liver disease and/or diabetes.[99] Lung transplantation is considered when lung function declines to the point where assistance from mechanical devices is required or someone’s survival is threatened.[100]

Newborns with intestinal obstruction typically require surgery, whereas adults with distal intestinal obstruction syndrome typically do not. Treatment of pancreatic insufficiency by replacement of missing digestive enzymes allows the duodenum to properly absorb nutrients and vitamins that would otherwise be lost in the feces. However, the best dosage and form of pancreatic enzyme replacement is unclear, as are the risks and long-term effectiveness of this treatment.[101]

So far, no large-scale research involving the incidence of atherosclerosis and coronary heart disease in adults with cystic fibrosis has been conducted. This is likely because the vast majority of people with cystic fibrosis do not live long enough to develop clinically significant atherosclerosis or coronary heart disease.

Diabetes is the most common nonpulmonary complication of CF. It mixes features of type 1 and type 2 diabetes, and is recognized as a distinct entity, cystic fibrosis-related diabetes.[34][102] While oral antidiabetic drugs are sometimes used, the recommended treatment is the use of insulin injections or an insulin pump,[103] and, unlike in type 1 and 2 diabetes, dietary restrictions are not recommended.[34]

There is no strong evidence that people with cystic fibrosis can prevent osteoporosis by increasing their intake of vitamin D.[104] Bisphosphonates taken by mouth or intravenously can be used to improve the bone mineral density in people with cystic fibrosis.[105] When taking bisphosphonates intravenously, adverse effects such as pain and flu-like symptoms can be an issue.[105] The adverse effects of bisphosphonates taken by mouth on the gastrointestinal tract are not known.[105]

Poor growth may be avoided by insertion of a feeding tube for increasing food energy through supplemental feeds or by administration of injected growth hormone.[106]

Sinus infections are treated by prolonged courses of antibiotics. The development of nasal polyps or other chronic changes within the nasal passages may severely limit airflow through the nose, and over time reduce the person’s sense of smell. Sinus surgery is often used to alleviate nasal obstruction and to limit further infections. Nasal steroids such as fluticasone are used to decrease nasal inflammation.[107]

Female infertility may be overcome by assisted reproduction technology, particularly embryo transfer techniques. Male infertility caused by absence of the vas deferens may be overcome with testicular sperm extraction, collecting sperm cells directly from the testicles. If the collected sample contains too few sperm cells for spontaneous fertilization to be likely, intracytoplasmic sperm injection can be performed.[108] Third party reproduction is also a possibility for women with CF. Whether taking antioxidants affects outcomes is unclear.[109]

The prognosis for cystic fibrosis has improved due to earlier diagnosis through screening and better treatment and access to health care. In 1959, the median age of survival of children with CF in the United States was six months.[110] In 2010, survival was estimated to be 37 years for women and 40 for men.[111] In Canada, median survival increased from 24 years in 1982 to 47.7 in 2007.[112]

In the US, of those with CF who are more than 18 years old as of 2009, 92% had graduated from high school, 67% had at least some college education, 15% were disabled, 9% were unemployed, 56% were single, and 39% were married or living with a partner.[113]

Chronic illnesses can be very difficult to manage. CF is a chronic illness that affects the “digestive and respiratory tracts resulting in generalized malnutrition and chronic respiratory infections”.[114] The thick secretions clog the airways in the lungs, which often cause inflammation and severe lung infections.[115][116] When health is compromised, it affects the quality of life (QOL) of someone with CF and their ability to complete everyday tasks such as chores. According to Schmitz and Goldbeck (2006), CF significantly increases emotional stress on both the individual and the family, “and the necessary time-consuming daily treatment routine may have further negative effects on quality of life”.[117] However, Havermans and colleagues (2006) have shown that young outpatients with CF who have participated in the Cystic Fibrosis Questionnaire-Revised “rated some QOL domains higher than did their parents”.[118] Consequently, outpatients with CF have a more positive outlook for themselves. Furthermore, there are many ways to improve QOL in CF patients. Exercise is promoted to increase lung function. Integrating an exercise regimen into the CF patient’s daily routine can significantly improve QOL.[119] No definitive cure for CF is known, but diverse medications are used, such as mucolytics, bronchodilators, steroids, and antibiotics, that have the purpose of loosening mucus, expanding airways, decreasing inflammation, and fighting lung infections, respectively.[120]

Cystic fibrosis is the most common life-limiting autosomal recessive disease among people of European heritage.[122] In the United States, about 30,000 individuals have CF; most are diagnosed by six months of age. In Canada, about 4,000 people have CF.[123] Around 1 in 25 people of European descent, and one in 30 of Caucasian Americans,[124] is a carrier of a CF mutation. Although CF is less common in these groups, roughly one in 46 Hispanics, one in 65 Africans, and one in 90 Asians carry at least one abnormal CFTR gene.[125][126] Ireland has the world’s highest prevalence of CF, at one in 1353.[127]

Although technically a rare disease, CF is ranked as one of the most widespread life-shortening genetic diseases. It is most common among nations in the Western world. An exception is Finland, where only one in 80 people carries a CF mutation.[128] The World Health Organization states, “In the European Union, one in 2000–3000 newborns is found to be affected by CF”.[129] In the United States, one in 3,500 children is born with CF.[130] In 1997, about one in 3,300 Caucasian children in the United States was born with CF. In contrast, only one in 15,000 African American children suffered from it, and in Asian Americans, the rate was even lower at one in 32,000.[131]
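
As a rough consistency check (not a claim from the article), the carrier and newborn figures above are linked by the Hardy–Weinberg relationship for an autosomal recessive disease: if roughly 1 in 25 people carries one mutated allele, about 1 in 2,500 newborns is expected to inherit two. A minimal sketch, under idealized assumptions, is shown below.

```python
# Illustrative Hardy-Weinberg sketch (idealized assumptions: random mating, no
# selection or migration), linking carrier frequency to expected newborn incidence
# for an autosomal recessive disease such as CF.

def incidence_from_carrier_freq(carrier_freq: float) -> float:
    """Estimate q**2 (affected newborn fraction) from the carrier frequency 2pq."""
    q = carrier_freq / 2.0   # for a rare allele, p is close to 1, so 2pq is about 2q
    return q * q

affected = incidence_from_carrier_freq(1 / 25)   # 1-in-25 carrier rate from the article
print(f"Expected incidence: about 1 in {round(1 / affected)} newborns")
# Prints "about 1 in 2500", broadly consistent with the quoted European
# (1 in 2,000-3,000) and US (1 in 3,500) figures.
```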

Cystic fibrosis is diagnosed in males and females equally. For reasons that remain unclear, data have shown that males tend to have a longer life expectancy than females,[132][133] but recent studies suggest this gender gap may no longer exist, perhaps due to improvements in health care facilities,[134][135] while a recent study from Ireland identified a link between the female hormone estrogen and worse outcomes in CF.[136]

The distribution of CF alleles varies among populations. The frequency of ΔF508 carriers has been estimated at one in 200 in northern Sweden, one in 143 in Lithuanians, and one in 38 in Denmark. No ΔF508 carriers were found among 171 Finns and 151 Saami people.[137] ΔF508 does occur in Finland, but it is a minority allele there. CF is known to occur in only 20 families (pedigrees) in Finland.[138]

The ΔF508 mutation is estimated to be up to 52,000 years old.[139] Numerous hypotheses have been advanced as to why such a lethal mutation has persisted and spread in the human population. Other common autosomal recessive diseases such as sickle-cell anemia have been found to protect carriers from other diseases, an evolutionary trade-off known as heterozygote advantage, and resistance to several infectious diseases has likewise been proposed as a possible source of heterozygote advantage in CF.

CF is thought to have appeared about 3,000 BC because of migration of peoples, gene mutations, and new conditions in nourishment.[148] Although the entire clinical spectrum of CF was not recognized until the 1930s, certain aspects of CF were identified much earlier. Indeed, literature from Germany and Switzerland in the 18th century warned “Wehe dem Kind, das beim Kuß auf die Stirn salzig schmeckt, es ist verhext und muss bald sterben” or “Woe to the child who tastes salty from a kiss on the brow, for he is cursed and soon must die”, recognizing the association between the salt loss in CF and illness.[148]

In the 19th century, Carl von Rokitansky described a case of fetal death with meconium peritonitis, a complication of meconium ileus associated with CF. Meconium ileus was first described in 1905 by Karl Landsteiner.[148] In 1936, Guido Fanconi described a connection between celiac disease, cystic fibrosis of the pancreas, and bronchiectasis.[149]

In 1938, Dorothy Hansine Andersen published an article, “Cystic Fibrosis of the Pancreas and Its Relation to Celiac Disease: a Clinical and Pathological Study”, in the American Journal of Diseases of Children. She was the first to describe the characteristic cystic fibrosis of the pancreas and to correlate it with the lung and intestinal disease prominent in CF.[10] She also first hypothesized that CF was a recessive disease and first used pancreatic enzyme replacement to treat affected children. In 1952, Paul di Sant’Agnese discovered abnormalities in sweat electrolytes; a sweat test was developed and improved over the next decade.[150]

The first linkage between CF and another marker (paraoxonase) was found in 1985 by Hans Eiberg, indicating that only one locus exists for CF. In 1988, the first mutation for CF, ΔF508, was discovered by Francis Collins, Lap-Chee Tsui, and John R. Riordan on the seventh chromosome. Subsequent research has found over 1,000 different mutations that cause CF.

Because mutations in the CFTR gene are typically small, classical genetics techniques had been unable to accurately pinpoint the mutated gene.[151] Using protein markers, gene-linkage studies were able to map the mutation to chromosome 7. Chromosome-walking and -jumping techniques were then used to identify and sequence the gene.[152] In 1989, Lap-Chee Tsui led a team of researchers at the Hospital for Sick Children in Toronto that discovered the gene responsible for CF. CF represents a classic example of how a human genetic disorder was elucidated strictly by the process of forward genetics.

Gene therapy has been explored as a potential cure for CF. Results from clinical trials have shown limited success as of 2016, and using gene therapy as routine therapy is not suggested.[153] A small study published in 2015 found a small benefit.[154]

Much CF gene therapy research is aimed at trying to place a normal copy of the CFTR gene into affected cells. Transferring the normal CFTR gene into the affected epithelial cells would result in the production of functional CFTR protein in all target cells, without adverse reactions or an inflammation response. To prevent the lung manifestations of CF, only 5–10% of the normal amount of CFTR gene expression is needed.[155] Multiple approaches have been tested for gene transfer, such as liposomes and viral vectors in animal models and clinical trials. However, both methods were found to be relatively inefficient treatment options,[156] mainly because very few cells take up the vector and express the gene, so the treatment has little effect. Additionally, problems have been noted in cDNA recombination, such that the gene introduced by the treatment is rendered unusable.[157] CFTR function has been repaired in culture by CRISPR/Cas9 in intestinal stem cell organoids of cystic fibrosis patients.[158]

A number of small molecules that aim at compensating various mutations of the CFTR gene are under development. One approach is to develop drugs that get the ribosome to overcome the stop codon and synthesize a full-length CFTR protein. About 10% of CF results from a premature stop codon in the DNA, leading to early termination of protein synthesis and truncated proteins. These drugs target nonsense mutations such as G542X, which consists of the amino acid glycine in position 542 being replaced by a stop codon. Aminoglycoside antibiotics interfere with protein synthesis and error-correction. In some cases, they can cause the cell to overcome a premature stop codon by inserting a random amino acid, thereby allowing expression of a full-length protein.[159] The aminoglycoside gentamicin has been used to treat lung cells from CF patients in the laboratory to induce the cells to grow full-length proteins.[160] Another drug targeting nonsense mutations is ataluren, which is undergoing Phase III clinical trials as of October 2011.[161] Lumacaftor/ivacaftor was approved by the FDA in July 2015.[162]

It is unclear as of 2014 if ursodeoxycholic acid is useful for those with cystic fibrosis-related liver disease.[163]

Gene therapy – Wikipedia

In medicine, gene therapy (also called human gene transfer) is the therapeutic delivery of nucleic acid into a patient’s cells as a drug to treat disease.[1][2] The first attempt at modifying human DNA was performed in 1980 by Martin Cline, but the first successful nuclear gene transfer in humans, approved by the National Institutes of Health, was performed in May 1989.[3] The first therapeutic use of gene transfer as well as the first direct insertion of human DNA into the nuclear genome was performed by French Anderson in a trial starting in September 1990.

Between 1989 and February 2016, over 2,300 clinical trials had been conducted, more than half of them in phase I.[4]

Not all medical procedures that introduce alterations to a patient’s genetic makeup can be considered gene therapy. Bone marrow transplantation and organ transplants in general have been found to introduce foreign DNA into patients.[5] Gene therapy is defined by the precision of the procedure and the intention of direct therapeutic effects.

Gene therapy was conceptualized in 1972, by authors who urged caution before commencing human gene therapy studies.

The first attempt at gene therapy, an unsuccessful one (as well as the first case of medical transfer of foreign genes into humans, not counting organ transplantation), was performed by Martin Cline on 10 July 1980.[6][7] Cline claimed that one of the genes in his patients was active six months later, though he never published these data or had them verified,[8] and even if he is correct, it is unlikely the procedure produced any significant benefit in treating beta-thalassemia.

After extensive research on animals throughout the 1980s and a 1989 bacterial gene tagging trial on humans, the first gene therapy widely accepted as a success was demonstrated in a trial that started on 14 September 1990, when Ashi DeSilva was treated for ADA-SCID.[9]

The first somatic treatment that produced a permanent genetic change was performed in 1993.[citation needed]

Gene therapy is a way to fix a genetic problem at its source. The delivered nucleic acid polymers are either translated into proteins, interfere with target gene expression, or possibly correct genetic mutations.

The most common form uses DNA that encodes a functional, therapeutic gene to replace a mutated gene. The polymer molecule is packaged within a “vector”, which carries the molecule inside cells.

Early clinical failures led to dismissals of gene therapy. Clinical successes since 2006 regained researchers’ attention, although as of 2014, it was still largely an experimental technique.[10] These include treatment of retinal diseases Leber’s congenital amaurosis[11][12][13][14] and choroideremia,[15] X-linked SCID,[16] ADA-SCID,[17][18] adrenoleukodystrophy,[19] chronic lymphocytic leukemia (CLL),[20] acute lymphocytic leukemia (ALL),[21] multiple myeloma,[22] haemophilia,[18] and Parkinson’s disease.[23] Between 2013 and April 2014, US companies invested over $600 million in the field.[24]

The first commercial gene therapy, Gendicine, was approved in China in 2003 for the treatment of certain cancers.[25] In 2011 Neovasculgen was registered in Russia as the first-in-class gene-therapy drug for treatment of peripheral artery disease, including critical limb ischemia.[26] In 2012 Glybera, a treatment for a rare inherited disorder, became the first treatment to be approved for clinical use in either Europe or the United States after its endorsement by the European Commission.[10][27]

Following early advances in genetic engineering of bacteria, cells, and small animals, scientists started considering how to apply it to medicine. Two main approaches were considered: replacing or disrupting defective genes.[28] Scientists focused on diseases caused by single-gene defects, such as cystic fibrosis, haemophilia, muscular dystrophy, thalassemia, and sickle cell anemia. Glybera treats one such disease, caused by a defect in lipoprotein lipase.[27]

DNA must be administered, reach the damaged cells, enter the cell and either express or disrupt a protein.[29] Multiple delivery techniques have been explored. The initial approach incorporated DNA into an engineered virus to deliver the DNA into a chromosome.[30][31] Naked DNA approaches have also been explored, especially in the context of vaccine development.[32]

Generally, efforts focused on administering a gene that causes a needed protein to be expressed. More recently, increased understanding of nuclease function has led to more direct DNA editing, using techniques such as zinc finger nucleases and CRISPR. The vector incorporates genes into chromosomes. The expressed nucleases then knock out and replace genes in the chromosome. As of 2014 these approaches involve removing cells from patients, editing a chromosome and returning the transformed cells to patients.[33]

Gene editing is a potential approach to alter the human genome to treat genetic diseases,[34] viral diseases,[35] and cancer.[36] As of 2016 these approaches were still years from being medicine.[37][38]

Gene therapy may be classified into two types:

In somatic cell gene therapy (SCGT), the therapeutic genes are transferred into any cell other than a gamete, germ cell, gametocyte, or undifferentiated stem cell. Any such modifications affect the individual patient only, and are not inherited by offspring. Somatic gene therapy represents mainstream basic and clinical research, in which therapeutic DNA (either integrated in the genome or as an external episome or plasmid) is used to treat disease.

Over 600 clinical trials utilizing SCGT are underway in the US. Most focus on severe genetic disorders, including immunodeficiencies, haemophilia, thalassaemia, and cystic fibrosis. Such single gene disorders are good candidates for somatic cell therapy. The complete correction of a genetic disorder or the replacement of multiple genes is not yet possible. Only a few of the trials are in the advanced stages.[39]

In germline gene therapy (GGT), germ cells (sperm or egg cells) are modified by the introduction of functional genes into their genomes. Modifying a germ cell causes all the organism’s cells to contain the modified gene. The change is therefore heritable and passed on to later generations. Australia, Canada, Germany, Israel, Switzerland, and the Netherlands[40] prohibit GGT for application in human beings, for technical and ethical reasons, including insufficient knowledge about possible risks to future generations[40] and higher risks versus SCGT.[41] The US has no federal controls specifically addressing human genetic modification (beyond FDA regulations for therapies in general).[40][42][43][44]

The delivery of DNA into cells can be accomplished by multiple methods. The two major classes are recombinant viruses (sometimes called biological nanoparticles or viral vectors) and naked DNA or DNA complexes (non-viral methods).

In order to replicate, viruses introduce their genetic material into the host cell, tricking the host’s cellular machinery into using it as blueprints for viral proteins. Retroviruses go a stage further by having their genetic material copied into the genome of the host cell. Scientists exploit this by substituting a virus’s genetic material with therapeutic DNA. (The term ‘DNA’ may be an oversimplification, as some viruses contain RNA, and gene therapy could take this form as well.) A number of viruses have been used for human gene therapy, including retroviruses, adenoviruses, herpes simplex, vaccinia, and adeno-associated virus.[4] Like the genetic material (DNA or RNA) in viruses, therapeutic DNA can be designed to simply serve as a temporary blueprint that is degraded naturally or (at least theoretically) to enter the host’s genome, becoming a permanent part of the host’s DNA in infected cells.

Non-viral methods present certain advantages over viral methods, such as large scale production and low host immunogenicity. However, non-viral methods initially produced lower levels of transfection and gene expression, and thus lower therapeutic efficacy. Later technology remedied this deficiency[citation needed].

Methods for non-viral gene therapy include the injection of naked DNA, electroporation, the gene gun, sonoporation, magnetofection, the use of oligonucleotides, lipoplexes, dendrimers, and inorganic nanoparticles.

Several problems in gene therapy remain unsolved.

Three patients’ deaths have been reported in gene therapy trials, putting the field under close scrutiny. The first was that of Jesse Gelsinger, who died in 1999 of an immune rejection response.[51] One X-SCID patient died of leukemia in 2003.[9] In 2007, a rheumatoid arthritis patient died from an infection; the subsequent investigation concluded that the death was not related to gene therapy.[52]

In 1972 Friedmann and Roblin authored a paper in Science titled “Gene therapy for human genetic disease?”[53] Rogers (1970) was cited for proposing that exogenous good DNA be used to replace the defective DNA in those who suffer from genetic defects.[54]

In 1984 a retrovirus vector system was designed that could efficiently insert foreign genes into mammalian chromosomes.[55]

The first approved gene therapy clinical research in the US took place on 14 September 1990, at the National Institutes of Health (NIH), under the direction of William French Anderson.[56] Four-year-old Ashanti DeSilva received treatment for a genetic defect that left her with ADA-SCID, a severe immune system deficiency. The defective gene in the patient’s blood cells was replaced by the functional variant. Ashanti’s immune system was partially restored by the therapy. Production of the missing enzyme was temporarily stimulated, but new cells with functional genes were not generated. She could lead a normal life only with regular injections performed every two months. The effects were successful, but temporary.[57]

Cancer gene therapy was introduced in 1992/93 (Trojan et al. 1993).[58] The treatment of glioblastoma multiforme, a malignant brain tumor that is almost always fatal, was carried out using a vector expressing antisense IGF-I RNA (clinical trial approved by NIH protocol no. 1602 on 24 November 1993,[59] and by the FDA in 1994). This therapy also represents the beginning of cancer immunogene therapy, a treatment which proved effective due to the anti-tumor mechanism of IGF-I antisense, which is related to strong immune and apoptotic phenomena.

In 1992 Claudio Bordignon, working at the Vita-Salute San Raffaele University, performed the first gene therapy procedure using hematopoietic stem cells as vectors to deliver genes intended to correct hereditary diseases.[60] In 2002 this work led to the publication of the first successful gene therapy treatment for adenosine deaminase deficiency (ADA-SCID). The success of a multi-center trial for treating children with SCID (severe combined immune deficiency or "bubble boy" disease) between 2000 and 2002 was questioned when two of the ten children treated at the trial's Paris center developed a leukemia-like condition. Clinical trials were halted temporarily in 2002, but resumed after regulatory review of the protocol in the US, the United Kingdom, France, Italy, and Germany.[61]

In 1993 Andrew Gobea was born with SCID, which had been detected by prenatal genetic screening. Blood was removed from his mother's placenta and umbilical cord immediately after birth, to acquire stem cells. The allele that codes for adenosine deaminase (ADA) was obtained and inserted into a retrovirus. Retroviruses and stem cells were mixed, after which the viruses inserted the gene into the stem cell chromosomes. Stem cells containing the working ADA gene were injected into Andrew's blood. Injections of the ADA enzyme were also given weekly. For four years T cells (white blood cells), produced by stem cells, made ADA enzymes using the ADA gene. After four years more treatment was needed.[62]

Jesse Gelsinger’s death in 1999 impeded gene therapy research in the US.[63][64] As a result, the FDA suspended several clinical trials pending the reevaluation of ethical and procedural practices.[65]

The modified cancer gene therapy strategy of antisense IGF-I RNA (NIH protocol no. 1602)[59] using the antisense/triple helix anti-IGF-I approach was registered in 2002 in the Wiley gene therapy clinical trial database (nos. 635 and 636). The approach has shown promising results in the treatment of six different malignant tumors: glioblastoma, and cancers of the liver, colon, prostate, uterus, and ovary (Collaborative NATO Science Programme on Gene Therapy, USA, France, Poland, no. LST 980517, conducted by J. Trojan) (Trojan et al., 2012). This anti-gene antisense/triple helix therapy has proven to be efficient due to a mechanism that simultaneously stops IGF-I expression at the transcription and translation levels, strengthening anti-tumor immune and apoptotic phenomena.

Sickle-cell disease can be treated in mice.[66] In mice that carry essentially the same defect that causes the human disease, researchers used a viral vector to induce production of fetal hemoglobin (HbF), which normally ceases to be produced shortly after birth. In humans, the use of hydroxyurea to stimulate the production of HbF temporarily alleviates sickle cell symptoms. The researchers demonstrated this treatment to be a more permanent means to increase therapeutic HbF production.[67]

A new gene therapy approach repaired errors in messenger RNA derived from defective genes. This technique has the potential to treat thalassaemia, cystic fibrosis and some cancers.[68]

Researchers created liposomes 25 nanometers across that can carry therapeutic DNA through pores in the nuclear membrane.[69]

In 2003 a research team inserted genes into the brain for the first time. They used liposomes coated in a polymer called polyethylene glycol, which, unlike viral vectors, are small enough to cross the blood-brain barrier.[70]

Short pieces of double-stranded RNA (short, interfering RNAs or siRNAs) are used by cells to degrade RNA of a particular sequence. If a siRNA is designed to match the RNA copied from a faulty gene, then the abnormal protein product of that gene will not be produced.[71]
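
The base-pairing logic behind siRNA can be made concrete in a few lines of code. The following Python sketch is illustrative only: the sequences and the crude off-target check are invented for the example and are not taken from any study described here. It designs a 21-nucleotide guide strand as the reverse complement of a target site in the faulty gene's mRNA and checks that the site does not appear in a small set of other transcripts.

# Illustrative sketch: design a 21-nt siRNA guide strand against a target
# mRNA site and run a crude off-target check. All sequences are invented.
RNA_COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}
def reverse_complement_rna(seq):
    """Return the reverse complement of an RNA sequence."""
    return "".join(RNA_COMPLEMENT[base] for base in reversed(seq))
def design_sirna_guide(target_site):
    """The guide (antisense) strand base-pairs with the 21-nt target mRNA site."""
    if len(target_site) != 21:
        raise ValueError("siRNA target sites are typically 21 nt long")
    return reverse_complement_rna(target_site)
faulty_mrna_site = "AUGGCUAGCUUAGGCUAACCU"   # hypothetical site in the faulty gene's mRNA
guide = design_sirna_guide(faulty_mrna_site)
print("guide strand:", guide)
# Crude specificity check: the target site should not occur in other transcripts.
other_transcripts = ["AUGGAAGGCUUACCGGAUUCA", "GGCUUAACCGUAGCUAGGCAA"]
print("off-target match found:", any(faulty_mrna_site in t for t in other_transcripts))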

Gendicine is a cancer gene therapy that delivers the tumor suppressor gene p53 using an engineered adenovirus. In 2003, it was approved in China for the treatment of head and neck squamous cell carcinoma.[25]

In March 2006 researchers announced the successful use of gene therapy to treat two adult patients for X-linked chronic granulomatous disease, a disease which affects myeloid cells and damages the immune system. The study is the first to show that gene therapy can treat the myeloid system.[72]

In May a team reported a way to prevent the immune system from rejecting a newly delivered gene.[73] Similar to organ transplantation, gene therapy has been plagued by this problem. The immune system normally recognizes the new gene as foreign and rejects the cells carrying it. The research utilized a newly uncovered network of genes regulated by molecules known as microRNAs. This natural function selectively obscured their therapeutic gene in immune system cells and protected it from discovery. Mice infected with the gene containing an immune-cell microRNA target sequence did not reject the gene.

In August 2006 scientists successfully treated metastatic melanoma in two patients using killer T cells genetically retargeted to attack the cancer cells.[74]

In November researchers reported on the use of VRX496, a gene-based immunotherapy for the treatment of HIV that uses a lentiviral vector to deliver an antisense gene against the HIV envelope. In a phase I clinical trial, five subjects with chronic HIV infection who had failed to respond to at least two antiretroviral regimens were treated. A single intravenous infusion of autologous CD4 T cells genetically modified with VRX496 was well tolerated. All patients had stable or decreased viral load; four of the five patients had stable or increased CD4 T cell counts. All five patients had stable or increased immune response to HIV antigens and other pathogens. This was the first evaluation of a lentiviral vector administered in a US human clinical trial.[75][76]

In May 2007 researchers announced the first gene therapy trial for inherited retinal disease. The first operation was carried out on a 23-year-old British male, Robert Johnson, in early 2007.[77]

Leber's congenital amaurosis is an inherited blinding disease caused by mutations in the RPE65 gene. The results of a small clinical trial in children were published in April 2008.[11] Delivery of recombinant adeno-associated virus (AAV) carrying RPE65 yielded positive results. In May two more groups reported positive results in independent clinical trials using gene therapy to treat the condition. In all three clinical trials, patients recovered functional vision without apparent side-effects.[11][12][13][14]

In September 2009 researchers were able to give trichromatic vision to squirrel monkeys.[78] In November 2009, researchers halted a fatal genetic disorder called adrenoleukodystrophy in two children using a lentivirus vector to deliver a functioning version of ABCD1, the gene that is mutated in the disorder.[79]

An April 2010 paper reported that gene therapy addressed achromatopsia (color blindness) in dogs by targeting cone photoreceptors. Cone function and day vision were restored for at least 33 months in two young dogs. The therapy was less efficient for older dogs.[80]

In September 2010 it was announced that an 18-year-old male patient in France with beta-thalassemia major had been successfully treated.[81] Beta-thalassemia major is an inherited blood disease in which beta haemoglobin is missing and patients are dependent on regular lifelong blood transfusions.[82] The technique used a lentiviral vector to transduce the human β-globin gene into purified blood and marrow cells obtained from the patient in June 2007.[83] The patient's haemoglobin levels were stable at 9 to 10 g/dL. About a third of the hemoglobin contained the form introduced by the viral vector and blood transfusions were not needed.[83][84] Further clinical trials were planned.[85] Bone marrow transplants are the only cure for thalassemia, but 75% of patients do not find a matching donor.[84]

Cancer immunogene therapy using a modified antigene, antisense/triple helix approach was introduced in South America in 2010/11 at La Sabana University, Bogota (Ethical Committee 14 December 2010, no. P-004-10). Considering the ethical aspects of gene diagnostics and gene therapy targeting IGF-I, IGF-I-expressing tumors, i.e. lung and epidermis cancers, were treated (Trojan et al. 2016).[86][87]

In 2007 and 2008, a man (Timothy Ray Brown) was cured of HIV by repeated hematopoietic stem cell transplantation (see also allogeneic stem cell transplantation, allogeneic bone marrow transplantation, allotransplantation) from a donor carrying a double delta-32 mutation, which disables the CCR5 receptor. This cure was accepted by the medical community in 2011.[88] It required complete ablation of existing bone marrow, which is very debilitating.

In August 2011 two of three subjects of a pilot study were confirmed to have been cured from chronic lymphocytic leukemia (CLL). The therapy used genetically modified T cells to attack cells that expressed the CD19 protein to fight the disease.[20] In 2013, the researchers announced that 26 of 59 patients had achieved complete remission and the original patient had remained tumor-free.[89]

Human HGF plasmid DNA therapy of cardiomyocytes is being examined as a potential treatment for coronary artery disease as well as treatment for the damage that occurs to the heart after myocardial infarction.[90][91]

In 2011 Neovasculgen was registered in Russia as the first-in-class gene-therapy drug for treatment of peripheral artery disease, including critical limb ischemia; it delivers the gene encoding for VEGF.[92][26] Neovasculgen is a plasmid encoding the CMV promoter and the 165 amino acid form of VEGF.[93][94]

The FDA approved Phase 1 clinical trials on thalassemia major patients in the US for 10 participants in July 2012.[95] The study was expected to continue until 2015.[85]

In July 2012, the European Medicines Agency recommended approval of a gene therapy treatment for the first time in either Europe or the United States. The treatment used Alipogene tiparvovec (Glybera) to compensate for lipoprotein lipase deficiency, which can cause severe pancreatitis.[96] The recommendation was endorsed by the European Commission in November 2012[10][27][97][98] and commercial rollout began in late 2014.[99] Alipogene tiparvovec was expected to cost around $1.6 million per treatment in 2012,[100] revised to $1 million in 2015,[101] making it the most expensive medicine in the world at the time.[102] As of 2016, only one person had been treated with the drug.[103]

In December 2012, it was reported that 10 of 13 patients with multiple myeloma were in remission “or very close to it” three months after being injected with a treatment involving genetically engineered T cells to target proteins NY-ESO-1 and LAGE-1, which exist only on cancerous myeloma cells.[22]

In March 2013 researchers reported that three of five adult subjects who had acute lymphocytic leukemia (ALL) had been in remission for five months to two years after being treated with genetically modified T cells which attacked cells with the CD19 protein on their surface, i.e. all B-cells, cancerous or not. The researchers believed that the patients' immune systems would make normal T-cells and B-cells after a couple of months. They were also given bone marrow. One patient relapsed and died and one died of a blood clot unrelated to the disease.[21]

Following encouraging Phase 1 trials, in April, researchers announced they were starting Phase 2 clinical trials (called CUPID2 and SERCA-LVAD) on 250 patients[104] at several hospitals to combat heart disease. The therapy was designed to increase the levels of SERCA2, a protein in heart muscles, improving muscle function.[105] The FDA granted this a Breakthrough Therapy Designation to accelerate the trial and approval process.[106] In 2016 it was reported that no improvement was found from the CUPID 2 trial.[107]

In July 2013 researchers reported promising results for six children with two severe hereditary diseases who had been treated with a partially deactivated lentivirus to replace a faulty gene and followed for 7 to 32 months. Three of the children had metachromatic leukodystrophy, which causes children to lose cognitive and motor skills.[108] The other children had Wiskott-Aldrich syndrome, which leaves them open to infection, autoimmune diseases, and cancer.[109] Follow-up trials with gene therapy on another six children with Wiskott-Aldrich syndrome were also reported as promising.[110][111]

In October researchers reported that two children born with adenosine deaminase severe combined immunodeficiency disease (ADA-SCID) had been treated with genetically engineered stem cells 18 months previously and that their immune systems were showing signs of full recovery. Another three children were making progress.[18] In 2014 a further 18 children with ADA-SCID were cured by gene therapy.[112] ADA-SCID children have no functioning immune system and are sometimes known as “bubble children.”[18]

Also in October researchers reported that they had treated six hemophilia sufferers in early 2011 using an adeno-associated virus. Over two years later all six were producing clotting factor.[18][113]

In January 2014 researchers reported that six choroideremia patients had been treated with adeno-associated virus with a copy of REP1. Over a six-month to two-year period all had improved their sight.[114][115] By 2016, 32 patients had been treated with positive results and researchers were hopeful the treatment would be long-lasting.[15] Choroideremia is an inherited genetic eye disease with no approved treatment, leading to loss of sight.

In March 2014 researchers reported that 12 HIV patients had been treated since 2009 in a trial with T cells genetically engineered to carry a rare mutation (CCR5 deficiency) known to protect against HIV, with promising results.[116][117]

Clinical trials of gene therapy for sickle cell disease were started in 2014.[118][119] There is a need for high quality randomised controlled trials assessing the risks and benefits involved with gene therapy for people with sickle cell disease.[120]

In February 2015 LentiGlobin BB305, a gene therapy treatment undergoing clinical trials for treatment of beta thalassemia, gained FDA "breakthrough" status after several patients were able to forgo the frequent blood transfusions usually required to treat the disease.[121]

In March 2015 researchers delivered a recombinant gene encoding a broadly neutralizing antibody into monkeys infected with simian HIV; the monkeys' cells produced the antibody, which cleared them of HIV. The technique is named immunoprophylaxis by gene transfer (IGT). Animal tests for antibodies to ebola, malaria, influenza, and hepatitis were underway.[122][123]

In March 2015, scientists, including an inventor of CRISPR, Jennifer Doudna, urged a worldwide moratorium on germline gene therapy, writing "scientists should avoid even attempting, in lax jurisdictions, germline genome modification for clinical application in humans" until the full implications "are discussed among scientific and governmental organizations".[124][125][126][127]

In October 2015, researchers announced that they had treated a baby girl, Layla Richards, with an experimental treatment using donor T-cells genetically engineered using TALEN to attack cancer cells. One year after the treatment she was still free of her cancer (a highly aggressive form of acute lymphoblastic leukaemia [ALL]).[128] Children with highly aggressive ALL normally have a very poor prognosis and Layla's disease had been regarded as terminal before the treatment.[129]

In December 2015, scientists of major world academies called for a moratorium on inheritable human genome edits, including those related to CRISPR-Cas9 technologies,[130] but said that basic research, including embryo gene editing, should continue.[131]

In April 2016 the Committee for Medicinal Products for Human Use of the European Medicines Agency endorsed a gene therapy treatment called Strimvelis,[132][133] and the European Commission approved it in June.[134] This treats children born with adenosine deaminase deficiency who have no functioning immune system. This was the second gene therapy treatment to be approved in Europe.[135]

In October 2016, Chinese scientists reported they had started a trial to genetically modify T-cells from 10 adult patients with lung cancer and reinject the modified T-cells back into their bodies to attack the cancer cells. The T-cells had the PD-1 protein (which stops or slows the immune response) removed using CRISPR-Cas9.[136][137]

A 2016 Cochrane systematic review looking at data from four trials on topical cystic fibrosis transmembrane conductance regulator (CFTR) gene therapy does not support its clinical use as a mist inhaled into the lungs to treat cystic fibrosis patients with lung infections. One of the four trials did find weak evidence that liposome-based CFTR gene transfer therapy may lead to a small respiratory improvement for people with CF. This weak evidence is not enough to make a clinical recommendation for routine CFTR gene therapy.[138]

In February 2017 Kite Pharma announced results from a clinical trial of CAR-T cells in around a hundred people with advanced non-Hodgkin lymphoma.[139]

In March 2017, French scientists reported on clinical research of gene therapy to treat sickle-cell disease.[140]

In August, the FDA approved tisagenlecleucel for acute lymphoblastic leukemia.[141] Tisagenlecleucel is an adoptive cell transfer therapy for B-cell acute lymphoblastic leukemia; T cells from a person with cancer are removed, genetically engineered to express a chimeric antigen receptor (CAR) that reacts to the cancer, and are administered back to the person. The T cells are engineered to target a protein called CD19 that is common on B cells. This is the first form of gene therapy to be approved in the United States. In October, a similar therapy called axicabtagene ciloleucel was approved for non-Hodgkin lymphoma.[142]

In December 2017 the results of using an adeno-associated virus carrying the gene for blood clotting factor VIII to treat nine haemophilia A patients were published. Six of the seven patients on the high dose regime increased their levels of clotting factor VIII to normal levels. The low and medium dose regimes had no effect on the patients' blood clotting levels.[143][144]

In December, the FDA approved Luxturna, the first in vivo gene therapy, for the treatment of blindness due to Leber’s congenital amaurosis.[145] The price of this treatment was 850,000 US dollars for both eyes.[146][147] CRISPR gene editing technology has also been used on mice to treat deafness due to the DFNA36 mutation, which also affects humans.[148]

Speculated uses for gene therapy include:

Gene therapy techniques have the potential to provide alternative treatments for those with infertility. Recently, successful experimentation on mice has shown that fertility can be restored by using the CRISPR gene-editing method.[149] Spermatogonial stem cells from another organism were transplanted into the testes of an infertile male mouse. The stem cells re-established spermatogenesis and fertility.[150]

Athletes might adopt gene therapy technologies to improve their performance.[151] Gene doping is not known to occur, but multiple gene therapies may have such effects. Kayser et al. argue that gene doping could level the playing field if all athletes receive equal access. Critics claim that any therapeutic intervention for non-therapeutic/enhancement purposes compromises the ethical foundations of medicine and sports.[152]

Genetic engineering could be used to cure diseases, but also to change physical appearance, metabolism, and even improve physical capabilities and mental faculties such as memory and intelligence. Ethical claims about germline engineering include beliefs that every fetus has a right to remain genetically unmodified, that parents hold the right to genetically modify their offspring, and that every child has the right to be born free of preventable diseases.[153][154][155] For parents, genetic engineering could be seen as another child enhancement technique to add to diet, exercise, education, training, cosmetics, and plastic surgery.[156][157] Another theorist claims that moral concerns limit but do not prohibit germline engineering.[158]

Possible regulatory schemes include a complete ban, provision to everyone, or professional self-regulation. The American Medical Association's Council on Ethical and Judicial Affairs stated that "genetic interventions to enhance traits should be considered permissible only in severely restricted situations: (1) clear and meaningful benefits to the fetus or child; (2) no trade-off with other characteristics or traits; and (3) equal access to the genetic technology, irrespective of income or other socioeconomic characteristics."[159]

As early as 1990, scientists have opposed attempts to modify the human germline using these new tools,[160] and such concerns have continued as technology progressed.[161][162] With the advent of new techniques like CRISPR, in March 2015 a group of scientists urged a worldwide moratorium on clinical use of gene editing technologies to edit the human genome in a way that can be inherited.[124][125][126][127] In April 2015, researchers sparked controversy when they reported results of basic research to edit the DNA of non-viable human embryos using CRISPR.[149][163] A committee of the American National Academy of Sciences and National Academy of Medicine gave qualified support to human genome editing in 2017,[164][165] once answers have been found to safety and efficiency problems, "but only for serious conditions under stringent oversight."[166]

Regulations covering genetic modification are part of general guidelines about human-involved biomedical research. There are no international treaties which are legally binding in this area, but there are recommendations for national laws from various bodies.

The Helsinki Declaration (Ethical Principles for Medical Research Involving Human Subjects) was amended by the World Medical Association's General Assembly in 2008. This document provides principles physicians and researchers must consider when involving humans as research subjects. The Statement on Gene Therapy Research initiated by the Human Genome Organization (HUGO) in 2001 provides a legal baseline for all countries. HUGO's document emphasizes human freedom and adherence to human rights, and offers recommendations for somatic gene therapy, including the importance of recognizing public concerns about such research.[167]

No federal legislation lays out protocols or restrictions about human genetic engineering. This subject is governed by overlapping regulations from local and federal agencies, including the Department of Health and Human Services, the FDA and NIH's Recombinant DNA Advisory Committee. Researchers seeking federal funds for an investigational new drug application (commonly the case for somatic human genetic engineering) must obey international and federal guidelines for the protection of human subjects.[168]

NIH serves as the main gene therapy regulator for federally funded research. Privately funded research is advised to follow these regulations. NIH provides funding for research that develops or enhances genetic engineering techniques and to evaluate the ethics and quality in current research. The NIH maintains a mandatory registry of human genetic engineering research protocols that includes all federally funded projects.

An NIH advisory committee published a set of guidelines on gene manipulation.[169] The guidelines discuss lab safety as well as human test subjects and various experimental types that involve genetic changes. Several sections specifically pertain to human genetic engineering, including Section III-C-1. This section describes required review processes and other aspects when seeking approval to begin clinical research involving genetic transfer into a human patient.[170] The protocol for a gene therapy clinical trial must be approved by the NIH’s Recombinant DNA Advisory Committee prior to any clinical trial beginning; this is different from any other kind of clinical trial.[169]

As with other kinds of drugs, the FDA regulates the quality and safety of gene therapy products and supervises how these products are used clinically. Therapeutic alteration of the human genome falls under the same regulatory requirements as any other medical treatment. Research involving human subjects, such as clinical trials, must be reviewed and approved by the FDA and an Institutional Review Board.[171][172]

Gene therapy is the basis for the plotline of the film I Am Legend[173] and the TV show Will Gene Therapy Change the Human Race?.[174] In 1994, gene therapy was a plot element in The Erlenmeyer Flask, The X-Files’ first season finale. It is also used in Stargate as a means of allowing humans to use Ancient technology.[175]

Read the original:

Gene therapy – Wikipedia

Gene Therapy – Sumanas, Inc.

Gene Therapy

A few years ago, a clinical trial began in France in the hope of curing children with a type of genetic immune deficiency called SCID-X1. Children with this disease have a defective gene, called gamma-c, which prevents a subset of the cells of the immune system from forming, and predisposes the children to life-threatening infections. In an attempt to cure the children, who would otherwise die at a young age, physicians used gene therapy to provide them with normal gamma-c genes.

This particular trial has had striking success as well as tragedy. Eight of the eleven children are currently thriving. However, in two cases the therapy successfully introduced gamma-c genes, but these children have since developed leukemia. In both children, a gamma-c gene inserted next to another gene, called LMO2. The LMO2 gene has previously been linked to leukemia, and scientists speculate that the insertion of the gamma-c gene next to LMO2 may have overstimulated the gene, causing T cells to proliferate in excess. An LMO2 effect, in combination with the proliferation-inducing effects of the gamma-c gene itself, may be the cause of the leukemia in these two patients. Scientists are still investigating other possible causes.

From this single trial, it is clear that gene therapy holds significant promise, yet it is also clear that it poses significant risks. To learn more about the application of gene therapy in SCID, view the accompanying animation.

Read more from the original source:

Gene Therapy – Sumanas, Inc.

Gene Therapy Net – News, Conferences, Vectors, Literature …

Posted on: 22 March 2018, source: Gizmodo. On Tuesday, a 13-year-old boy from New Jersey was at the center of medical history as he became the first person in the US to receive an FDA-approved gene therapy for an inherited disease. The event marks the beginning of a new era of medicine, one in which devastating genetic conditions that we are born with can be simply edited out of our DNA with the help of modern biomedical technologies. The therapy, Luxturna, from Spark Therapeutics, was approved by the FDA in December to treat a rare, inherited form of blindness. Its price tag, set at $850,000 (or $425,000 per eye), made it the most expensive drug in the US and sparked mass sticker-shock. But the therapy, which in high-profile clinical trials has allowed patients to see the stars for the first time, also offered the almost miraculous possibility of giving sight to the blind.

Read more:

Gene Therapy Net – News, Conferences, Vectors, Literature …

Gene therapy – Mayo Clinic

Overview

Gene therapy involves altering the genes inside your body’s cells in an effort to treat or stop disease.

Genes contain your DNA, the code that controls much of your body's form and function, from making you grow taller to regulating your body systems. Genes that don't work properly can cause disease.

Gene therapy replaces a faulty gene or adds a new gene in an attempt to cure disease or improve your body’s ability to fight disease. Gene therapy holds promise for treating a wide range of diseases, such as cancer, cystic fibrosis, heart disease, diabetes, hemophilia and AIDS.

Researchers are still studying how and when to use gene therapy. Currently, in the United States, gene therapy is available only as part of a clinical trial.

Gene therapy is used to correct defective genes in order to cure a disease or help your body better fight disease.

Researchers are investigating several ways to do this, including:

Gene therapy has some potential risks. A gene can’t easily be inserted directly into your cells. Rather, it usually has to be delivered using a carrier, called a vector.

The most common gene therapy vectors are viruses because they can recognize certain cells and carry genetic material into the cells’ genes. Researchers remove the original disease-causing genes from the viruses, replacing them with the genes needed to stop disease.

This technique presents the following risks:

The gene therapy clinical trials underway in the U.S. are closely monitored by the Food and Drug Administration and the National Institutes of Health to ensure that patient safety issues are a top priority during research.

Currently, the only way for you to receive gene therapy is to participate in a clinical trial. Clinical trials are research studies that help doctors determine whether a gene therapy approach is safe for people. They also help doctors understand the effects of gene therapy on the body.

Your specific procedure will depend on the disease you have and the type of gene therapy being used.

For example, in one type of gene therapy:

Viruses aren’t the only vectors that can be used to carry altered genes into your body’s cells. Other vectors being studied in clinical trials include:

The possibilities of gene therapy hold much promise. Clinical trials of gene therapy in people have shown some success in treating certain diseases, such as:

But several significant barriers stand in the way of gene therapy becoming a reliable form of treatment, including:

Gene therapy continues to be a very important and active area of research aimed at developing new, effective treatments for a variety of diseases.

Explore Mayo Clinic studies testing new treatments, interventions and tests as a means to prevent, detect, treat or manage this disease.

Dec. 29, 2017

See original here:

Gene therapy – Mayo Clinic

Vectors in gene therapy – Wikipedia

Gene therapy utilizes the delivery of DNA into cells, which can be accomplished by several methods, summarized below. The two major classes of methods are those that use recombinant viruses (sometimes called biological nanoparticles or viral vectors) and those that use naked DNA or DNA complexes (non-viral methods).

All viruses bind to their hosts and introduce their genetic material into the host cell as part of their replication cycle. This genetic material contains basic ‘instructions’ of how to produce more copies of these viruses, hacking the body’s normal production machinery to serve the needs of the virus. The host cell will carry out these instructions and produce additional copies of the virus, leading to more and more cells becoming infected. Some types of viruses insert their genome into the host’s cytoplasm, but do not actually enter the cell. Others penetrate the cell membrane disguised as protein molecules and enter the cell.

There are two main types of virus infection: lytic and lysogenic. Shortly after inserting its DNA, viruses of the lytic cycle quickly produce more viruses, burst from the cell and infect more cells. Lysogenic viruses integrate their DNA into the DNA of the host cell and may live in the body for many years before responding to a trigger. The virus reproduces as the cell does and does not inflict bodily harm until it is triggered. The trigger releases the DNA from that of the host and employs it to create new viruses.

The genetic material in retroviruses is in the form of RNA molecules, while the genetic material of their hosts is in the form of DNA. When a retrovirus infects a host cell, it will introduce its RNA together with some enzymes, namely reverse transcriptase and integrase, into the cell. This RNA molecule from the retrovirus must produce a DNA copy from its RNA molecule before it can be integrated into the genetic material of the host cell. The process of producing a DNA copy from an RNA molecule is termed reverse transcription. It is carried out by one of the enzymes carried in the virus, called reverse transcriptase. After this DNA copy is produced and is free in the nucleus of the host cell, it must be incorporated into the genome of the host cell. That is, it must be inserted into the large DNA molecules in the cell (the chromosomes). This process is done by another enzyme carried in the virus called integrase.
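
To make the two enzymatic steps above concrete, here is a toy Python sketch. It is purely illustrative: the sequences are invented, and the model ignores real biological detail such as primers, long terminal repeats, and double-stranded DNA synthesis. It "reverse transcribes" a retroviral RNA fragment into a DNA copy and then "integrates" that copy into a host DNA string.

# Toy model of the reverse transcriptase and integrase steps described above.
# Sequences are invented; real reverse transcription and integration involve
# primers, LTRs, and double-stranded DNA synthesis, all omitted here.
RNA_TO_DNA_COMPLEMENT = {"A": "T", "U": "A", "G": "C", "C": "G"}
def reverse_transcribe(rna):
    """Reverse transcriptase step: build the DNA strand complementary to the RNA."""
    return "".join(RNA_TO_DNA_COMPLEMENT[base] for base in reversed(rna))
def integrate(host_chromosome, dna_copy, position):
    """Integrase step: insert the DNA copy into the host chromosome at `position`."""
    return host_chromosome[:position] + dna_copy + host_chromosome[position:]
viral_rna = "AUGGCCAUUGUAAUGGGCCGC"          # hypothetical retroviral RNA fragment
host_dna = "TTTTAAAACCCCGGGGTTTTAAAACCCC"    # hypothetical stretch of host DNA
cdna = reverse_transcribe(viral_rna)
print("cDNA copy of the viral RNA:", cdna)
print("host DNA after integration:", integrate(host_dna, cdna, position=12))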

Now that the genetic material of the virus has been inserted, it can be said that the host cell has been modified to contain new genes. If this host cell divides later, its descendants will all contain the new genes. Sometimes the genes of the retrovirus do not express their information immediately.

One of the problems of gene therapy using retroviruses is that the integrase enzyme can insert the genetic material of the virus into any arbitrary position in the genome of the host; it randomly inserts the genetic material into a chromosome. If genetic material happens to be inserted in the middle of one of the original genes of the host cell, this gene will be disrupted (insertional mutagenesis). If the gene happens to be one regulating cell division, uncontrolled cell division (i.e., cancer) can occur. This problem has recently begun to be addressed by utilizing zinc finger nucleases[1] or by including certain sequences such as the beta-globin locus control region to direct the site of integration to specific chromosomal sites.
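
The risk just described can also be pictured with a small simulation. The Python sketch below uses hypothetical genome coordinates and gene names chosen only for illustration: it draws random integration sites across a toy 1 Mb genome and reports how often they land inside an annotated gene, the event underlying insertional mutagenesis. Real integration is neither uniform nor confined to such a sparse annotation, which is part of why directing the integration site, as described above, is attractive.

# Illustrative simulation of random vector integration into a toy genome.
# Coordinates and gene names are invented for the example.
import random
GENOME_LENGTH = 1_000_000          # toy genome of 1 Mb
GENES = [                          # hypothetical gene intervals: (name, start, end)
    ("GENE_A", 10_000, 25_000),
    ("PROTO_ONCOGENE_B", 400_000, 420_000),
    ("GENE_C", 800_000, 815_000),
]
def gene_hit(position):
    """Return the gene an insertion at `position` would disrupt, or None."""
    for name, start, end in GENES:
        if start <= position <= end:
            return name
    return None
random.seed(0)
trials = 100_000
disrupted = sum(1 for _ in range(trials) if gene_hit(random.randrange(GENOME_LENGTH)))
print(f"fraction of random integrations landing inside a gene: {disrupted / trials:.3f}")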

Gene therapy trials using retroviral vectors to treat X-linked severe combined immunodeficiency (X-SCID) represent the most successful application of gene therapy to date. More than twenty patients have been treated in France and Britain, with a high rate of immune system reconstitution observed. Similar trials were restricted or halted in the USA when leukemia was reported in patients treated in the French X-SCID gene therapy trial.[citation needed] To date, four children in the French trial and one in the British trial have developed leukemia as a result of insertional mutagenesis by the retroviral vector. All but one of these children responded well to conventional anti-leukemia treatment. Gene therapy trials to treat SCID due to deficiency of the Adenosine Deaminase (ADA) enzyme (one form of SCID)[2] continue with relative success in the USA, Britain, Ireland, Italy and Japan.

Adenoviruses are viruses that carry their genetic material in the form of double-stranded DNA. They cause respiratory, intestinal, and eye infections in humans (especially the common cold). When these viruses infect a host cell, they introduce their DNA molecule into the host. The genetic material of the adenoviruses is not incorporated into the host cell's genetic material and remains transient. The DNA molecule is left free in the nucleus of the host cell, and the instructions in this extra DNA molecule are transcribed just like any other gene. The only difference is that these extra genes are not replicated when the cell is about to undergo cell division, so the descendants of that cell will not have the extra gene. As a result, treatment with the adenovirus will require readministration in a growing cell population, although the absence of integration into the host cell's genome should prevent the type of cancer seen in the SCID trials. This vector system has been promoted for treating cancer, and indeed the first gene therapy product to be licensed to treat cancer, Gendicine, is an adenovirus. Gendicine, an adenoviral p53-based gene therapy, was approved by the Chinese food and drug regulators in 2003 for treatment of head and neck cancer. Advexin, a similar gene therapy approach from Introgen, was turned down by the US Food and Drug Administration (FDA) in 2008.

Concerns about the safety of adenovirus vectors were raised after the 1999 death of Jesse Gelsinger while participating in a gene therapy trial. Since then, work using adenovirus vectors has focused on genetically crippled versions of the virus.

The viral vectors described above have natural host cell populations that they infect most efficiently. Retroviruses have limited natural host cell ranges, and although adenovirus and adeno-associated virus are able to infect a relatively broader range of cells efficiently, some cell types are refractory to infection by these viruses as well. Attachment to and entry into a susceptible cell is mediated by the protein envelope on the surface of a virus. Retroviruses and adeno-associated viruses have a single protein coating their membrane, while adenoviruses are coated with both an envelope protein and fibers that extend away from the surface of the virus. The envelope proteins on each of these viruses bind to cell-surface molecules such as heparan sulfate, which localizes them upon the surface of the potential host, as well as to the specific protein receptor that either induces entry-promoting structural changes in the viral protein or localizes the virus in endosomes, wherein acidification of the lumen induces this refolding of the viral coat. In either case, entry into potential host cells requires a favorable interaction between a protein on the surface of the virus and a protein on the surface of the cell.

For the purposes of gene therapy, one might either want to limit or expand the range of cells susceptible to transduction by a gene therapy vector. To this end, many vectors have been developed in which the endogenous viral envelope proteins have been replaced by either envelope proteins from other viruses or by chimeric proteins. Such a chimera would consist of those parts of the viral protein necessary for incorporation into the virion as well as sequences meant to interact with specific host cell proteins. Viruses in which the envelope proteins have been replaced as described are referred to as pseudotyped viruses. For example, the most popular retroviral vector for use in gene therapy trials has been the lentivirus Simian immunodeficiency virus coated with the G envelope protein from Vesicular stomatitis virus. This vector is referred to as VSV G-pseudotyped lentivirus, and infects an almost universal set of cells. This tropism is characteristic of the VSV G-protein with which this vector is coated.

Many attempts have been made to limit the tropism of viral vectors to one or a few host cell populations. This advance would allow for the systemic administration of a relatively small amount of vector. The potential for off-target cell modification would be limited, and many concerns from the medical community would be alleviated. Most attempts to limit tropism have used chimeric envelope proteins bearing antibody fragments. These vectors show great promise for the development of "magic bullet" gene therapies.

A replication-competent vector called ONYX-015 is used in replicating tumor cells. It was found that in the absence of the E1B-55Kd viral protein, adenovirus caused very rapid apoptosis of infected, p53(+) cells, and this results in dramatically reduced virus progeny and no subsequent spread. Apoptosis was mainly the result of the ability of E1A to inactivate p300. In p53(-) cells, deletion of E1B 55kd has no consequence in terms of apoptosis, and viral replication is similar to that of wild-type virus, resulting in massive killing of cells.

A replication-defective vector deletes some essential genes. These deleted genes are still necessary in the body, so they are replaced with either a helper virus or a DNA molecule.[3]

Replication-defective vectors always contain a transfer construct. The transfer construct carries the gene to be transduced, or transgene. The transfer construct also carries the sequences which are necessary for the general functioning of the viral genome: packaging sequence, repeats for replication and, when needed, priming of reverse transcription. These are denominated cis-acting elements, because they need to be on the same piece of DNA as the viral genome and the gene of interest. Trans-acting elements are viral elements which can be encoded on a different DNA molecule. For example, the viral structural proteins can be expressed from a different genetic element than the viral genome.[3]

The Herpes simplex virus is a human neurotropic virus. It is mostly examined for gene transfer in the nervous system. The wild-type HSV-1 virus is able to infect neurons and evade the host immune response, but may still become reactivated and produce a lytic cycle of viral replication. Therefore, it is typical to use mutant strains of HSV-1 that are deficient in their ability to replicate. Though the latent virus is not transcriptionally apparent, it does possess neuron-specific promoters that can continue to function normally[further explanation needed]. Antibodies to HSV-1 are common in humans; however, complications due to herpes infection are somewhat rare.[4] Caution must be taken regarding rare cases of encephalitis, and this provides some rationale for using HSV-2 as a viral vector, as it generally has tropism for neuronal cells innervating the urogenital area of the body and could then spare the host severe pathology in the brain.

Non-viral methods present certain advantages over viral methods, with simple large scale production and low host immunogenicity being just two. Previously, low levels of transfection and expression of the gene held non-viral methods at a disadvantage; however, recent advances in vector technology have yielded molecules and techniques with transfection efficiencies similar to those of viruses.[5]

This is the simplest method of non-viral transfection. Clinical trials of intramuscular injection of a naked DNA plasmid have been carried out with some success; however, the expression has been very low in comparison to other methods of transfection. In addition to trials with plasmids, there have been trials with naked PCR product, which have had similar or greater success. Cellular uptake of naked DNA is generally inefficient. Research efforts focusing on improving the efficiency of naked DNA uptake have yielded several novel methods, such as electroporation, sonoporation, and the use of a "gene gun", which shoots DNA-coated gold particles into the cell using high-pressure gas.[6]

Electroporation is a method that uses short pulses of high voltage to carry DNA across the cell membrane. This shock is thought to cause temporary formation of pores in the cell membrane, allowing DNA molecules to pass through. Electroporation is generally efficient and works across a broad range of cell types. However, a high rate of cell death following electroporation has limited its use, including clinical applications.

More recently a newer method of electroporation, termed electron-avalanche transfection, has been used in gene therapy experiments. By using a high-voltage plasma discharge, DNA was efficiently delivered following very short (microsecond) pulses. Compared to electroporation, the technique resulted in greatly increased efficiency and less cellular damage.

The use of particle bombardment, or the gene gun, is another physical method of DNA transfection. In this technique, DNA is coated onto gold particles and loaded into a device which generates a force to achieve penetration of the DNA into the cells, leaving the gold behind on a “stopping” disk.

Sonoporation uses ultrasonic frequencies to deliver DNA into cells. The process of acoustic cavitation is thought to disrupt the cell membrane and allow DNA to move into cells.

In a method termed magnetofection, DNA is complexed to magnetic particles, and a magnet is placed underneath the tissue culture dish to bring DNA complexes into contact with a cell monolayer.

Hydrodynamic delivery involves rapid injection of a high volume of a solution into vasculature (such as into the inferior vena cava, bile duct, or tail vein). The solution contains molecules that are to be inserted into cells, such as DNA plasmids or siRNA, and transfer of these molecules into cells is assisted by the elevated hydrostatic pressure caused by the high volume of injected solution.[7][8][9]

Synthetic oligonucleotides are used in gene therapy to deactivate the genes involved in the disease process. There are several methods by which this is achieved. One strategy uses antisense oligonucleotides specific to the target gene to disrupt the transcription of the faulty gene. Another uses small molecules of RNA called siRNA to signal the cell to cleave specific unique sequences in the mRNA transcript of the faulty gene, disrupting translation of the faulty mRNA, and therefore expression of the gene. A further strategy uses double-stranded oligodeoxynucleotides as a decoy for the transcription factors that are required to activate the transcription of the target gene. The transcription factors bind to the decoys instead of the promoter of the faulty gene, which reduces the transcription of the target gene, lowering expression. Additionally, single-stranded DNA oligonucleotides have been used to direct a single base change within a mutant gene. The oligonucleotide is designed to anneal with complementarity to the target gene with the exception of a central base, the target base, which serves as the template base for repair. This technique is referred to as oligonucleotide-mediated gene repair, targeted gene repair, or targeted nucleotide alteration.
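
As a small illustration of the last strategy mentioned above (oligonucleotide-mediated gene repair), the Python sketch below builds a repair oligonucleotide that is complementary to a mutant target everywhere except the central base, which templates the corrected nucleotide. The sequences are invented, and base-by-base complementarity is shown without strand-orientation details, so this is only a schematic of the design idea.

# Illustrative sketch of targeted nucleotide alteration: a repair oligo is
# complementary to the mutant target except at the central (corrected) base.
# Sequences are invented; strand orientation details are omitted for clarity.
DNA_COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}
def design_repair_oligo(mutant_target, corrected_base):
    """Build an oligo annealing to `mutant_target` with one central mismatch."""
    if len(mutant_target) % 2 == 0:
        raise ValueError("use an odd-length target so there is a single central base")
    centre = len(mutant_target) // 2
    oligo = [DNA_COMPLEMENT[b] for b in mutant_target]   # complement every base...
    oligo[centre] = DNA_COMPLEMENT[corrected_base]       # ...but template the corrected base
    return "".join(oligo)
mutant = "GATCCAGTAAACTTCAGGATC"    # hypothetical 21-mer; the central A is the point mutation
repair_oligo = design_repair_oligo(mutant, corrected_base="G")
print("mutant strand:", mutant)
print("repair oligo: ", repair_oligo)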

To improve the delivery of the new DNA into the cell, the DNA must be protected from damage and positively charged. Initially, anionic and neutral lipids were used for the construction of lipoplexes for synthetic vectors. However, although they are associated with little toxicity, are compatible with body fluids, and could potentially be adapted to be tissue-specific, they are complicated and time-consuming to produce, so attention turned to the cationic versions.

Cationic lipids, due to their positive charge, were first used to condense negatively charged DNA molecules so as to facilitate the encapsulation of DNA into liposomes. Later it was found that the use of cationic lipids significantly enhanced the stability of lipoplexes. Also as a result of their charge, cationic liposomes interact with the cell membrane, and endocytosis is widely believed to be the major route by which cells take up lipoplexes. Endosomes are formed as a result of endocytosis; however, if the genes cannot be released into the cytoplasm by breaking the membrane of the endosome, they will be sent to lysosomes, where all the DNA is destroyed before it can achieve its function. It was also found that although cationic lipids themselves could condense and encapsulate DNA into liposomes, the transfection efficiency was very low owing to a lack of endosomal escape. However, when helper lipids (usually electroneutral lipids, such as DOPE) were added to form lipoplexes, much higher transfection efficiency was observed. Later it was found that certain lipids have the ability to destabilize endosomal membranes so as to facilitate the escape of DNA from the endosome; these lipids are called fusogenic lipids. Although cationic liposomes have been widely used as an alternative gene delivery vector, a dose-dependent toxicity of cationic lipids has also been observed, which could limit their therapeutic use.

The most common use of lipoplexes has been in gene transfer into cancer cells, where the supplied genes activate tumor suppressor genes in the cell and decrease the activity of oncogenes. Recent studies have shown lipoplexes to be useful in transfecting respiratory epithelial cells.

Polymersomes are synthetic versions of liposomes (vesicles with a lipid bilayer), made of amphiphilic block copolymers. They can encapsulate either hydrophilic or hydrophobic contents and can be used to deliver cargo such as DNA, proteins, or drugs to cells. Advantages of polymersomes over liposomes include greater stability, mechanical strength, blood circulation time, and storage capacity.[10][11][12]

Complexes of polymers with DNA are called polyplexes. Most polyplexes consist of cationic polymers and their fabrication is based on self-assembly by ionic interactions. One important difference between the methods of action of polyplexes and lipoplexes is that polyplexes cannot directly release their DNA load into the cytoplasm. As a result, co-transfection with endosome-lytic agents such as inactivated adenovirus was required to facilitate nanoparticle escape from the endocytic vesicle made during particle uptake. However, a better understanding of the mechanisms by which DNA can escape from the endolysosomal pathway (e.g., the proton sponge effect)[13] has triggered new polymer synthesis strategies, such as the incorporation of protonatable residues in the polymer backbone, and has revitalized research on polycation-based systems.[14]

Due to their low toxicity, high loading capacity, and ease of fabrication, polycationic nanocarriers demonstrate great promise compared to their rivals such as viral vectors, which show high immunogenicity and potential carcinogenicity, and lipid-based vectors, which cause dose-dependent toxicity. Polyethyleneimine[15] and chitosan are among the polymeric carriers that have been extensively studied for the development of gene delivery therapeutics. Other polycationic carriers such as poly(beta-amino esters)[16] and polyphosphoramidate[17] are being added to the library of potential gene carriers. In addition to the variety of polymers and copolymers, the ease of controlling the size, shape, and surface chemistry of these polymeric nano-carriers gives them an edge in targeting capability and in taking advantage of the enhanced permeability and retention effect.[18]

A dendrimer is a highly branched macromolecule with a spherical shape. The surface of the particle may be functionalized in many ways and many of the properties of the resulting construct are determined by its surface.

In particular it is possible to construct a cationic dendrimer, i.e. one with a positive surface charge. When in the presence of genetic material such as DNA or RNA, charge complementarity leads to a temporary association of the nucleic acid with the cationic dendrimer. On reaching its destination the dendrimer-nucleic acid complex is then taken into the cell via endocytosis.

In recent years the benchmark for transfection agents has been cationic lipids. Limitations of these competing reagents have been reported to include: the lack of ability to transfect some cell types, the lack of robust active targeting capabilities, incompatibility with animal models, and toxicity. Dendrimers offer robust covalent construction and extreme control over molecule structure, and therefore size. Together these give compelling advantages compared to existing approaches.

Producing dendrimers has historically been a slow and expensive process consisting of numerous slow reactions, an obstacle that severely curtailed their commercial development. The Michigan-based company Dendritic Nanotechnologies discovered a method to produce dendrimers using kinetically driven chemistry, a process that not only reduced cost by a magnitude of three, but also cut reaction time from over a month to several days. These new “Priostar” dendrimers can be specifically constructed to carry a DNA or RNA payload that transfects cells at a high efficiency with little or no toxicity.[citation needed]

Inorganic nanoparticles, such as gold, silica, iron oxide (e.g. magnetofection) and calcium phosphates, have been shown to be capable of gene delivery.[19] Some of the benefits of inorganic vectors are their storage stability, low manufacturing cost and, often, low immunogenicity, and resistance to microbial attack. Nanosized materials less than 100 nm have been shown to efficiently trap the DNA or RNA and allow its escape from the endosome without degradation. Inorganics have also been shown to exhibit improved in vitro transfection for attached cell lines due to their increased density and preferential location on the base of the culture dish. Quantum dots have also been used successfully and permit the coupling of gene therapy with a stable fluorescence marker. Engineered organic nanoparticles are also under development, which could be used for co-delivery of genes and therapeutic agents.[20]

Cell-penetrating peptides (CPPs), also known as peptide transduction domains (PTDs), are short peptides that facilitate the cellular uptake of molecular cargo such as nucleic acids.

CPP cargo can be directed into specific cell organelles by incorporating localization sequences into CPP sequences. For example, nuclear localization sequences are commonly used to guide CPP cargo into the nucleus.[23] For guidance into mitochondria, a mitochondrial targeting sequence can be used; this method is used in protofection (a technique that allows for foreign mitochondrial DNA to be inserted into cells’ mitochondria).[24][25]
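
A rough way to picture "incorporating localization sequences into CPP sequences" is as sequence concatenation. The Python sketch below is purely illustrative: the TAT-derived CPP and the SV40 nuclear localization sequence are well-known textbook peptides, but the cargo and the resulting construct are invented for the example and do not correspond to any real reagent.

# Illustrative sketch: composing a delivery construct from a cell-penetrating
# peptide (CPP), a nuclear localization sequence (NLS), and a cargo peptide.
# The construct is hypothetical; only the CPP and NLS are textbook sequences.
CPP_TAT = "GRKKRRQRRR"         # core of the HIV-1 TAT-derived cell-penetrating peptide
NLS_SV40 = "PKKKRKV"           # SV40 large T antigen nuclear localization sequence
def build_construct(cpp, localization, cargo):
    """Join the CPP, localization signal, and cargo into one peptide sequence."""
    return cpp + localization + cargo
cargo_peptide = "MAAAKKKQQQ"   # made-up cargo sequence
construct = build_construct(CPP_TAT, NLS_SV40, cargo_peptide)
print("delivery construct:", construct)
print("length (residues):", len(construct))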

Due to every method of gene transfer having shortcomings, there have been some hybrid methods developed that combine two or more techniques. Virosomes are one example; they combine liposomes with an inactivated HIV or influenza virus. This has been shown to have more efficient gene transfer in respiratory epithelial cells than either viral or liposomal methods alone. Other methods involve mixing other viral vectors with cationic lipids or hybridising viruses.

Read the original post:

Vectors in gene therapy – Wikipedia

Gene Therapy | Pfizer: One of the world’s premier …

Gene therapy is a technology aimed at correcting or fixing a gene that may be defective. This exciting and potentially transformative area of research is focused on the development of potential treatments for monogenic diseases, or diseases that are caused by a defect in one gene.

The technology involves the introduction of genetic material (DNA or RNA) into the body, often through delivering a corrected copy of a gene to a patient's cells to compensate for a defective one, using a viral vector.

Viral vectors can be developed using adeno-associated virus (AAV), a naturally occurring virus which has been adapted for gene therapy use. Its ability to deliver genetic material to a wide range of tissues makes AAV vectors useful for transferring therapeutic genes into target cells. Gene therapy research holds tremendous promise in leading to the possible development of highly-specialized, potentially one-time delivery treatments for patients suffering from rare, monogenic diseases.

Pfizer aims to build an industry-leading gene therapy platform with a strategy focused on establishing a transformational portfolio through in-house capabilities, and enhancing those capabilities through strategic collaborations, as well as potential licensing and M&A activities.

We’re working to access the most effective vector designs available to build a robust clinical stage portfolio, and employing a scalable manufacturing approach, proprietary cell lines and sophisticated analytics to support clinical development.

In addition, we’re collaborating with some of the foremost experts in this field, through collaborations with Spark Therapeutics, Inc., on a potentially transformative gene therapy treatment for hemophilia B, which received Breakthrough Therapy designation from the US Food and Drug Administration, and 4D Molecular Therapeutics to discover and develop targeted next-generation AAV vectors for cardiac disease.

Gene therapy holds the promise of bringing true disease modification for patients suffering from devastating diseases, a promise we're working to see become a reality in the years to come.

Read this article:

Gene Therapy | Pfizer: One of the world’s premier …

Golden Rule – Wikipedia

The Golden Rule (which can be considered a law of reciprocity in some religions) is the principle of treating others as one would wish to be treated. It is a maxim that is found in many religions and cultures.[1][2] The maxim may appear as either a positive or negative injunction governing conduct:

The Golden Rule differs from the maxim of reciprocity captured in do ut des ("I give so that you will give in return"), and is rather a unilateral moral commitment to the well-being of the other without the expectation of anything in return.[3]

The concept occurs in some form in nearly every religion[4][5] and ethical tradition[6] and is often considered the central tenet of Christian ethics[7][8]. It can also be explained from the perspectives of psychology, philosophy, sociology, human evolution, and economics. Psychologically, it involves a person empathizing with others. Philosophically, it involves a person perceiving their neighbor also as “I” or “self”.[9] Sociologically, “love your neighbor as yourself” is applicable between individuals, between groups, and also between individuals and groups. In evolution, “reciprocal altruism” is seen as a distinctive advance in the capacity of human groups to survive and reproduce, as their exceptional brains demanded exceptionally long childhoods and ongoing provision and protection even beyond that of the immediate family.[10] In economics, Richard Swift, referring to ideas from David Graeber, suggests that “without some kind of reciprocity society would no longer be able to exist.”[11]

The term “Golden Rule”, or “Golden law”, began to be used widely in the early 17th century in Britain by Anglican theologians and preachers;[12] the earliest known usage is that of Anglicans Charles Gibbon and Thomas Jackson in 1604.[1][13]

Possibly the earliest affirmation of the maxim of reciprocity, reflecting the ancient Egyptian goddess Ma'at, appears in the story of The Eloquent Peasant, which dates to the Middle Kingdom (c. 2040–1650 BC): "Now this is the command: Do to the doer to make him do."[14][15] This proverb embodies the do ut des principle.[16] A Late Period (c. 664–323 BC) papyrus contains an early negative affirmation of the Golden Rule: "That which you hate to be done to you, do not do to another."[17]

In the Mahābhārata, the ancient epic of India, there is a discourse in which the wise minister Vidura advises the King Yudhishthira:

Listening to wise scriptures, austerity, sacrifice, respectful faith, social welfare, forgiveness, purity of intent, compassion, truth and self-control - are the ten wealth of character (self). O king, aim for these; may you be steadfast in these qualities. These are the basis of prosperity and rightful living. These are highest attainable things. All worlds are balanced on dharma; dharma encompasses ways to prosperity as well. O King, dharma is the best quality to have, wealth the medium and desire (kāma) the lowest. Hence, (keeping these in mind), by self-control and by making dharma (right conduct) your main focus, treat others as you treat yourself.

Mahābhārata, Shānti-Parva 167:9

In the Section on Virtue, and Chapter 32 of the Tirukkural (c. 200 BC – c. 500 AD), Tiruvalluvar says: "Do not do to others what you know has hurt yourself" (K. 316); "Why does one hurt others knowing what it is to be hurt?" (K. 318). He furthermore opined that it is the determination of the spotless (virtuous) not to do evil, even in return, to those who have cherished enmity and done them evil (K. 312). The (proper) punishment to those who have done evil (to you) is to put them to shame by showing them kindness in return, and to forget both the evil and the good done on both sides (K. 314).

The Golden Rule in its prohibitive (negative) form was a common principle in ancient Greek philosophy. Examples of the general concept include:

The Pahlavi Texts of Zoroastrianism (c. 300 BC – 1000 AD) were an early source for the Golden Rule: "That nature alone is good which refrains from doing to another whatsoever is not good for itself." Dadisten-I-dinik, 94,5, and "Whatever is disagreeable to yourself do not do unto others." Shayast-na-Shayast 13:29[22]

Seneca the Younger (c. 4 BC – 65 AD), a practitioner of Stoicism (c. 300 BC – 200 AD), expressed the Golden Rule in his essay regarding the treatment of slaves: "Treat your inferior as you would wish your superior to treat you."[23]

According to Simon Blackburn, the Golden Rule “can be found in some form in almost every ethical tradition”.[24]

A rule of altruistic reciprocity was first stated positively in a well-known Torah verse:

You shall not take vengeance or bear a grudge against your kinsfolk. Love your neighbor as yourself: I am the LORD.

Hillel the Elder (c. 110 BC – 10 AD),[25] used this verse as a most important message of the Torah for his teachings. Once, he was challenged by a gentile who asked to be converted under the condition that the Torah be explained to him while he stood on one foot. Hillel accepted him as a candidate for conversion to Judaism but, drawing on Leviticus 19:18, briefed the man:

What is hateful to you, do not do to your fellow: this is the whole Torah; the rest is the explanation; go and learn.

Hillel recognized brotherly love as the fundamental principle of Jewish ethics. Rabbi Akiva agreed and suggested that the principle of love must have its foundation in Genesis chapter 1, which teaches that all men are the offspring of Adam, who was made in the image of God (Sifra, Kedoshim, iv.; Yer. Ned. ix. 41c; Genesis Rabba 24).[26] According to Jewish rabbinic literature, the first man Adam represents the unity of mankind. This is echoed in the modern preamble of the Universal Declaration of Human Rights.[27][28] And it is also taught that Adam is last in order according to the evolutionary character of God's creation:[26]

Why was only a single specimen of man created first? To teach us that he who destroys a single soul destroys a whole world and that he who saves a single soul saves a whole world; furthermore, so no race or class may claim a nobler ancestry, saying, ‘Our father was born first’; and, finally, to give testimony to the greatness of the Lord, who caused the wonderful diversity of mankind to emanate from one type. And why was Adam created last of all beings? To teach him humility; for if he be overbearing, let him remember that the little fly preceded him in the order of creation.[26]

The Jewish Publication Society’s edition of Leviticus states:

Thou shalt not hate thy brother in thy heart; thou shalt surely rebuke thy neighbour, and not bear sin because of him. 18 Thou shalt not take vengeance, nor bear any grudge against the children of thy people, but thou shalt love thy neighbour as thyself: I am the LORD.[29]

This Torah verse represents one of several versions of the Golden Rule, which itself appears in various forms, positive and negative. It is the earliest written version of that concept in a positive form.[30]

At the turn of the eras, the Jewish rabbis were discussing the scope of the meaning of Leviticus 19:18 and 19:34 extensively:

The stranger who resides with you shall be to you as one of your citizens; you shall love him as yourself, for you were strangers in the land of Egypt: I the LORD am your God.

Commentators included foreigners (= Samaritans), proselytes (= "strangers who reside with you") (Rabbi Akiva, bQuid 75b) or Jews (Rabbi Gamaliel, yKet 3, 1; 27a) within the scope of the meaning.

On the verse, "Love your fellow as yourself," the classic commentator Rashi quotes from Torat Kohanim, an early Midrashic text, regarding the famous dictum of Rabbi Akiva: "Love your fellow as yourself: Rabbi Akiva says this is a great principle of the Torah."[31]

Israel’s postal service quoted from the previous Leviticus verse when it commemorated the Universal Declaration of Human Rights on a 1958 postage stamp.[32]

The “Golden Rule” was given by Jesus of Nazareth, who used it to summarize the Torah: “Do to others what you want them to do to you.” and “This is the meaning of the law of Moses and the teaching of the prophets”[33] (Matthew 7:12 NCV, see also Luke 6:31). The common English phrasing is “Do unto others as you would have them do unto you”. A similar form of the phrase appeared in a Catholic catechism around 1567 (certainly in the reprint of 1583).[34] The Golden Rule is stated positively numerous times in the Hebrew Pentateuch as well as the Prophets and Writings. Leviticus 19:18 (“Forget about the wrong things people do to you, and do not try to get even. Love your neighbor as you love yourself.”; see also Great Commandment) and Leviticus 19:34 (“But treat them just as you treat your own citizens. Love foreigners as you love yourselves, because you were foreigners one time in Egypt. I am the Lord your God.”).

The Old Testament Deuterocanonical books of Tobit and Sirach, accepted as part of the Scriptural canon by Catholic Church, Eastern Orthodoxy, and the Non-Chalcedonian Churches, express a negative form of the golden rule:

“Do to no one what you yourself dislike.”

Tobit 4:15

“Recognize that your neighbor feels as you do, and keep in mind your own dislikes.”

Sirach 31:15

Two passages in the New Testament quote Jesus of Nazareth espousing the positive form of the Golden rule:

Matthew 7:12

Do to others what you want them to do to you. This is the meaning of the law of Moses and the teaching of the prophets.

Luke 6:31

Do to others what you would want them to do to you.

A similar passage, a parallel to the Great Commandment, is Luke 10:25-28

25 And one day an authority on the law stood up to put Jesus to the test. "Teacher," he asked, "what must I do to receive eternal life?"

26 "What is written in the Law?" Jesus replied. "How do you understand it?" 27 He answered, "Love the Lord your God with all your heart and with all your soul. Love him with all your strength and with all your mind (Deuteronomy 6:5). And, love your neighbor as you love yourself." 28 "You have answered correctly," Jesus replied. "Do that, and you will live."

The passage in the book of Luke then continues with Jesus answering the question, “Who is my neighbor?”, by telling the parable of the Good Samaritan, indicating that “your neighbor” is anyone in need.[35] This extends to all, including those who are generally considered hostile.

Jesus’ teaching goes beyond the negative formulation of not doing what one would not like done to themselves, to the positive formulation of actively doing good to another that, if the situations were reversed, one would desire that the other would do for them. This formulation, as indicated in the parable of the Good Samaritan, emphasizes the needs for positive action that brings benefit to another, not simply restraining oneself from negative activities that hurt another. Taken as a rule of judgment, both formulations of the golden rule, the negative and positive, are equally applicable.[36]

In one passage of the New Testament, Paul the Apostle refers to the golden rule:

Galatians 5:14

14For all the law is fulfilled in one word, even in this; Thou shalt love thy neighbour as thyself.

The Arabian peninsula was known to not practice the golden rule prior to the advent of Islam. “Pre-Islamic Arabs regarded the survival of the tribe, as most essential and to be ensured by the ancient rite of blood vengeance” [37]

However, this all changed when Muhammad came on the scene: "Fakhr al-Din al-Razi and several other Qur'anic commentators have pointed out that Qur'an 83:1-6 is an implicit statement of the Golden Rule, which is explicitly stated in the tradition, 'Pay, Oh Children of Adam, as you would love to be paid, and be just as you would love to have justice!'" [38]

“Similar examples of the golden rule are found in the hadith of the prophet Muhammad. The hadith recount what the prophet is believed to have said and done, and traditionally Muslims regard the hadith as second to only the Qur’an as a guide to correct belief and action.” [39]

From the hadith, the collected oral and written accounts of Muhammad and his teachings during his lifetime:

“A Bedouin came to the prophet, grabbed the stirrup of his camel and said: O the messenger of God! Teach me something to go to heaven with it. Prophet said: “As you would have people do to you, do to them; and what you dislike to be done to you, don’t do to them. Now let the stirrup go!” [This maxim is enough for you; go and act in accordance with it!]”

“None of you [truly] believes until he wishes for his brother what he wishes for himself.”

“Seek for mankind that of which you are desirous for yourself, that you may be a believer.”

“That which you want for yourself, seek for mankind.”[41]

“The most righteous person is the one who consents for other people what he consents for himself, and who dislikes for them what he dislikes for himself.”[41]

Ali ibn Abi Talib (4th Caliph in Sunni Islam, and first Imam in Shia Islam) says:

“O’ my child, make yourself the measure (for dealings) between you and others. Thus, you should desire for others what you desire for yourself and hate for others what you hate for yourself. Do not oppress as you do not like to be oppressed. Do good to others as you would like good to be done to you. Regard bad for yourself whatever you regard bad for others. Accept that (treatment) from others which you would like others to accept from you… Do not say to others what you do not like to be said to you.”

The Writings of the Bahá'í Faith encourage everyone to treat others as they would treat themselves and even to prefer others over oneself:

O SON OF MAN! Deny not My servant should he ask anything from thee, for his face is My face; be then abashed before Me.

Blessed is he who preferreth his brother before himself.

And if thine eyes be turned towards justice, choose thou for thy neighbour that which thou choosest for thyself.

Ascribe not to any soul that which thou wouldst not have ascribed to thee, and say not that which thou doest not.

One should never do that to another which one regards as injurious to one's own self. This, in brief, is the rule of dharma. Other behavior is due to selfish desires.

By making dharma (right conduct) your main focus, treat others as you treat yourself[52]

Also,

Buddha (Siddhartha Gautama, c. 623–543 BC)[53][54] made this principle one of the cornerstones of his ethics in the 6th century BC. It occurs in many places and in many forms throughout the Tripitaka.

Comparing oneself to others in such terms as “Just as I am so are they, just as they are so am I,” he should neither kill nor cause others to kill.

One who, while himself seeking happiness, oppresses with violence other beings who also desire happiness, will not attain happiness hereafter.

Hurt not others in ways that you yourself would find hurtful.

Putting oneself in the place of another, one should not kill nor cause another to kill.[55]

The Golden Rule is paramount in the Jainist philosophy and can be seen in the doctrines of Ahimsa and Karma. As part of the prohibition of causing any living beings to suffer, Jainism forbids inflicting upon others what is harmful to oneself.

The following quotation from the Acaranga Sutra sums up the philosophy of Jainism:

Nothing which breathes, which exists, which lives, or which has essence or potential of life, should be destroyed or ruled over, or subjugated, or harmed, or denied of its essence or potential.

In support of this Truth, I ask you a question “Is sorrow or pain desirable to you?” If you say “yes it is”, it would be a lie. If you say, “No, It is not” you will be expressing the truth. Just as sorrow or pain is not desirable to you, so it is to all which breathe, exist, live or have any essence of life. To you and all, it is undesirable, and painful, and repugnant.[56]

A man should wander about treating all creatures as he himself would be treated.

Sutrakritanga, 1.11.33

In happiness and suffering, in joy and grief, we should regard all creatures as we regard our own self.

Lord Mahavira, 24th Tirthankara

Saman Suttam of Jinendra Varni[57] gives further insight into this precept:-

Just as pain is not agreeable to you, it is so with others. Knowing this principle of equality, treat others with respect and compassion.

Saman Suttam, verse 150

Killing a living being is killing one’s own self; showing compassion to a living being is showing compassion to oneself. He who desires his own good, should avoid causing any harm to a living being.

Saman Suttam, verse 151

Precious like jewels are the minds of all. To hurt them is not at all good. If thou desirest thy Beloved, then hurt thou not anyone’s heart.

Guru Arjan Dev Ji 259, Guru Granth Sahib

The same idea is also presented in V.12 and VI.30 of the Analects (c. 500 BC), which can be found in the online Chinese Text Project. The phraseology differs from the Christian version of the Golden Rule. It does not presume to do anything unto others, but merely to avoid doing what would be harmful. It does not preclude doing good deeds and taking moral positions, but there is slim possibility for a Confucian missionary outlook, such as one can justify with the Christian Golden Rule.

The sage has no interest of his own, but takes the interests of the people as his own. He is kind to the kind; he is also kind to the unkind: for Virtue is kind. He is faithful to the faithful; he is also faithful to the unfaithful: for Virtue is faithful.

Regard your neighbor’s gain as your own gain, and your neighbor’s loss as your own loss.

If people regarded other people's states in the same way that they regard their own, who then would incite their own state to attack that of another? For one would do for others as one would do for oneself. If people regarded other people's cities in the same way that they regard their own, who then would incite their own city to attack that of another? For one would do for others as one would do for oneself. If people regarded other people's families in the same way that they regard their own, who then would incite their own family to attack that of another? For one would do for others as one would do for oneself. And so if states and cities do not attack one another and families do not wreak havoc upon and steal from one another, would this be a harm to the world or a benefit? Of course one must say it is a benefit to the world.

Mozi regarded the golden rule as a corollary to the cardinal virtue of impartiality, and encouraged egalitarianism and selflessness in relationships.

Do not do unto others whatever is injurious to yourself. — Shayast-na-Shayast 13.29

Hear ye these words and heed them well, the words of Dea, thy Mother Goddess, "I command thee thus, O children of the Earth, that that which ye deem harmful unto thyself, the very same shall ye be forbidden from doing unto another, for violence and hatred give rise to the same. My command is thus, that ye shall return all violence and hatred with peacefulness and love, for my Law is love unto all things. Only through love shall ye have peace; yea and verily, only peace and love will cure the world, and subdue all evil."

The Way to Happiness expresses the Golden Rule both in its negative/prohibitive form and in its positive form. The negative/prohibitive form is expressed in Precept 19 as:

19. Try not to do things to others that you would not like them to do to you.

The positive form is expressed in Precept 20 as:

20. Try to treat others as you would want them to treat you.

The "Declaration Toward a Global Ethic"[64] from the Parliament of the World's Religions[65][66] (1993) proclaimed the Golden Rule ("We must treat others as we wish others to treat us") as the common principle for many religions.[67] The Initial Declaration was signed by 143 leaders from all of the world's major faiths, including Baha'i Faith, Brahmanism, Brahma Kumaris, Buddhism, Christianity, Hinduism, Indigenous, Interfaith, Islam, Jainism, Judaism, Native American, Neo-Pagan, Sikhism, Taoism, Theosophist, Unitarian Universalist and Zoroastrian.[67][68] In the folklore of several cultures the Golden Rule is depicted by the allegory of the long spoons.

Many different sources claim the Golden Rule as a humanist principle:[69][70]

Trying to live according to the Golden Rule means trying to empathise with other people, including those who may be very different from us. Empathy is at the root of kindness, compassion, understanding and respect, qualities that we all appreciate being shown, whoever we are, whatever we think and wherever we come from. And although it isn't possible to know what it really feels like to be a different person or live in different circumstances and have different life experiences, it isn't difficult for most of us to imagine what would cause us suffering and to try to avoid causing suffering to others. For this reason many people find the Golden Rule's corollary "do not treat people in a way you would not wish to be treated yourself" more pragmatic.[69]

Visit link:

Golden Rule – Wikipedia

Versions of the Golden Rule in dozens of religions and …


A photoshopped "Golden Rule Bus": the bus image was altered to display "The Golden Rule" on its front, and its side was photoshopped to contain the upper part of Scarboro Missions' Golden Rule poster.

Linking the Golden Rule to the “Sheep and Goats” passage, Matthew 25:32-46

A statement by Gautama Buddha, the founder of Buddhism, which is the fifth largest world religion after Christianity, Islam, Hinduism, and Chinese traditional religion:

"Resolve to be tender with the young, compassionate with the aged, sympathetic with the striving, and tolerant with the weak and wrong. Sometime in your life, you will have been all of these." 2

The core beliefs of every religion


The Ethic of Reciprocity — often called the Golden Rule — simply states that all of us are to treat other people as we would wish other people to treat us in return.

On April 5 each year, International Golden Rule Day is observed as a global virtual celebration. Before the 2018 celebration, the web site https://www.goldenruleday.org announced:

“Join us on Thursday, April 5, for a 24-hour global virtual celebration of the Golden Rule; a universal principle shared by nearly all cultural, spiritual, religious, and secular traditions on Earth.

Over the course of 24 hours, people from many corners of the world will address Why the Golden Rule Matters Now as they share how people, organizations and governments can use this Common Principle to create a better world for everyone.

Join us and experience conversations, music, stories, and art inspired by the Golden Rule. Learn new ways to apply the Golden Rule in your life and community.”4

Almost all organized religions, philosophical systems, and secular systems of morality include such an ethic. It is normally intended to apply to the entire human race. Unfortunately, it is too often applied by some people only to believers in the same religion or even to others in the same denomination, of the same gender, the same sexual orientation, etc.

Go here to see the original:

Versions of the Golden Rule in dozens of religions and …

Golden Rule Funeral Homes

Our members are independently owned and operated funeral homes dedicated to exceptional service.

Founded in 1928, OGR's mission is to make independent funeral homes exceptional. We do this by building and supporting member interaction, information exchange and professional business development through a wide range of programs, services and resources. Our Standards of Ethical Conduct guide our members' business practices and philosophy, allowing them to provide unsurpassed care to families "by the Golden Rule."

Link:

Golden Rule Funeral Homes

Golden Rule (fiscal policy) – Wikipedia

The Golden Rule is a guideline for the operation of fiscal policy. The Golden Rule states that over the economic cycle, the Government will borrow only to invest and not to fund current spending. In layman’s terms this means that on average over the ups and downs of an economic cycle the government should only borrow to pay for investment that benefits future generations. Day-to-day spending that benefits today’s taxpayers should be paid for with today’s taxes, not with leveraged investment. Therefore, over the cycle the current budget (i.e., net of investment) must balance or be brought into surplus.
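
The arithmetic behind the rule can be sketched in a few lines. The Python fragment below uses purely hypothetical figures (not actual budget data) to check whether the current budget, net of investment, balances on average over a cycle, so that borrowing only funds investment:

```python
# A minimal sketch with hypothetical figures, not actual UK budget data.
# Each year of the cycle is (current spending, net investment, tax revenue).
cycle = [
    (500, 30, 510),
    (520, 35, 515),
    (540, 40, 545),
]

# Current budget = revenue minus current (non-investment) spending.
current_balances = [rev - spend for spend, _inv, rev in cycle]
avg_current_balance = sum(current_balances) / len(current_balances)

total_borrowing = sum(spend + inv - rev for spend, inv, rev in cycle)
total_investment = sum(inv for _spend, inv, _rev in cycle)

# Golden Rule: over the cycle the current budget balances or is in surplus,
# so any borrowing goes to investment rather than day-to-day spending.
verdict = "meets" if avg_current_balance >= 0 else "breaches"
print(f"average current balance: {avg_current_balance:+.1f}bn ({verdict} the rule)")
print(f"total borrowing {total_borrowing:.1f}bn vs total investment {total_investment:.1f}bn")
```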

The core of the ‘golden rule’ framework is that, as a general rule, policy should be designed to maintain a stable allocation of public sector resources over the course of the business cycle. Stability is defined in terms of the following ratios:

If national income is growing and net worth is positive, this rule implies that, on average, there should be a net surplus of income over expenditure.

The justification for the Golden Rule derives from macroeconomic theory. Other things being equal, an increase in government borrowing raises the real interest rate, consequently crowding out (reducing) investment, because a higher rate of return is required for investment to be profitable. Unless the government uses the borrowed funds to invest in projects with a similar rate of return to private investment, capital accumulation falls, with negative consequences upon economic growth.

The Golden Rule was one of several fiscal policy principles set out by the incoming Labour government in 1997. These were first set out by then Chancellor of the Exchequer Gordon Brown in his 1997 budget speech. Subsequently they were formalised in the Finance Act 1998 and in the Code for Fiscal Stability, approved by the House of Commons in December 1998.

In 2005 there was speculation that the Chancellor had manipulated these rules, as the Treasury had moved the reference frame for the start of the economic cycle to two years earlier (from 1999 to 1997). The implication of this was to allow for £18–22 billion more of borrowing.[1]

The Government’s other fiscal rule is the Sustainable investment rule, which requires it to keep debt at a “prudent level”. This is currently set at below 40% of GDP in each year of the current cycle.

As of 2009, the Golden rule has been abandoned.

In France, the lower house of parliament voted in favour of reforming articles 32, 39 and 42 of the French constitution on 12 July 2011.[2] In order to come into force the amendments need to be passed by a 3/5 majority of the combined upper and lower houses (Congress).

In 2009 articles 109, 115 and 143 of Germany's constitution were amended to introduce the Schuldenbremse ("debt brake"), a balanced budget provision.[3] The reform will come into effect in 2016 for the federal government and in 2020 for the states.

On 7 September 2011, the Spanish Senate approved an amendment to article 135 of the Spanish constitution introducing a cap on the structural deficit of the state (national, regional and municipal).[4] The amendment will come into force from 2020.

On 7 September 2011, the Italian Lower House approved a constitutional reform introducing a balanced budget obligation[5] to Article 81 of the Italian constitution. The rule will come into effect in 2014. That reform is rooted in the European Stability and Growth Pact and in the so-called fiscal compact. It has led to the abandonment of the ideological neutrality that characterized the Italian fiscal constitution in favor of a clearly neoclassical inspiration.[6]

Link:

Golden Rule (fiscal policy) – Wikipedia

Rationalism | Britannica.com

Rationalism, in Western philosophy, the view that regards reason as the chief source and test of knowledge. Holding that reality itself has an inherently logical structure, the rationalist asserts that a class of truths exists that the intellect can grasp directly. There are, according to the rationalists, certain rational principles (especially in logic and mathematics, and even in ethics and metaphysics) that are so fundamental that to deny them is to fall into contradiction. The rationalists' confidence in reason and proof tends, therefore, to detract from their respect for other ways of knowing.

Rationalism has long been the rival of empiricism, the doctrine that all knowledge comes from, and must be tested by, sense experience. As against this doctrine, rationalism holds reason to be a faculty that can lay hold of truths beyond the reach of sense perception, both in certainty and generality. In stressing the existence of a natural light, rationalism has also been the rival of systems claiming esoteric knowledge, whether from mystical experience, revelation, or intuition, and has been opposed to various irrationalisms that tend to stress the biological, the emotional or volitional, the unconscious, or the existential at the expense of the rational.

Rationalism has somewhat different meanings in different fields, depending upon the kind of theory to which it is opposed.

In the psychology of perception, for example, rationalism is in a sense opposed to the genetic psychology of the Swiss scholar Jean Piaget (1896–1980), who, exploring the development of thought and behaviour in the infant, argued that the categories of the mind develop only through the infant's experience in concourse with the world. Similarly, rationalism is opposed to transactionalism, a point of view in psychology according to which human perceptual skills are achievements, accomplished through actions performed in response to an active environment. On this view, the experimental claim is made that perception is conditioned by probability judgments formed on the basis of earlier actions performed in similar situations. As a corrective to these sweeping claims, the rationalist defends a nativism, which holds that certain perceptual and conceptual capacities are innate (as suggested in the case of depth perception by experiments with the visual cliff, which, though platformed over with firm glass, the infant perceives as hazardous), though these native capacities may at times lie dormant until the appropriate conditions for their emergence arise.

In the comparative study of languages, a similar nativism was developed in the 1950s by the innovating syntactician Noam Chomsky, who, acknowledging a debt to René Descartes (1596–1650), explicitly accepted the rationalistic doctrine of innate ideas. Though the thousands of languages spoken in the world differ greatly in sounds and symbols, they sufficiently resemble each other in syntax to suggest that there is a schema of universal grammar determined by innate presettings in the human mind itself. These presettings, which have their basis in the brain, set the pattern for all experience, fix the rules for the formation of meaningful sentences, and explain why languages are readily translatable into one another. It should be added that what rationalists have held about innate ideas is not that some ideas are full-fledged at birth but only that the grasp of certain connections and self-evident principles, when it comes, is due to inborn powers of insight rather than to learning by experience.

Common to all forms of speculative rationalism is the belief that the world is a rationally ordered whole, the parts of which are linked by logical necessity and the structure of which is therefore intelligible. Thus, in metaphysics it is opposed to the view that reality is a disjointed aggregate of incoherent bits and is thus opaque to reason. In particular, it is opposed to the logical atomisms of such thinkers as David Hume (1711–76) and the early Ludwig Wittgenstein (1889–1951), who held that facts are so disconnected that any fact might well have been different from what it is without entailing a change in any other fact. Rationalists have differed, however, with regard to the closeness and completeness with which the facts are bound together. At the lowest level, they have all believed that the law of contradiction ("A and not-A cannot coexist") holds for the real world, which means that every truth is consistent with every other; at the highest level, they have held that all facts go beyond consistency to a positive coherence; i.e., they are so bound up with each other that none could be different without all being different.

In the field where its claims are clearest, in epistemology, or theory of knowledge, rationalism holds that at least some human knowledge is gained through a priori (prior to experience), or rational, insight as distinct from sense experience, which too often provides a confused and merely tentative approach. In the debate between empiricism and rationalism, empiricists hold the simpler and more sweeping position, the Humean claim that all knowledge of fact stems from perception. Rationalists, on the contrary, urge that some, though not all, knowledge arises through direct apprehension by the intellect. What the intellectual faculty apprehends is objects that transcend sense experience: universals and their relations. A universal is an abstraction, a characteristic that may reappear in various instances: the number three, for example, or the triangularity that all triangles have in common. Though these cannot be seen, heard, or felt, rationalists point out that humans can plainly think about them and about their relations. This kind of knowledge, which includes the whole of logic and mathematics as well as fragmentary insights in many other fields, is, in the rationalist view, the most important and certain knowledge that the mind can achieve. Such a priori knowledge is both necessary (i.e., it cannot be conceived as otherwise) and universal, in the sense that it admits of no exceptions. In the critical philosophy of Immanuel Kant (1724–1804), epistemological rationalism finds expression in the claim that the mind imposes its own inherent categories or forms upon incipient experience (see below Epistemological rationalism in modern philosophies).

In ethics, rationalism holds the position that reason, rather than feeling, custom, or authority, is the ultimate court of appeal in judging good and bad, right and wrong. Among major thinkers, the most notable representative of rational ethics is Kant, who held that the way to judge an act is to check its self-consistency as apprehended by the intellect: to note, first, what it is essentially, or in principle (a lie, for example, or a theft), and then to ask if one can consistently will that the principle be made universal. Is theft, then, right? The answer must be No, because, if theft were generally approved, people's property would not be their own as opposed to anyone else's, and theft would then become meaningless; the notion, if universalized, would thus destroy itself, as reason by itself is sufficient to show.

In religion, rationalism commonly means that all human knowledge comes through the use of natural faculties, without the aid of supernatural revelation. Reason is here used in a broader sense, referring to human cognitive powers generally, as opposed to supernatural grace or faith, though it is also in sharp contrast to so-called existential approaches to truth. Reason, for the rationalist, thus stands opposed to many of the religions of the world, including Christianity, which have held that the divine has revealed itself through inspired persons or writings and which have required, at times, that its claims be accepted as infallible, even when they do not accord with natural knowledge. Religious rationalists hold, on the other hand, that if the clear insights of human reason must be set aside in favour of alleged revelation, then human thought is everywhere rendered suspect, even in the reasonings of the theologians themselves. There cannot be two ultimately different ways of warranting truth, they assert; hence rationalism urges that reason, with its standard of consistency, must be the final court of appeal. Religious rationalism can reflect either a traditional piety, when endeavouring to display the alleged sweet reasonableness of religion, or an antiauthoritarian temper, when aiming to supplant religion with the goddess of reason.

Visit link:

Rationalism | Britannica.com

Rationalism (architecture) – Wikipedia

In architecture, rationalism is an architectural current which mostly developed from Italy in the 1920s–1930s. Vitruvius had claimed in his work De Architectura that architecture is a science that can be comprehended rationally. This formulation was taken up and further developed in the architectural treatises of the Renaissance. Progressive art theory of the 18th century opposed the Baroque use of illusionism with the classic beauty of truth and reason.

Twentieth-century rationalism derived less from a special, unified theoretical work than from a common belief that the most varied problems posed by the real world could be resolved by reason. In that respect it represented a reaction to historicism and a contrast to Art Nouveau and Expressionism.

The name rationalism is retroactively applied to a movement in architecture that came about during the Enlightenment (more specifically, neoclassicism), arguing that architecture's intellectual base is primarily in science as opposed to reverence for and emulation of archaic traditions and beliefs. Rational architects, following the philosophy of René Descartes, emphasized geometric forms and ideal proportions.[1]:81–84

The French Louis XVI style (better known as Neoclassicism) emerged in the mid-18th century with its roots in the waning interest of the Baroque period. The architectural notions of the time gravitated more and more to the belief that reason and natural forms are tied closely together, and that the rationality of science should serve as the basis for where structural members should be placed. Towards the end of the 18th century, Jean-Nicolas-Louis Durand, a teacher at the influential École Polytechnique in Paris at the time, argued that architecture in its entirety was based in science.

Other architectural theorists of the period who advanced rationalist ideas include Abbé Jean-Louis de Cordemoy (1631–1713),[2]:559[3]:265 the Venetian Carlo Lodoli (1690–1761),[2]:560 Abbé Marc-Antoine Laugier (1713–1769) and Quatremère de Quincy (1755–1849).[1]:87–92

The architecture of Claude-Nicolas Ledoux (1736–1806) and Étienne-Louis Boullée (1728–99) typify Enlightenment rationalism, with their use of pure geometric forms, including spheres, squares, and cylinders.[1]:92–96

The term structural rationalism most often refers to a 19th-century French movement, usually associated with the theorists Eugène Viollet-le-Duc and Auguste Choisy. Viollet-le-Duc rejected the concept of an ideal architecture and instead saw architecture as a rational construction approach defined by the materials and purpose of the structure. The architect Eugène Train was one of the most important practitioners of this school, particularly with his educational buildings such as the Collège Chaptal and Lycée Voltaire.[4]

Architects such as Henri Labrouste and Auguste Perret incorporated the virtues of structural rationalism throughout the 19th century in their buildings. By the early 20th century, architects such as Hendrik Petrus Berlage were exploring the idea that structure itself could create space without the need for decoration. This gave rise to modernism, which further explored this concept. More specifically, the Soviet Modernist group ASNOVA were known as ‘the Rationalists’.

Rational Architecture (Italian: Architettura razionale) thrived in Italy from the 1920s to the 1940s. In 1926, a group of young architects (Sebastiano Larco, Guido Frette, Carlo Enrico Rava, Adalberto Libera, Luigi Figini, Gino Pollini, and Giuseppe Terragni, 1904–43) founded the so-called Gruppo 7, publishing their manifesto in the magazine Rassegna Italiana. Their declared intent was to strike a middle ground between the classicism of the Novecento Italiano movement and the industrially inspired architecture of Futurism.[5]:203 Their "note" declared:

The hallmark of the earlier avant garde was a contrived impetus and a vain, destructive fury, mingling good and bad elements: the hallmark of today’s youth is a desire for lucidity and wisdom…This must be clear…we do not intend to break with tradition…The new architecture, the true architecture, should be the result of a close association between logic and rationality.[5]:203

One of the first rationalist buildings was the Palazzo Gualino in Turin, built for the financier Riccardo Gualino by the architects Gino Levi-Montalcini and Giuseppe Pagano.[6] Gruppo 7 mounted three exhibitions between 1926 and 1931, and the movement constituted itself as an official body, the Movimento Italiano per l'Architettura Razionale (MIAR), in 1930. Exemplary works include Giuseppe Terragni's Casa del Fascio in Como (1932–36), the Medaglia d'Oro room at the Italian Aeronautical Show in Milan (1934) by Pagano and Marcello Nizzoli, and the Fascist Trades Union Building in Como (1938–43), designed by Cesare Cattaneo, Pietro Lingeri, Augusto Magnani, L. Origoni, and Mario Terragni.[5]:205–9

Pagano became editor of Casabella in 1933 together with Edoardo Persico. Pagano and Persico featured the work of the rationalists in the magazine, and its editorials urged the Italian state to adopt rationalism as its official style. The Rationalists enjoyed some official commissions from the Fascist government of Benito Mussolini, but the state tended to favor the more classically inspired work of the National Union of Architects. Architects associated with the movement collaborated on large official projects of the Mussolini regime, including the University of Rome (begun in 1932) and the Esposizione Universale Roma (EUR) in the southern part of Rome (begun in 1936). The EUR features monumental buildings, many of which are evocative of ancient Roman architecture, but absent ornament, revealing strong geometric forms.[5]:204–7

In the late 1960s, a new rationalist movement emerged in architecture, claiming inspiration from both the Enlightenment and early-20th-century rationalists. Like the earlier rationalists, the movement, known as the Tendenza, was centered in Italy. Practitioners include Carlo Aymonino (1926–2010), Aldo Rossi (1931–97), and Giorgio Grassi. The Italian design magazine Casabella featured the work of these architects and theorists. The work of architectural historian Manfredo Tafuri influenced the movement, and the University Iuav of Venice emerged as a center of the Tendenza after Tafuri became chair of Architecture History in 1968.[1]:157 et seq. A Tendenza exhibition was organized for the 1973 Milan Triennale.[1]:178–183

Rossi's book L'architettura della città, published in 1966, and translated into English as The Architecture of the City in 1982, explored several of the ideas that inform Neo-rationalism. In seeking to develop an understanding of the city beyond simple functionalism, Rossi revives the idea of typology, following from Quatremère de Quincy, as a method for understanding buildings, as well as the larger city. He also writes of the importance of monuments as expressions of the collective memory of the city, and the idea of place as an expression of both physical reality and history.[1]:166–72[7]:178–80

Architects such as Leon Krier, Maurice Culot, and Demetri Porphyrios took Rossi’s ideas to their logical conclusion with a revival of Classical Architecture and Traditional Urbanism. Krier’s witty critique of Modernism, often in the form of cartoons, and Porphyrios’s well crafted philosophical arguments, such as “Classicism is not a Style”, won over a small but talented group of architects to the classical point of view. Organizations such as the Traditional Architecture Group at the RIBA, and the Institute of Classical Architecture attest to their growing number, but mask the Rationalist origins.

In Germany, Oswald Mathias Ungers became the leading practitioner of German rationalism from the mid-1960s.[7]:178–80 Ungers influenced a younger generation of German architects, including Hans Kollhoff, Max Dudler, and Christoph Mäckler.[8]

Originally posted here:

Rationalism (architecture) – Wikipedia

critical rationalism blog – An exploration of critical …

CHAPTER 3 of my thesis Aspects of the Duhem Problem.

The previous chapter concluded with an account of the attempt by Lakatos to retrieve the salient features of falsificationism while accounting for the fact that a research programme may proceed in the face of numerous difficulties, just provided that there is occasional success. His methodology exploits the ambiguity of refutation (the Duhem-Quine problem) to permit a programme to proceed despite seemingly adverse evidence. According to a strict or naive interpretation of falsificationism, adverse evidence should cause the offending theory to be ditched forthwith but of course the point of the Duhem-Quine problem is that we do not know which among the major theory and auxiliary assumptions is at fault. The Lakatos scheme also exploits what is claimed to be an asymmetry in the impact of confirmations and refutations.

The Bayesians offer an explanation and a justification for Lakatos; at the same time they offer a possible solution to the Duhem-Quine problem. The Bayesian enterprise did not set out specifically to solve these problems because Bayesianism offers a comprehensive theory of scientific reasoning. However these are the kind of problems that such a comprehensive theory would be required to solve.

Howson and Urbach, well-regarded and influential exponents of the Bayesian approach, provide an excellent all-round exposition and spirited polemics in defence of the Bayesian system in Scientific Reasoning: The Bayesian Approach (1989). In a nutshell, Bayesianism takes its point of departure from the fact that scientists tend to have degrees of belief in their theories and these degrees of belief obey the probability calculus. Or if their degrees of belief do not obey the calculus, then they should, in order to achieve rationality. According to Howson and Urbach, probabilities should be understood as subjective assessments of credibility, regulated by the requirements that they be overall consistent (ibid 39).

They begin with some comments on the history of probability theory, starting with the Classical Theory, pioneered by Laplace. The classical theory aimed to provide a foundation for gamblers in their calculations of odds in betting, and also for philosophers and scientists to establish grounds of belief in the validity of inductive inference. The seminal book by Laplace was Philosophical Essays on Probabilities (1820) and the leading modern exponents of the Classical Theory have been Keynes and Carnap.

Objectivity is an important feature of the probabilities in the classical theory. They arise from a mathematical relationship between propositions and evidence, hence they are not supposed to depend on any subjective element of appraisal or perception. Carnap's quest for a principle of induction to establish the objective probability of scientific laws foundered on the fact that these laws had to be universal statements, applicable to an infinite domain. Thus no finite body of evidence could ever raise the probability of a law above zero (e divided by infinity is zero).

The Bayesian scheme does not depend on the estimation of objective probabilities in the first instance. The Bayesians start with the probabilities that are assigned to theories by scientists. There is a serious bone of contention among the Bayesians regarding the way that probabilities are assigned, whether they are a matter of subjective belief as argued by Howson and Urbach ("belief Bayesians") or a matter of behaviour, specifically betting behaviour ("betting Bayesians").

The purpose of the Bayesian system is to explain the characteristic features of scientific inference in terms of the probabilities of the various rival hypotheses under consideration, relative to the available evidence, in particular the most recent evidence.

BAYES'S THEOREM

Bayes's Theorem can be written as follows:

P(h|e) = P(e|h)P(h) / P(e),   where P(h) > 0 and P(e) > 0

In this situation we are interested in the credibility of the hypothesis h relative to empirical evidence e. That is, the posterior probability, in the light of the evidence. Written in the above form the theorem states that the probability of the hypothesis conditional on the evidence (the posterior probability of the hypothesis) is equal to the probability of the evidence conditional on the hypothesis multiplied by the probability of the hypothesis in the absence of the evidence (the prior probability), all divided by the probability of the evidence.

Thus:

e confirms or supports h when P(h|e) > P(h); e disconfirms or undermines h when P(h|e) < P(h).

The prior probability of h, designated as P(h), is that before e is considered. This will often be before e is available, but the system is still supposed to work when the evidence is in hand. In this case it has to be left out of account in evaluating the prior probability of the hypothesis. The posterior probability P(h|e) is that after e is admitted into consideration.

As Bayes's Theorem shows, we can relate the posterior probability of a hypothesis to the terms P(h), P(e|h) and P(e). If we know the value of these three terms we can determine whether e confirms h and, more to the point, calculate P(h|e).
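
As a minimal illustration of the calculation, the posterior can be computed directly from these three terms. The values in the sketch below are arbitrary placeholders, not figures from the case studies discussed later:

```python
def posterior(p_h, p_e_given_h, p_e):
    """Bayes's Theorem: P(h|e) = P(e|h) * P(h) / P(e)."""
    assert p_h > 0 and p_e > 0, "the theorem requires P(h) > 0 and P(e) > 0"
    return p_e_given_h * p_h / p_e

# Arbitrary illustrative numbers: prior, likelihood, probability of the evidence.
p_h, p_e_given_h, p_e = 0.5, 0.8, 0.6
p_h_given_e = posterior(p_h, p_e_given_h, p_e)

print(f"P(h|e) = {p_h_given_e:.3f}")
# e confirms h when the posterior exceeds the prior, and undermines it otherwise.
print("e confirms h" if p_h_given_e > p_h else "e undermines h")
```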

The capacity of the Bayesian scheme to provide a solution to the Duhem-Quine problem will be appraised in the light of two examples.

CASE 1. DORLING ON THE ACCELERATION OF THE MOON

Dorling (1979) provides an important case study, bearing directly on the Duhem-Quine problem, in a paper titled "Bayesian Personalism, the Methodology of Scientific Research Programmes, and Duhem's Problem". He is concerned with two issues which arise from the work of Lakatos, and one of these is intimately related to the Duhem-Quine problem.

1(a) Can a theory survive despite empirical refutation? How can the arrow of modus tollens be diverted from the theory to some auxiliary hypothesis? This is essentially the Duhem-Quine problem and it raises the closely related question:

1(b) Can we decide on some rational and empirical grounds whether the arrow of modus tollens should point at a (possibly) refuted theory or at (possibly) refuted auxiliaries?

2. How are we to account for the different weights that are assigned to confirmations and refutations?

In the history of physics and astronomy, successful precise quantitative predictions seem often to have been regarded as great triumphs when apparently similar unsuccessful predictions were regarded not as major disasters but as minor discrepancies. (Dorling, 1979, 177).

The case history concerns a clash between the observed acceleration of the moon and the calculated acceleration based on a hard core of Newtonian theory (T) and an essential auxiliary hypothesis (H) that the effects of tidal friction are too small to influence lunar acceleration. The aim is to evaluate T and H in the light of new and unexpected evidence (E) which was not consistent with them.

For the situation prior to the evidence E Dorling ascribed a probability of 0.9 to Newtonian theory (T) and 0.6 to the auxiliary hypothesis (H). He pointed out that the precise numbers do not matter all that much; we simply had one theory that was highly regarded, with subjective probability approaching 1 and another which was plausible but not nearly so strongly held.

The next step is to calculate the impact of the new evidence E on the subjective probabilities of T and H. This is done by calculating (by the Bayesian calculus) their posterior probabilities (after E) for comparison with the prior probabilities (0.9 and 0.6). One might expect that the unfavourable evidence would lower both by a similar amount, or at least a similar proportion.

Dorling explained that some other probabilities have to be assigned or calculated to feed into the Bayesian formula. Eventually we find that the probability of T has hardly shifted (down by 0.0024 to 0.8976) while in striking contrast the probability of H has collapsed by 0.597 to 0.003. According to Dorling this accords with scientific perceptions at the time and it supports the claim by Lakatos that a vigorous programme can survive refutations provided that it provides opportunities for further work and has some success. Newtonian theory would have easily survived this particular refutation because on the arithmetic its subjective probability scarcely changed.
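
The structure of such a calculation can be sketched as follows. The priors (0.9 for T, 0.6 for H, treated as independent) follow the text, but the likelihoods below are illustrative placeholders rather than Dorling's actual assignments, so the output only reproduces the qualitative asymmetry (T barely moves while H collapses), not his exact figures:

```python
# Sketch of a Dorling-style update. Priors follow the text; the likelihoods
# are illustrative placeholders, NOT Dorling's actual figures.
p_T, p_H = 0.9, 0.6

# Probability of the anomalous evidence E under each combination of T and H.
# E flatly contradicts T-and-H together, so that likelihood is near zero;
# if either part fails, the anomaly is no longer surprising.
likelihood = {
    (True, True): 0.001,
    (True, False): 0.05,
    (False, True): 0.05,
    (False, False): 0.05,
}

prior = {
    (t, h): (p_T if t else 1 - p_T) * (p_H if h else 1 - p_H)
    for t in (True, False) for h in (True, False)
}

p_E = sum(prior[c] * likelihood[c] for c in prior)
post = {c: prior[c] * likelihood[c] / p_E for c in prior}

p_T_given_E = post[(True, True)] + post[(True, False)]
p_H_given_E = post[(True, True)] + post[(False, True)]
print(f"P(T|E) = {p_T_given_E:.3f}, P(H|E) = {p_H_given_E:.3f}")
# T stays close to its prior while H collapses, mirroring the pattern Dorling reports.
```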

This case is doubly valuable for the evaluation of Lakatos because by a historical accident it provided an example of a confirmation as well as a refutation. For a time it was believed that the evidence E supported Newton but subsequent work revealed that there had been an error in the calculations. The point is that before the error emerged, the apparent confirmation of T and H had been treated as a great triumph for the Newtonian programme. And of course we can run the Bayesian calculus, as though E had confirmed T and H, to find what the impact of the apparent confirmation would have been on their posterior probabilities. Their probabilities in this case increased to 0.996 and 0.964 respectively and Dorling uses this result to provide support for the claim that there is a powerfully asymmetrical effect on T between the refutation and the confirmation. He regards the decrease in P from 0.9 to 0.8976 as negligible while the increase to 0.996 represents a fall in the probability of error from 1/10 to 4/1000.

Thus the evidence has more impact in support than it has in opposition, a result from Bayes that agrees with Lakatos.

This latest result strongly suggests that a theory ought to be able to withstand a long succession of refutations of this sort, punctuated only by an occasional confirmation, and its subjective probability still steadily increase on average (Dorling, 1979, 186).

As to the relevance to the Duhem-Quine problem: the task is to pick between H and T. In this instance the substantial reduction in P(H) would indicate that H, the auxiliary hypothesis, is the weak link rather than the hard core of Newtonian theory.

CASE 2. HOWSON AND URBACH ON PROUT'S LAW

The point of this example (used by Lakatos himself) is to show how a theory which appears to be refuted by evidence can survive as an active force for further development, being regarded more highly than the confounding evidence. When this happens, the Duhem-Quine problem is apparently again resolved in favour of the theory.

In 1815 William Prout suggested that hydrogen was a building block of other elements, whose atomic weights were all multiples of the atomic weight of hydrogen. The fit was not exact; for example, boron had a value of 0.829 when according to the theory it should have been 0.875 (a multiple of the figure 0.125). The measured figure for chlorine was 35.83 instead of 36. To overcome these discrepancies Prout and Thomson suggested that the values should be adjusted to fit the theory, with the deviations explained in terms of experimental error. In this case the arrow of modus tollens was directed from the theory to the experimental techniques.

In setting the scene for use of Bayesian theory, Howson and Urbach designated Prout's hypothesis as t. They refer to a as the hypothesis that the accuracy of the measurements was adequate to produce an exact figure. The troublesome evidence is labelled e.

It seems that chemists of the early nineteenth century, such as Prout and Thompson, were fairly certain about the truth of t, but less so of a, though more sure that a is true than that it is false. (ibid, page 98)

In other words they were reasonably happy with their methods and the purity of their chemicals while accepting that they were not perfect.

Feeding in various estimates of the relevant prior probabilities, the effect was to shift from the prior probabilities to the posterior probabilities listed as follows:

P(t) = 0.9 shifted to P(t|e) = 0.878 (down 0.022)
P(a) = 0.6 shifted to P(a|e) = 0.073 (down 0.527)

Howson and Urbach argued that these results explain why it was rational for Prout and Thomson to persist with Prout's hypothesis and to adjust atomic weight measurements to come into line with it. In other words, the arrow of modus tollens is validly directed to a and not t.

Howson and Urbach noted that the results are robust and are not seriously affected by altered initial probabilities: for example if P(t) is changed from 0.9 to 0.7 the posterior probabilities of t and a are 0.65 and 0.21 respectively, still ranking t well above a (though only by a factor of 3 rather than a factor of 10).
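
That robustness claim is easy to probe with the same four-cell machinery used above. In the sketch below the likelihoods are again illustrative placeholders rather than Howson and Urbach's actual assignments, so only the qualitative pattern (t staying well above a under either prior) should be expected to match their figures:

```python
# Sensitivity sketch for the Prout case; the likelihoods are placeholders,
# not Howson and Urbach's actual values.
def posteriors(p_t, p_a, likelihood):
    prior = {(t, a): (p_t if t else 1 - p_t) * (p_a if a else 1 - p_a)
             for t in (True, False) for a in (True, False)}
    p_e = sum(prior[c] * likelihood[c] for c in prior)
    post = {c: prior[c] * likelihood[c] / p_e for c in prior}
    return (post[(True, True)] + post[(True, False)],   # P(t|e)
            post[(True, True)] + post[(False, True)])    # P(a|e)

# A non-integer atomic weight e is impossible if Prout's hypothesis t and the
# accuracy assumption a both hold, so that likelihood is zero.
likelihood = {(True, True): 0.0, (True, False): 0.05,
              (False, True): 0.01, (False, False): 0.05}

for p_t in (0.9, 0.7):   # the original prior for t and the lowered one
    p_t_e, p_a_e = posteriors(p_t, 0.6, likelihood)
    print(f"P(t) = {p_t}: P(t|e) = {p_t_e:.2f}, P(a|e) = {p_a_e:.2f}")
```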

In the light of the calculation they noted that "Prout's hypothesis is still more likely to be true than false, and the auxiliary assumptions are still much more likely to be false than true" (ibid 101). Their use of language was a little unfortunate because we now know that Prout was wrong, and so Howson and Urbach would have done better to speak of credibility or likelihood instead of truth. Indeed, as will be explained, there were dissenting voices at the time.

REVIEW OF THE BAYESIAN APPROACH

Bayesian theory has many admirers, none more so than Howson and Urbach. In their view, the Bayesian approach should become dominant in the philosophy of science, and it should be taken on board by scientists as well. Confronted with evidence from research by Kahneman and Tversky that "in his evaluation of evidence, man is apparently not a conservative Bayesian: he is not a Bayesian at all" (Kahneman and Tversky, 1972, cited in Howson and Urbach, 1989, 293), they reply that:

it is not prejudicial to the conjecture that what we ourselves take to be correct inductive reasoning is Bayesian in character that there should be observable and sometimes systematic deviations from Bayesian precepts ... we should be surprised if on every occasion subjects were apparently to employ impeccable Bayesian reasoning, even in the circumstances that they themselves were to regard Bayesian procedures as canonical. It is, after all, human to err. (Howson and Urbach, 1989, 293-285)

They draw some consolation from the lamentable performance of undergraduates (and a distressing fraction of logicians) in a simple deductive task (page 294). The task is to nominate which of four cards should be turned over to test the statement "if a card has a vowel on one side, then it has an even number on the other side". The visible faces of the four cards are E, K, 4 and 7. The most common answers are the pair E and 4, or 4 alone. The correct answer is E and 7.
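
The answer can be checked mechanically: a card needs to be turned over only if something on its hidden side could falsify the rule. A small enumeration (the representation of the cards here is just an illustration) makes the point:

```python
# Which cards must be turned to test: "if a card has a vowel on one side,
# then it has an even number on the other side"?
# A card must be turned iff some possible hidden face could falsify the rule.

VOWELS = set("AEIOU")

def is_vowel(face):
    return face in VOWELS

def is_even_number(face):
    return face.isdigit() and int(face) % 2 == 0

def falsifies(letter_face, number_face):
    """The rule fails exactly when a vowel is paired with a non-even number."""
    return is_vowel(letter_face) and not is_even_number(number_face)

def must_turn(visible):
    """True if some hidden face would make this card a counterexample."""
    if visible.isdigit():
        # The hidden side is a letter: try every letter of the alphabet.
        return any(falsifies(chr(c), visible) for c in range(ord("A"), ord("Z") + 1))
    # The hidden side is a number: try every digit.
    return any(falsifies(visible, str(n)) for n in range(10))

for card in ["E", "K", "4", "7"]:
    print(card, "turn over" if must_turn(card) else "leave")
# Prints: E turn over, K leave, 4 leave, 7 turn over
```

The 4 card is the trap: whatever letter is on its other side the rule is not violated, whereas an odd number behind the E, or a vowel behind the 7, would refute it.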

The Bayesian approach has some features that give offence to many people. Some object to the subjective elements, some to the arithmetic and some to the concept of probability which was so tarnished by the debacle of Carnap's programme.

Taking the last point first, Howson and Urbach argue cogently that the Bayesian approach should not be subjected to prejudice due to the failure of the classical theory of objective probabilities. The distinctively subjective starting point for the Bayesian calculus of course raises the objection of excessive subjectivism, with the possibility of irrational or arbitrary judgements. To this, Howson and Urbach reply that the structure of argument and calculation that follows after the assignment of prior probabilities resembles the objectivity of deductive inference (including mathematical calculation) from a set of premises. The source of the premises does not detract from the objectivity of the subsequent manipulations that may be performed upon them. Thus Bayesian subjectivism is not inherently more subjective than deductive reasoning.

EXCESSIVE REFLECTION OF THE INPUT

The input consists of prior probabilities (whether beliefs or betting propensities) and this raises another objection, along the lines that the Bayesians emerge with a conclusion (the posterior probability) which overwhelmingly reflects what was fed in, namely the prior probability. Against this is the argument that the prior probability (whatever it is) will shift rapidly towards a figure that reflects the impact of the evidence. Thus any arbitrariness or eccentricity of original beliefs will be rapidly corrected in a rational manner. The same mechanism is supposed to result in rapid convergence between the belief values of different scientists.
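
The convergence argument can be given a simple illustration (the figures are invented for the purpose): two scientists who start from quite different priors for the same hypothesis, and who conditionalise on the same stream of evidence, end up close together fairly quickly.

```python
# Illustration of the convergence argument: two agents with very different priors
# update on the same evidence and their posteriors draw together.
# The likelihood ratio is invented for the example: each observation is assumed
# to be twice as probable if the hypothesis is true as it is if it is false.

LIKELIHOOD_RATIO = 2.0   # P(observation | h) / P(observation | not-h)

def update(p, ratio=LIKELIHOOD_RATIO):
    """Bayes' theorem for one favourable observation."""
    return (ratio * p) / (ratio * p + (1 - p))

p_optimist, p_sceptic = 0.8, 0.1   # sharply differing priors for hypothesis h

for n in range(1, 11):
    p_optimist = update(p_optimist)
    p_sceptic = update(p_sceptic)
    print(f"after {n:2d} observations: optimist {p_optimist:.3f}, sceptic {p_sceptic:.3f}")

# After ten favourable observations both posteriors exceed 0.99 and differ by
# well under 0.01, despite the very different starting points.
```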

To stand up, this latter argument must demonstrate that convergence cannot be equally rapidly achieved by non-Bayesian methods, such as offering a piece of evidence and discussing its implications for the various competing hypotheses or the alternative lines of work without recourse to Bayesian calculations.

As was noted previously, there is a considerable difference of opinion in Bayesian circles about the measure of subjective belief. Some want to use a behavioural measure (actual betting, or propensity to bet); others, including Howson and Urbach, opt for belief rather than behaviour. The betting Bayesians need to answer the question: what, in scientific practice, is equivalent to betting? Is the notion of betting itself really relevant to the scientist's situation? Betting forces a decision (or the bet does not get placed) but scientists can in principle refrain from a firm decision for ever (for good reasons or bad). This brings us back to the problems created by the demand to take a stand or make a decision one way or the other. Even if some kind of behavioural equivalent of betting is invoked, such as working on a particular programme or writing papers related to the programme, there is still the kind of problem, noted below, where a scientist works on a theory which he or she believes to be false.

Similarly formidable problems confront the belief Bayesians. Obviously any retrospective attribution of belief (as in the cases above) calls for heroic assumptions about the consciousness of people long dead. These assumptions expose the limitations of the forced choice approach, which attempts to collapse all the criteria for the decision into a single value. Such an approach (for both betting and belief Bayesians) seems to preclude a complex appraisal of the theoretical problem situation which might be based on multiple criteria. Such an appraisal might run along the lines that theory A is better than theory B in solving some problems and C is better than B on some other criteria, and so certain types of work are required to test or develop each of the rival theories. This is the kind of situation envisaged by Lakatos when he developed his methodology of scientific research programmes.

The forced choice approach cannot comfortably handle the situation of Maxwell, who continued to work on his theories even though he knew they had been found wanting in tests. Maxwell hoped that his theory would come good in the end, despite a persisting run of unfavourable results. Yet another situation is even harder to comprehend in Bayesian terms. Consider a scientist at work on an important and well established theory which that scientist believes (and indeed hopes) to be false. The scientist is working on the theory with the specific aim of refuting it, thus achieving the fame assigned to those who in some small way change the course of scientific history. The scientist is really betting on the falsehood of that theory. These comments reinforce the value of detaching the idea of working on a theory from the need to have belief in it, as noted in the chapter on the Popperians.

REVIEW OF THE CASES

What do the cases do for our appraisal of Bayesian subjectivism? The Dorling example is very impressive on both aspects of the Lakatos scheme: swallowing an anomaly and thriving on a confirmation. The case for Bayesianism (and Lakatos) is reinforced by the fact that Dorling set out to criticise Lakatos, not to praise him. And he remained critical of any attempt to sidestep refutations, because he did not accept that his findings provided any justification for ignoring refutations along the lines of "anything goes".

Finally, let me emphasise that this paper is intended to attack, not to defend, the position of Lakatos, Feyerabend and some of Kuhn's disciples with respect to its cavalier attitude to refutations. I find this attitude rationally justified only under certain stringent conditions: p(T) must be substantially greater than 1/2, the anomalous result must not be readily explainable by any plausible rival theory to T ... (Dorling, 1979, 187).

In this passage Dorling possibly gives the game away. There must not be a significant rival theory that could account for the aberrant evidence E. In the absence of a potential rival to the main theory the battle between a previously successful and wide-ranging theory in one corner (in this case Newton) and a more or less isolated hypothesis and some awkward evidence in another corner is very uneven.

For this reason, it can be argued that the Bayesian scheme lets us down when we most need help, that is, in a choice between major rival systems: a time of crisis with clashing paradigms, or a major challenge such as when general relativity emerged as a serious alternative to Newtonian mechanics. Presumably the major theories (say Newton and Einstein) would have their prior probabilities lowered by the existence of the other, and the supposed aim of the Bayesian calculus in this situation should be to swing support one way or the other on the basis of the most recent evidence. The problem would be to determine which particular piece of evidence should be applied to make the calculations. Each theory is bound to have a great deal of evidence in support, and if there is recourse to a new piece of evidence which appears to favour one rather than the other (the situation with the so-called crucial experiment) then the Duhem-Quine problem arises to challenge the interpretation of the evidence, whichever way it appears to go.

A rather different approach can be used in this situation. It derives from a method of analysis of decision making which was referred to by Popper as "the logic of the situation" but was later replaced by talk of "situational analysis" to take the emphasis off logic. So far as the Duhem-Quine problem is concerned we can hardly appeal to the logic of the situation for a resolution, because it is precisely the logic of the situation that is the problem. But we can appeal to an appraisal of the situation where choices have to be made from a limited range of options.

Scientists need to work in a framework of theory. Prior to the rise of Einstein, what theory could scientists use, for some hundreds of years, apart from that of Newton and his followers? In the absence of a rival of comparable scope, or at least significant potential, there was little alternative to further elaboration of the Newtonian scheme, even if anomalies persisted or accumulated. Awkward pieces of evidence create a challenge to a ruling theory but they do not by themselves provide an alternative. The same applies to the auxiliary hypothesis on tidal friction (mentioned in the first case study above), unless this happens to derive from some non-Newtonian theoretical assumptions that can be extended to rival the Newtonian scheme.

The approach by situational analysis is not hostage to any theory of probability (objective or subjective), or likelihood, or certainty or inductive proof. Nor does it need to speculate about the truth of the ruling theory, in the way that Howson and Urbach speculate about the likelihood that a theory might be true.

This brings us to the Prout example, which is not nearly as impressive as the Dorling case. Howson and Urbach concluded that the Duhem-Quine problem in that instance was resolved in favour of the theory against the evidence, on the basis of a high subjective probability assigned to Prout's law by contemporary chemists. In the early stages of its career Prout's law may have achieved wide acceptance by the scientific community, at least in England, and for this reason Howson and Urbach assigned a very high subjective probability to Prout's hypothesis (0.9). However Continental chemists were always sceptical and by mid-century Stas (and quite likely his Continental colleagues) had concluded that the law was an illusion (Howson and Urbach, 1989, 98). This potentially damning testimony was not invoked by Howson and Urbach to reduce the prior probability of Prout's hypothesis, but it could have been (and probably should have been). Stas may well have given Prout the benefit of the doubt for some time over the experimental methodology, but as methods improved then the fit with Prout should have improved as well. Obviously the fit did not improve, and under these circumstances Prout should have become less plausible, as indeed was the case outside England. If the view of Stas was widespread, then a much lower prior probability should have been used for Prout's theory.

Another point can be made about the high prior probability assigned to the hypothesis. The calculations show that the subjective probability of the auxiliary assumption about measurement accuracy sank from 0.6 to 0.073, and this turned the case in favour of the theory. But there is a flaw of logic there: presumably the whole-number atomic weights were calculated using the same experimental equipment and the same or similar techniques that were used to estimate the atomic weight of chlorine. And the high prior for Prout was based on confidence in the experimental results that were used to pose the whole-number hypothesis in the first place. The evidence that was good enough to back the Prout conjecture should have been good enough to refute it, or at least dramatically lower its probability.

In the event, Prout turned out to be wrong, even if he was on the right track in seeking fundamental building blocks. The anomalies were due to isotopes which could not be separated or detected by chemical methods. So Prout's hypothesis may have provided a framework for ongoing work until the fundamental flaw was revealed by a major theoretical advance. As was the case with Newtonian mechanics in the light of the evidence on the acceleration of the moon, a simple-minded, pragmatic approach might have provided the same outcome without need of Bayesian calculations.

Consequently it is not true to claim, with Howson and Urbach, that "the Bayesian model is essentially correct. By contrast, non-probabilistic theories seem to lack entirely the resources that could deal with Duhem's problem" (Howson and Urbach, 1989, 101).

CONCLUDING COMMENTS

It appears that the Bayesian scheme has revealed a great deal of power in the Dorling example but is quite unimpressive in the Prout example. The requirement that there should not be a major rival theory on the scene is a great disadvantage: when no rival exists there is little option but to keep working on the theory under challenge, even if some anomalies persist, and where a serious alternative does exist it appears that the Bayesians do not help us to make a choice.

Furthermore, internal disagreements call for solutions before the Bayesians can hope to command wider assent; perhaps the most important of these is the difference between the betting and the belief schools of thought on the allocation of subjective probabilities. There is also the worrying aspect of betting behaviour, which is adduced as a possible way of allocating priors; but, as we have seen, there is no real equivalent of betting in scientific practice. One of the shortcomings of the Bayesian approach appears to be an excessive reliance on a particular piece of evidence (the latest), whereas the Popperians, and especially Lakatos, make allowance for time to turn up a great deal of evidence so that preferences may slowly emerge.

This brings us to the point of considering just how evidence does emerge, a topic which has not yet been mentioned but is an essential part of the situation. The next chapter will examine a mode of thought dubbed the New Experimentalism to take account of the dynamics of experimental programs.

More:

critical rationalism blog – An exploration of critical …

Rationalism, Continental | Internet Encyclopedia of Philosophy

Continental rationalism is a retrospective category used to group together certain philosophers working in continental Europe in the 17th and 18th centuries, in particular, Descartes, Spinoza, and Leibniz, especially as they can be regarded in contrast with representatives of British empiricism, most notably, Locke, Berkeley, and Hume. Whereas the British empiricists held that all knowledge has its origin in, and is limited by, experience, the Continental rationalists thought that knowledge has its foundation in the scrutiny and orderly deployment of ideas and principles proper to the mind itself. The rationalists did not spurn experience as is sometimes mistakenly alleged; they were thoroughly immersed in the rapid developments of the new science, and in some cases led those developments. They held, however, that experience alone, while useful in practical matters, provides an inadequate foundation for genuine knowledge.

The fact that Continental rationalism and British empiricism are retrospectively applied terms does not mean that the distinction that they signify is anachronistic. Leibniz's New Essays on Human Understanding, for instance, outlines stark contrasts between his own way of thinking and that of Locke, which track many features of the rationalist/empiricist distinction as it tends to be applied in retrospect. There was no rationalist creed or manifesto to which Descartes, Spinoza, and Leibniz all subscribed (nor, for that matter, was there an empiricist one). Nevertheless, with due caution, it is possible to use the Continental rationalism category (and its empiricist counterpart) to highlight significant points of convergence in the philosophies of Descartes, Spinoza, and Leibniz, inter alia. These include: (1) a doctrine of innate ideas; (2) the application of mathematical method to philosophy; and (3) the use of a priori principles in the construction of substance-based metaphysical systems.

According to the Historisches Wörterbuch der Philosophie, the word rationaliste appears in 16th century France, as early as 1539, in opposition to empirique. In his New Organon, first published in 1620 (in Latin), Francis Bacon juxtaposes rationalism and empiricism in memorable terms:

Those who have treated of the sciences have been either empiricists [Empirici] or dogmatists [Dogmatici]. Empiricists [Empirici], like ants, simply accumulate and use; Rationalists [Rationales], like spiders, spin webs from themselves; the way of the bee is in between: it takes material from the flowers of the garden and the field; but it has the ability to convert and digest them. (The New Organon, p. 79; Spedding, 1, 201)

Bacon's association of rationalists with dogmatists in this passage foreshadows Kant's use of the term dogmatisch in reference, especially, to the Wolffian brand of rationalist philosophy prevalent in 18th century Germany. Nevertheless, Bacon's use of rationales does not refer to Continental rationalism, which developed only after the New Organon, but rather to the Scholastic philosophy that dominated the medieval period. Moreover, while Bacon is, in retrospect, often considered the father of modern empiricism, the above-quoted passage shows him no friendlier to the empirici than to the rationales. Thus, Bacon's juxtaposition of rationalism and empiricism should not be confused with the distinction as it develops over the course of the 17th and 18th centuries, although his imagery is certainly suggestive.

The distinction appears in an influential form as the backdrop to Kant's critical philosophy (which is often loosely understood as a kind of synthesis of certain aspects of Continental rationalism and British empiricism) at the end of the 18th century. However, it was not until the time of Hegel in the first half of the 19th century that the terms rationalism and empiricism were applied to separating the figures of the 17th and 18th centuries into contrasting epistemological camps in the fashion with which we are familiar today. In his Lectures on the History of Philosophy, Hegel describes an opposition between a priori thought, on the one hand, according to which "the determinations which should be valid for thought should be taken from thought itself", and, on the other hand, the determination that "we must begin and end and think, etc., from experience". He describes this as the opposition between Rationalismus and Empirismus (Werke 20, 121).

Perhaps the best recognized and most commonly made distinction between rationalists and empiricists concerns the question of the source of ideas. Whereas rationalists tend to think (with some exceptions discussed below) that some ideas, at least, such as the idea of God, are innate, empiricists hold that all ideas come from experience. Although the rationalists tend to be remembered for their positive doctrine concerning innate ideas, their assertions are matched by a rejection of the notion that all ideas can be accounted for on the basis of experience alone. In some Continental rationalists, especially in Spinoza, the negative doctrine is more apparent than the positive. The distinction is worth bearing in mind, in order to avoid the very false impression that the rationalists held to innate ideas because the empiricist alternative had not come along yet. (In general, the British empiricists came after the rationalists.) The Aristotelian doctrine, nihil in intellectu nisi prius in sensu (nothing in the intellect unless first in the senses), had been dominant for centuries, and it was in reaction against this that the rationalists revived in modified form the contrasting Platonic doctrine of innate ideas.

Descartes distinguishes between three kinds of ideas: adventitious (adventitiae), factitious (factae), and innate (innatae). As an example of an adventitious idea, Descartes gives the common idea of the sun (yellow, bright, round) as it is perceived through the senses. As an example of a factitious idea, Descartes cites the idea of the sun constructed via astronomical reasoning (vast, gaseous body). According to Descartes, all ideas which represent true, immutable, and eternal essences are innate. Innate ideas, for Descartes, include the idea of God, the mind, and mathematical truths, such as the fact that it pertains to the nature of a triangle that its three angles equal two right angles.

By conceiving some ideas as innate, Descartes does not mean that children are born with fully actualized conceptions of, for example, triangles and their properties. This is a common misconception of the rationalist doctrine of innate ideas. Descartes strives to correct it in Comments on a Certain Broadsheet, where he compares the innateness of ideas in the mind to the tendency which some babies are born with to contract certain diseases: "it is not so much that the babies of such families suffer from these diseases in their mother's womb, but simply that they are born with a certain faculty or tendency to contract them" (CSM I, 304). In other words, innate ideas exist in the mind potentially, as tendencies; they are then actualized by means of active thought under certain circumstances, such as seeing a triangular figure.

At various points, Descartes defends his doctrine of innate ideas against philosophers (Hobbes, Gassendi, and Regius, inter alia) who hold that all ideas enter the mind through the senses, and that there are no ideas apart from images. Descartes is relatively consistent on his reasons for thinking that some ideas, at least, must be innate. His principal line of argument proceeds by showing that there are certain ideas, for example, the idea of a triangle, that cannot be either adventitious or factitious; since ideas are either adventitious, factitious, or innate, by process of elimination, such ideas must be innate.

Take Descartes' favorite example of the idea of a triangle. The argument that the idea of a triangle cannot be adventitious proceeds roughly as follows. A triangle is composed of straight lines. However, straight lines never enter our mind via the senses, since when we examine straight lines under a magnifying lens, they turn out to be wavy or irregular in some way. Since we cannot derive the idea of straight lines from the senses, we cannot derive the idea of a true triangle, which is made up of straight lines, through the senses. Sometimes Descartes makes the point in slightly different terms by insisting that there is no similarity between the corporeal motions of the sense organs and the ideas formed in the mind on the occasion of those motions (CSM I, 304; CSMK III, 187). One such dissimilarity, which is particularly striking, is the contrast between the particularity of all corporeal motions and the universality that pure ideas can attain when conjoined to form necessary truths. Descartes makes this point in clear terms to Regius:

I would like our author to tell me what the corporeal motion is that is capable of forming some common notion to the effect that things which are equal to a third thing are equal to each other, or any other he cares to take. For all such motions are particular, whereas the common notions are universal and bear no affinity with, or relation to, the motions. (CSM I, 304-5)

Next, Descartes has to show that the idea of a triangle is not factitious. This is where the doctrine of true and immutable natures comes in. For Descartes, if, for example, the idea that the three angles of a triangle are equal to two right angles were his own invention, it would be mutable, like the idea of a gold mountain, which can be changed at whim into the idea of a silver mountain. Instead, when Descartes thinks about his idea of a triangle, he is able to discover eternal properties of it that are not mutable in this way; hence, they are not invented (CSMK III, 184).

Since, therefore, the idea of a triangle can be neither adventitious nor factitious, it must be innate; that is to say, the mind has an innate tendency or power to form this idea from its own purely intellectual resources when prompted to do so.

Descartes' insistence that there is no similarity between the corporeal motions of our sense organs and the ideas formed in the mind on the occasion of those motions raises a difficulty for understanding how any ideas could be adventitious. Since none of our ideas have any similarity to the corporeal motions of the sense organs (not even the idea of motion itself), it seems that no ideas can in fact have their origin in a source external to the mind. The reason that we have an idea of heat in the presence of fire, for instance, is not, then, because the idea is somehow transmitted by the fire. Rather, Descartes thinks that God designed us in such a way that we form the idea of heat on the occasion of certain corporeal motions in our sense organs (and we form other sensory ideas on the occasion of other corporeal motions). Thus, there is a sense in which, for Descartes, all ideas are innate, and his tripartite division between kinds of ideas becomes difficult to maintain.

Per his so-called doctrine of parallelism, Spinoza conceives the mind and the body as one and the same thing, conceived under different attributes (to wit, thought and extension). (See Benedict de Spinoza: Metaphysics.) As a result, Spinoza denies that there is any causal interaction between mind and body, and so Spinoza denies that any ideas are caused by bodily change. Just as bodies can be affected only by other bodies, so ideas can be affected only by other ideas. This does not mean, however, that all ideas are innate for Spinoza, as they very clearly are for Leibniz (see below). Just as the body can be conceived to be affected by external objects conceived under the attribute of extension (that is, as bodies), so the mind can (as it were, in parallel) be conceived to be affected by the same objects conceived under the attribute of thought (that is, as ideas). Ideas gained in this way, from encounters with external objects (conceived as ideas), constitute knowledge of the first kind, or imagination, for Spinoza, and all such ideas are inadequate, or in other words, confused and lacking order for the intellect. Adequate ideas, on the other hand, which can be formed via Spinoza's second and third kinds of knowledge (reason and intuitive knowledge, respectively), and which are clear and distinct and have order for the intellect, are not gained through chance encounters with external objects; rather, adequate ideas can be explained in terms of resources intrinsic to the mind. (For more on Spinoza's three kinds of knowledge and the distinction between adequate and inadequate ideas, see Benedict de Spinoza: Epistemology.)

The mind, for Spinoza, just by virtue of having ideas, which is its essence, has ideas of what Spinoza calls common notions, or in other words, those things which are equally in the part and in the whole. Examples of common notions include motion and rest, extension, and indeed God. Take extension for example. To think of any body however small or however large is to have a perfectly complete idea of extension. So, insofar as the mind has any idea of body (and, for Spinoza, the human mind is the idea of the human body, and so always has ideas of body), it has a perfectly adequate idea of extension. The same can be said for motion and rest. The same can also be said for God, except that God is not equally in the part and in the whole of extension only, but of all things. Spinoza treats these common notions as principles of reasoning. Anything that can be deduced on their basis is also adequate.

It is not clear if Spinoza's common notions should be considered innate ideas. Spinoza speaks of active and passive ideas, and adequate and inadequate ideas. He associates the former with the intellect and the latter with the imagination, but innate idea is not an explicit category in Spinoza's theory of ideas as it is in Descartes' and also Leibniz's. Common notions are not in the mind independent of the mind's relation with its object (the body); nevertheless, since it is the mind's nature to be the idea of the body, it is part of the mind's nature to have common notions. Commentators differ over the question of whether Spinoza had a positive doctrine of innate ideas; it is clear, however, that he denied that all ideas come about through encounters with external objects; moreover, he believed that those ideas which do come about through encounters with external objects are of inferior epistemic value compared with those produced through the mind's own intrinsic resources; this is enough to put him in the rationalist camp on the question of the origin of ideas.

Of the three great rationalists, Leibniz propounded the most thoroughgoing doctrine of innate ideas. For Leibniz, all ideas are strictly speaking innate. In a general and relatively straightforward sense, this viewpoint is a direct consequence of Leibniz's conception of individual substance. According to Leibniz, each substance is a world apart, independent of everything outside of itself except for God. Thus "all our phenomena, that is to say, all the things that can ever happen to us, are only the results of our own being" (L, 312); or, in Leibniz's famous phrase from the Monadology, monads "have no windows", meaning there is no way for sensory data to enter monads from the outside. In this more general sense, then, to give an explanation for Leibniz's doctrine of innate ideas would be to explain his conception of individual substance and the arguments and considerations which motivate it. (See Section 4, b, iii, below for a discussion of Leibniz's conception of substance; see also Gottfried Leibniz: Metaphysics.) This would be to circumvent the issues and questions which are typically at the heart of the debate over the existence of innate ideas, which concern the extent to which certain kinds of perceptions, ideas, and propositions can be accounted for on the basis of experience. Although Leibniz's more general reasons for embracing innate ideas stem from his unique brand of substance metaphysics, Leibniz does enter into the debate over innate ideas, as it were, addressing the more specific questions regarding the source of given kinds of ideas, most notably in his dialogic engagement with Locke's philosophy, New Essays on Human Understanding.

Due to Leibniz's conception of individual substance, nothing actually comes from a sensory experience, where a sensory experience is understood to involve direct concourse with things outside of the mind. However, Leibniz does have a means for distinguishing between sensations and purely intellectual thoughts within the framework of his substance metaphysics. For Leibniz, although each monad or individual substance expresses (or represents) the entire universe from its own unique point of view, it does so with a greater or lesser degree of clarity and distinctness. Bare monads, such as comprise minerals and vegetation, express the rest of the world only in the most confused fashion. Rational minds, by contrast, have a much greater proportion of clear and distinct perceptions, and so express more of the world clearly and distinctly than do bare monads. When an individual substance attains a more perfect expression of the world (in the sense that it attains a less confused expression of the world), it is said to act; when its expression becomes more confused, it is said to be acted upon. Using this distinction, Leibniz is able to reconcile the terms of his philosophy with everyday conceptions. Although, strictly speaking, no monad is acted upon by any other, nor acts upon any other directly, it is possible to speak this way, just as, Leibniz says, Copernicans can still speak of the motion of the sun for everyday purposes, while understanding that the sun does not in fact move. It is in this sense that Leibniz enters into the debate concerning the origin of our ideas.

Leibniz distinguishes between ideas (idées) and thoughts (pensées) (or, sometimes, notions (notions) or concepts (conceptus)). Ideas exist in the soul whether we actually perceive them or are aware of them or not. It is these ideas that Leibniz contends are innate. Thoughts, by contrast, is Leibniz's designation for ideas which we actually form or conceive at any given time. In this sense, thoughts can be formed on the basis of a sensory experience (with the above caveats regarding the meaning a sensory experience can have in Leibniz's thought) or on the basis of an internal experience, or a reflection. Leibniz alternatively characterizes our ideas as aptitudes, preformations, and as dispositions to represent something when the occasion for thinking of it arises. On multiple occasions, Leibniz uses the metaphor of the veins present in marble to illustrate his understanding of innate ideas. Just as the veins dispose the sculptor to shape the marble in certain ways, so do our ideas dispose us to have certain thoughts on the occasion of certain experiences.

Leibniz rejects the view that the mind cannot have ideas without being aware that it has them. (See Gottfried Leibniz: Philosophy of Mind.) Much of the disagreement between Locke and Leibniz on the question of innate ideas turns on this point, since Locke (at least as Leibniz represents him in the New Essays) is not able to make any sense out of the notion that the mind can have ideas without being aware of them. Much of Leibniz's defense of his innate ideas doctrine takes the form of replying to Locke's charge that it is absurd to hold that the mind could think (that is, have ideas) without being aware of it.

Leibniz marshals several considerations in support of his view that the mind is not always aware of its ideas. The fact that we can store many more ideas in our understanding than we can be aware of at any given time is one. Leibniz also points to the phenomenology of attention; we do not attend to everything in our perceptual field at any given time; rather we focus on certain things at the expense of others. To convey a sense of what it might be like for the mind to have perceptions and ideas in a dreamless sleep, Leibniz asks the reader to imagine subtracting our attention from perceptual experience; since we can distinguish between what is attended to and what is not, subtracting attention does not eliminate perception altogether.

While such considerations suggest the possibility of innate ideas, they do not in and of themselves prove that innate ideas are necessary to explain the full scope of human cognition. The empiricist tends to think that if innate ideas are not necessary to explain cognition, then they should be abandoned as gratuitous metaphysical constructs. Leibniz does have arguments designed to show that innate ideas are needed for a full account of human cognition.

In the first place, Leibniz recalls favorably the famous scenario from Plato's Meno where Socrates teaches a slave boy to grasp abstract mathematical truths merely by asking questions. The anecdote is supposed to indicate that mathematical truths can be generated by the mind alone, in the absence of particular sensory experiences, if only the mind is prompted to discover what it contains within itself. Concerning mathematics and geometry, Leibniz remarks: "one could construct these sciences in one's study and even with one's eyes closed, without learning from sight or even from touch any of the needed truths" (NE, 77). So, on these grounds, Leibniz contends that without innate ideas, we could not explain the sorts of cognitive capacities exhibited in the mathematical sciences.

A second argument concerns our capacity to grasp certain necessary or eternal truths. Leibniz says that necessary truths can be suggested, justified, and confirmed by experience, but that they can be proved only by the understanding alone (NE, 80). Leibniz does not explain this point further, but he seems to have in mind the point later made by both Hume and Kant (to different ends), that experience on its own can never account for the kind of certainty that we find in mathematical and metaphysical truths. For Leibniz, if it can be granted that we can be certain of propositions in mathematics and metaphysics (and Leibniz thinks this must be granted), recourse must be had to principles innate to the mind in order to explain our ability to be certain of such things.

It is worth noting briefly the position of Nicolas Malebranche on innate ideas, since Malebranche is often considered among the rationalists, yet he denied the doctrine of innate ideas. Malebranche's reasons for rejecting innate ideas were anything but empiricist in nature, however. His leading objection stems from the infinity of ideas that the mind is able to form independently of the senses; as an example, Malebranche cites the infinite number of triangles of which the mind could in principle, albeit not in practice, form ideas. Unlike Descartes and Leibniz, who view innate ideas as tendencies or dispositions to form certain thoughts under certain circumstances, Malebranche understands them as fully formed entities that would have to exist somehow in the mind were they to exist there innately. Given this conception, Malebranche finds it unlikely that God would have created so many things along with the mind of man (The Search After Truth, p. 227). Since God already contains the ideas of all things within Himself, Malebranche thinks that it would be much more economical if God were simply to reveal to us the ideas of things that already exist in him rather than placing an infinity of ideas in each human mind. Malebranche's tenet that we see all things in God thus follows upon the principle that God always acts in the simplest ways. Malebranche finds further support for this doctrine from the fact that it places human minds in a position of complete dependence on God. Thus, if Malebranche's rejection of innate ideas distinguishes him from other rationalists, it does so not from an empiricist standpoint, but rather because of the extent to which his position on ideas is theologically motivated.

In one sense, what it means to be a rationalist is to model philosophy on mathematics, and, in particular, geometry. This means that the rationalist begins with definitions and intuitively self-evident axioms and proceeds thence to deduce a philosophical system of knowledge that is both certain and complete. This at least is the goal and (with some qualifications to be explored below) the claim. In no work of rationalist philosophy is this procedure more apparent than in Spinoza's Ethics, laid out famously in the geometrical manner (more geometrico). Nevertheless, Descartes' main works (and those of Leibniz as well), although not as overtly more geometrico as Spinoza's Ethics, are also modelled after geometry, and it is Descartes' celebrated methodological program that first introduces mathematics as a model for philosophy.

Perhaps Descartes' clearest and most well-known statement of mathematics' role as paradigm appears in the Discourse on the Method:

Those long chains of very simple and easy reasonings, which geometers customarily use to arrive at their most difficult demonstrations, had given me occasion to suppose that all the things which can fall under human knowledge are interconnected in the same way. (CSM I, 120)

However, Descartes' promotion of mathematics as a model for philosophy dates back to his early, unfinished work, Rules for the Direction of the Mind. It is in this work that Descartes first outlines his standards for certainty that have since come to be so closely associated with him and with the rationalist enterprise more generally.

In Rule 2, Descartes declares that henceforth only what is certain should be valued and counted as knowledge. This means the rejection of all merely probable reasoning, which Descartes associates with the philosophy of the Schools. Descartes admits that according to this criterion, only arithmetic and geometry thus far count as knowledge. But Descartes does not conclude that only in these disciplines is it possible to attain knowledge. According to Descartes, the reason that certainty has eluded philosophers has as much to do with the disdain that philosophers have for the simplest truths as it does with the subject matter. Admittedly, the objects of arithmetic and geometry are especially pure and simple, or, as Descartes will later say, clear and distinct. Nevertheless, certainty can be attained in philosophy as well, provided the right method is followed.

Descartes distinguishes between two ways of achieving knowledge: "through experience and through deduction [...] [W]e must note that while our experiences of things are often deceptive, the deduction or pure inference of one thing from another can never be performed wrongly by an intellect which is in the least degree rational [...]" (CSM I, 12). This is a clear statement of Descartes' methodological rationalism. Building up knowledge through accumulated experience can only ever lead to the sort of probable knowledge that Descartes finds lacking. Pure inference, by contrast, can never go astray, at least when it is conducted by right reason. Of course, the truth value of a deductive chain is only as good as the first truths, or axioms, whose truth the deductions preserve. It is for this reason that Descartes' method relies on intuition as well as deduction. Intuition provides the first principles of a deductive system, for Descartes. Intuition differs from deduction insofar as it is not discursive. Intuition grasps its object in an immediate way. In its broadest outlines, Descartes' method is just the use of intuition and deduction in the orderly attainment and preservation of certainty.

In subsequent Rules, Descartes goes on to elaborate a more specific methodological program, which involves reducing complicated matters step by step to simpler, intuitively graspable truths, and then using those simple truths as principles from which to deduce knowledge of more complicated matters. It is generally accepted by scholars that this more specific methodological program reappears in a more iconic form in the Discourse on the Method as the four rules for gaining knowledge outlined in Part 2. There is some doubt as to the extent to which this more specific methodological program actually plays any role in Descartes' mature philosophy as it is expressed in the Meditations and Principles (see Garber 2001, chapter 2). There can be no doubt, however, that the broader methodological guidelines outlined above were a permanent feature of Descartes' thought.

In response to a request to cast his Meditations in the geometrical style (that is, in the style of Euclid's Elements), Descartes distinguishes between two aspects of the geometrical style: order and method, explaining:

The order consists simply in this. The items which are put forward first must be known entirely without the aid of what comes later; and the remaining items must be arranged in such a way that their demonstration depends solely on what has gone before. I did try to follow this order very carefully in my Meditations [...] (CSM II, 110)

Elsewhere, Descartes contrasts this order, which he calls the order of reasons, with another order, which he associates with scholasticism, and which he calls the order of subject-matter (see CSMK III, 163). What Descartes understands as geometrical order or the order of reasons is just the procedure of starting with what is most simple, and proceeding in a step-wise, deliberate fashion to deduce consequences from there. Descartes' order is governed by what can be clearly and distinctly intuited, and by what can be clearly and distinctly inferred from such self-evident intuitions (rather than by a concern for organizing the discussion into neat topical categories per the order of subject-matter).

As for method, Descartes distinguishes between analysis and synthesis. For Descartes, analysis and synthesis represent different methods of demonstrating a conclusion or set of conclusions. Analysis exhibits the path by which the conclusion comes to be grasped. As such, it can be thought of as the order of discovery or order of knowledge. Synthesis, by contrast, wherein conclusions are deduced from a series of definitions, postulates, and axioms, as in Euclid's Elements, for instance, follows not the order in which things are discovered, but rather the order that things bear to one another in reality. As such, it can be thought of as the order of being. God, for example, is prior to the human mind in the order of being (since God created the human mind), and so in the synthetic mode of demonstration the existence of God is demonstrated before the existence of the human mind. However, knowledge of one's own mind precedes knowledge of God, at least in Descartes' philosophy, and so in the analytic mode of demonstration the cogito is demonstrated before the existence of God. Descartes' preference is for analysis, because he thinks that it is superior in helping the reader to discover the things for herself, and so in bringing about the intellectual conversion which it is the Meditations' goal to effectuate in the minds of its readers. According to Descartes, while synthesis, in laying out demonstrations systematically, is useful in preempting dissent, it is inferior in engaging the mind of the reader.

Two primary distinctions can be made in summarizing Descartes' methodology: (1) the distinction between the order of reasons and the order of subject-matter; and (2) the analysis/synthesis distinction. With respect to the first distinction, the great Continental rationalists are united. All adhere to the order of reasons, as we have described it above, rather than the order of subject-matter. Even though the rationalists disagree about how exactly to interpret the content of the order of reasons, their common commitment to following an order of reasons is a hallmark of their rationalism. Although there are points of convergence with respect to the second, analysis/synthesis distinction, there are also clear points of divergence, and this distinction can be useful in highlighting the range of approaches the rationalists adopt to mathematical methodology.

Of the great Continental rationalists, Spinoza is the most closely associated with mathematical method due to the striking presentation of his magnum opus, the Ethics (as well as his presentation of Descartes' Principles), in geometrical fashion. The fact that Spinoza is the only major rationalist to present his main work more geometrico might create the impression that he is the only philosopher to employ mathematical method in constructing and elaborating his philosophical system. This impression is mistaken, since both Descartes and Leibniz also apply mathematical method to philosophy. Nevertheless, there are differences between Spinoza's employment of mathematical method and that of Descartes (and Leibniz). The most striking, of course, is the form of Spinoza's Ethics. Each part begins with a series of definitions, axioms, and postulates and proceeds thence to deduce propositions, the demonstrations of which refer back to the definitions, axioms, postulates and previously demonstrated propositions on which they depend. Of course, this is just the method of presenting findings that Descartes in the Second Replies dubbed synthesis. For Descartes, analysis and synthesis differ only in pedagogical respects: whereas analysis is better for helping the reader discover the truth for herself, synthesis is better in compelling agreement.

There is some evidence that Spinoza's motivations for employing synthesis were in part pedagogical. In Lodewijk Meyer's preface to Spinoza's Principles of Cartesian Philosophy, Meyer uses Descartes' Second Replies distinction between analysis and synthesis to explain the motivation for the work. Meyer criticizes Descartes' followers for being too uncritical in their enthusiasm for Descartes' thought, and attributes this in part to the relative opacity of Descartes' analytic mode of presentation. Thus, for Meyer, the motivation for presenting Descartes' Principles in the synthetic manner is to make the proofs more transparent, and thereby leave less excuse for blind acceptance of Descartes' conclusions. It is not clear to what extent Meyer's explanation of the mode of presentation of Spinoza's Principles of Cartesian Philosophy applies to Spinoza's Ethics. In the first place, although Spinoza approved the preface, he did not author it himself. Secondly, while such an explanation seems especially suited to a work in which Spinoza's chief goal was to present another philosopher's thought in a different form, there is no reason to assume that it applies to the presentation of Spinoza's own philosophy. Scholars have differed on how to interpret the geometrical form of Spinoza's Ethics. However, it is generally accepted that Spinoza's use of synthesis does not merely represent a pedagogical preference. There is reason to think that Spinoza's methodology differs from that of Descartes in a somewhat deeper way.

There is another version of the analysis/synthesis distinction besides Descartes' that was also influential in the 17th century, that is, Hobbes' version of the distinction. Although there is little direct evidence that Spinoza was influenced by Hobbes' version of the distinction, some scholars have claimed a connection, and, in any case, it is useful to view Spinoza's methodology in light of the Hobbesian alternative.

Synthesis and analysis are not modes of demonstrating findings that have already been made, for Hobbes, as they are for Descartes, but rather complementary means of generating findings; in particular, they are forms of causal reasoning. For Hobbes, analysis is reasoning from effects to causes; synthesis is reasoning in the other direction, from causes to effects. For example, by analysis, we infer that geometrical objects are constructed via the motions of points and lines and surfaces. Once motion has been established as the principle of geometry, it is then possible, via synthesis, to construct the possible effects of motion, and thereby, to make new discoveries in geometry. According to the Hobbesian schema, then, synthesis is not merely a mode of presenting truths, but a means of generating and discovering truths. (For Hobbes' method, see The English Works of Thomas Hobbes of Malmesbury, vol. 1, ch. 6.) There is reason to think that synthesis had this kind of significance for Spinoza as well, as a means of discovery, not merely presentation. Spinoza's methodology, and, in particular, his theory of definitions, bear this out.

Spinoza's method begins with reflection on the nature of a given true idea. The given true idea serves as a standard by which the mind learns the distinction between true and false ideas, and also between the intellect and the imagination, and how to direct itself properly in the discovery of true ideas. The correct formulation of definitions emerges as the most important factor in directing the mind properly in the discovery of true ideas. To illustrate his conception of a good definition, Spinoza contrasts two definitions of a circle. On one definition, a circle is a figure in which all the lines from the center to the circumference are equal. On another, a circle is the figure described by the rotation of a line around one of its ends, which is fixed. For Spinoza, the second definition is superior. Whereas the first definition gives only a property of the circle, the second provides the cause from which all of the properties can be deduced. Hence, what makes a definition a good definition, for Spinoza, is its capacity to serve as a basis for the discovery of truths about the thing. The circle, of course, is just an example. For Spinoza, the method is perfected when it arrives at a true idea of the first cause of all things, that is, God. Only the method is perfected with a true idea of God, however, not the philosophy. The philosophy itself begins with a true idea of God, since the philosophy consists in deducing the consequences from a true idea of God. With this in mind, the definition of God is of paramount importance. In correspondence, Spinoza compares contrasting definitions of God, explaining that he chose the one which expresses the efficient cause from which all of the properties of God can be deduced.

In this light, it becomes clear that the geometrical presentation of Spinoza's philosophy is not merely a pedagogic preference. The definitions that appear at the outset of the five parts of the Ethics do not serve merely to make explicit what might otherwise have remained only implicit in Descartes' analytic mode of presentation. Rather, key definitions, such as the definition of God, are principles that underwrite the development of the system. As a result, Hobbes' conception of the analysis/synthesis distinction throws an important light on Spinoza's procedure. There is a movement of analysis in arriving at the causal definition of God from the preliminary given true idea. Then there is a movement of synthesis in deducing consequences from that causal definition. Of course, Descartes' analysis/synthesis distinction still applies, since, after all, Spinoza's system is presented in the synthetic manner in the Ethics. But the geometrical style of presentation is not merely a pedagogical device in Spinoza's case. It is also a clue to the nature of his system.

Leibniz is openly critical of Descartes' distinction between analysis and synthesis, writing, "Those who think that the analytic presentation consists in revealing the origin of a discovery, the synthetic in keeping it concealed, are in error" (L, 233). This comment is aimed at Descartes' formulation of the distinction in the Second Replies. Leibniz is explicit about his adherence to the viewpoint that seems to be implied by Spinoza's methodology: synthesis is itself a means of discovering truth no less than analysis, not merely a mode of presentation. Leibniz's understanding of analysis and synthesis is closer to the Hobbesian conception, which views analysis and synthesis as different directions of causal reasoning: from effects to causes (analysis) and from causes to effects (synthesis). Leibniz formulates the distinction in his own terms as follows:

Synthesis is achieved when we begin from principles and run through truths in good order, thus discovering certain progressions and setting up tables, or sometimes general formulas, in which the answers to emerging questions can later be discovered. Analysis goes back to the principles in order to solve the given problems only [...] (L, 232)

Leibniz thus conceives synthesis and analysis in relation to principles.

Leibniz lays great stress on the importance of establishing the possibility of ideas, that is to say, establishing that ideas do not involve contradiction, and this applies a fortiori to first principles. For Leibniz, the Cartesian criterion of clear and distinct perception does not suffice for establishing the possibility of an idea. Leibniz is critical, in particular, of Descartes' ontological argument on the grounds that Descartes neglects to demonstrate the possibility of the idea of a most perfect being on which the argument depends. It is possible to mistakenly assume that an idea is possible, when in reality it is contradictory. Leibniz gives the example of a wheel turning at the fastest possible rate. It might at first seem that this idea is legitimate, but if a spoke of the wheel were extended beyond the rim, the end of the spoke would move faster than a nail in the rim itself, revealing a contradiction in the original notion.

For Leibniz, there are two ways of establishing the possibility of an idea: by experience (a posteriori) and by reducing concepts via analysis down to a relation of identity (a priori). Leibniz credits mathematicians and geometers with pushing furthest the practice of demonstrating what would otherwise normally be taken for granted. For example, in Meditations on Knowledge, Truth, and Ideas, Leibniz writes, "That brilliant genius Pascal agrees entirely with these principles when he says, in his famous dissertation on the geometrical spirit [...] that it is the task of the geometer to define all terms though ever so little obscure and to prove all truths though little doubtful" (L, 294). Leibniz credits his own doctrine of the possibility of ideas with clarifying exactly what it means for something to be beyond doubt and obscurity.

Leibniz describes the result of the reduction of concepts to identity variously as follows: when "the thing is resolved into simple primitive notions understood in themselves" (L, 231); when "every ingredient that enters into a distinct concept is itself known distinctly"; when "analysis is carried through to the end" (L, 292). Since, for Leibniz, all true ideas can be reduced to simple identities, it is, in principle, possible to derive all truths via a movement of synthesis from such simple identities, in the way that mathematicians produce systems of knowledge on the basis of their basic definitions and axioms. This kind of a priori knowledge of the world is restricted to God, however. According to Leibniz, it is only possible for our finite minds to have this kind of knowledge (which Leibniz calls intuitive or adequate) in the case of things which do not depend on experience, or what Leibniz also calls truths of reason, which include abstract logical and metaphysical truths and mathematical propositions. In the case of truths of fact, by contrast, with the exception of immediately graspable facts of experience, such as "I think" and "Various things are thought by me," we are restricted to formulating hypotheses to explain the phenomena of sensory experience, and such knowledge of the world can, for us, only ever achieve the status of hypothesis, though our hypothetical knowledge can be continually improved and refined. (See Section 5c below for a discussion of hypotheses in Leibniz.)

Leibniz is in line with his rationalist predecessors in emphasizing the importance of proper order in philosophizing. Leibniz's emphasis on establishing the possibility of ideas prior to using them in demonstrating propositions could be understood as a refinement of the geometrical order that Descartes established over against the order of subject-matter. Leibniz emphasizes order in another connection vis-à-vis Locke. As Leibniz makes clear in his New Essays, one of the clearest points of disagreement between him and Locke is on the question of innate ideas. In preliminary comments that Leibniz drew up upon first reading Locke's Essay, and which he sent to Locke via Burnett, Leibniz makes the following point regarding philosophical order:

Concerning the question whether there are ideas and truths born with us, I do not find it absolutely necessary for the beginnings, nor for the practice of the art of thinking, to answer it; whether they all come to us from outside, or they come from within us, we will reason correctly provided that we keep in mind what I said above, and that we proceed with order and without prejudice. The question of the origin of our ideas and of our maxims is not preliminary in philosophy, and it is necessary to have made great progress in order to resolve it. (Philosophische Schriften, vol. 5, pp. 15-16)

Leibniz's allusion to what he said above refers to remarks regarding the establishment of the possibility of ideas via experience and the principle of identity. This passage makes it clear that, from Leibniz's point of view, the order in which Locke philosophizes is quite misguided, since Locke begins with a question that should only be addressed after great progress has already been made, particularly with respect to the criteria for distinguishing between true and false ideas, and for establishing legitimate philosophical principles. Empiricists generally put much less emphasis on the order of philosophizing, since they do not aim to reason from first principles grasped a priori.

A fundamental tenet of rationalism, perhaps the fundamental tenet, is that the world is intelligible. The intelligibility tenet means that everything that happens in the world happens in an orderly, lawful, rational manner, and that the mind, in principle, if not always in practice, is able to reproduce the interconnections of things in thought, provided that it adheres to certain rules of right reasoning. The intelligibility of the world is sometimes couched in terms of a denial of brute facts, where a brute fact is something that just is the case, that is, something that obtains without any reason or explanation (even in principle). Many of the a priori principles associated with rationalism can be understood either as versions or implications of the principle of intelligibility. As such, the principle of intelligibility functions as a basic principle of rationalism. It appears under various guises in the great rationalist systems and is used to generate contrasting philosophical systems. Indeed, one of the chief criticisms of rationalism is that its principles can consistently be used to generate contradictory conclusions and systems of thought. The clearest and best-known statement of the intelligibility of the world is Leibniz's principle of sufficient reason. Some scholars have recently emphasized this principle as the key to understanding rationalism (see Della Rocca 2008, chapter 1).

The intelligibility principle raises some classic philosophical problems. Chief among these is a problem of question-begging or circularity. The task of proving that the world is intelligible seems to have to rely on some of the very principles of reasoning in question. In the 17th century, discussion of this fundamental problem centered on the so-called Cartesian circle. The problem is still debated by scholars of 17th-century thought today. The viability of the rationalist enterprise seems to depend, at least in part, on a satisfactory answer to this problem.

The most important rational principle in Descartes' philosophy, the principle which does a great deal of the work in generating its details, is the principle according to which whatever is clearly and distinctly perceived to be true is true. This principle means that if we can form any clear and distinct ideas, then we will be able to trust that they accurately represent their objects and give us certain knowledge of reality. Descartes' clear and distinct ideas doctrine is central to his conception of the world's intelligibility, and indeed, it is central to the rationalists' conception of the world's intelligibility more broadly. Although Spinoza and Leibniz both work to refine understanding of what it is to have clear and distinct ideas, they both subscribe to the view that the mind, when directed properly, is able to accurately represent certain basic features of reality, such as the nature of substance.

For Descartes, it cannot be taken for granted from the outset that what we clearly and distinctly perceive to be true is in fact true. It is possible to entertain the doubt that an all-powerful deceiving being fashioned the mind so that it is deceived even in those things it perceives clearly and distinctly. Nevertheless, it is only possible to entertain this doubt when we are not having clear and distinct perceptions. When we are perceiving things clearly and distinctly, their truth is undeniable. Moreover, we can use our capacity for clear and distinct perceptions to demonstrate that the mind was not fashioned by an all-powerful deceiving being, but rather by an all-powerful benevolent being who would not fashion us so as to be deceived even when using our minds properly. Having proved the existence of an all-powerful benevolent being qua creator of our minds, we can no longer entertain any doubts regarding our clear and distinct ideas even when we are not presently engaged in clear and distinct perceptions.

Descartes' legitimation of clear and distinct perception via his proof of a benevolent God raises notorious interpretive challenges. Scholars disagree about how to resolve the problem of the Cartesian circle. However, there is general consensus that Descartes' procedure is not, in fact, guilty of vicious logical circularity. In order for Descartes' procedure to avoid circularity, it is generally agreed that in some sense clear and distinct ideas need already to be legitimate before the proof of God's existence. It is only in another sense that God's existence legitimates their truth. Scholars disagree on how exactly to understand those different senses, but they generally agree that there is some sense at least in which clear and distinct ideas are self-legitimating, or, otherwise, not in need of legitimation.

That some ideas provide a basic standard of truth is a fundamental tenet of rationalism, and undergirds all the other rationalist principles at work in the construction of rationalist systems of philosophy. For the rationalists, if it cannot be taken for granted in at least some sense from the outset that the mind is capable of discerning the difference between truth and falsehood, then one never gets beyond skepticism.

The Continental rationalists deploy the principle of intelligibility and subordinate rational principles derived from it in generating much of the content of their respective philosophical systems. In no aspect of their systems is the application of rational principles to the generation of philosophical content more evident and more clearly illustrative of contrasting interpretations of these principles than in that for which the Continental rationalists are arguably best known: substance metaphysics.

Descartes deploys his clear and distinct ideas doctrine in justifying his most well-known metaphysical position: substance dualism. The first step in Descartes' demonstration of mind-body dualism, or, in his terminology, of a real distinction (that is, a distinction between two substances) between mind and body, is to show that while it is possible to doubt that one has a body, it is not possible to doubt that one is thinking. As Descartes makes clear in the Principles of Philosophy, one of the chief upshots of his famous cogito argument is the discovery of the distinction between a thinking thing and a corporeal thing. The impossibility of doubting one's existence is not the impossibility of doubting that one is a human being with a body with arms and legs and a head. It is the impossibility of doubting, rather, that one doubts, perceives, dreams, imagines, understands, wills, denies, and engages in the other modalities that Descartes attributes to the thinking thing. It is possible to think of oneself as a thing that thinks, and to recognize that it is impossible to doubt that one thinks, while continuing to doubt that one has a body with arms and legs and a head. So the cogito drives a preliminary wedge between mind and body.

At this stage of the argument, however, Descartes has simply established that it is possible to conceive of himself as a thinking thing without conceiving of himself as a corporeal thing. It remains possible that, in fact, the thinking thing is identical with a corporeal thing, in other words, that thought is somehow something a body can do; Descartes has yet to establish that the epistemological distinction between his knowledge of his mind and his knowledge of body that results from the hyperbolic doubt translates to a metaphysical or ontological distinction between mind and body. The move from the epistemological distinction to the ontological distinction proceeds via the doctrine of clear and distinct ideas. Having established that whatever he clearly and distinctly perceives is true, Descartes is in a position to affirm the real distinction between mind and body.

In this life, it is never possible to clearly and distinctly perceive a mind actually separate from a body, at least in the case of finite, created minds, because minds and bodies are intimately unified in the composite human being. So Descartes cannot base his proof for the real distinction of mind and body on the clear and distinct perception that mind and body are in fact independently existing things. Rather, Descartes' argument is based on the joint claims that (1) it is possible to have a clear and distinct idea of thought apart from extension and vice versa; and (2) whatever we can clearly and distinctly understand is capable of being created by God exactly as we clearly and distinctly understand it. Thus, the fact that we can clearly and distinctly understand thought apart from extension and vice versa entails that thinking things and extended things are really distinct (in the sense that they are distinct substances separable by God).

The foregoing argument relies on certain background assumptions which it is now necessary to explain, in particular, Descartes' conception of substance. In the Principles, Descartes defines substance as "a thing which exists in such a way as to depend on no other thing for its existence" (CSM I, 210). Properly speaking, only God can be understood to depend on no other thing, and so only God is a substance in the absolute sense. Nevertheless, Descartes allows that, in a relative sense, created things can count as substances too. A created thing is a substance if the only thing it relies upon for its existence is "the ordinary concurrence of God" (ibid.). Only mind and body qualify as substances in this secondary sense. Everything else is a modification or property of minds and bodies. A second point is that, for Descartes, we do not have direct knowledge of substance; rather, we come to know substance by virtue of its attributes. Thought and extension are the attributes or properties in virtue of which we come to know thinking and corporeal substance, or mind and body. This point relies on the application of a key rational principle, to wit, that nothingness has no properties. For Descartes, there cannot simply be the properties of thinking and extension without these properties having something in which to inhere. Thinking and extension are not just any properties; Descartes calls them principal attributes because they constitute the nature of their respective substances. Other, non-essential properties cannot be understood without the principal attribute, but the principal attribute can be understood without any of the non-essential properties. For example, motion cannot be understood without extension, but extension can be understood without motion.

Descartes' conception of mind and body as distinct substances includes some interesting corollaries which result from a characteristic application of rational principles and account for some characteristic doctrinal differences between Descartes and empiricist philosophers. One consequence of Descartes' conception of the mind as a substance whose principal attribute is thought is that the mind must always be thinking. Since, for Descartes, thinking is something of which the thinker is necessarily aware, Descartes' commitment to thought as an essential, and therefore inseparable, property of the mind raises some awkward difficulties. Arnauld, for example, raises one such difficulty in his Objections to Descartes' Meditations: presumably there is much going on in the mind of an infant in its mother's womb of which the infant is not aware. In response to this objection, and also in response to another obvious problem, that of dreamless sleep, Descartes insists on a distinction between being aware of or conscious of our thoughts at the time we are thinking them, and remembering them afterwards (CSMK III, 357). The infant is, in fact, aware of its thinking in the mother's womb, but it is aware only of very confused sensory thoughts of pain and pleasure and heat (not, as Descartes points out, metaphysical matters (CSMK III, 189)), which it does not remember afterwards. Similarly, the mind is always thinking even in the most dreamless sleep; it is just that the mind often immediately forgets much of what it has been aware of.

Descartes' commitment to embracing the implications of his substance-attribute metaphysics, however counter-intuitive, puts him at odds with, for instance, Locke, who mocks the Cartesian doctrine of the always-thinking soul in his Essay Concerning Human Understanding. For Locke, the question whether the soul is always thinking or not must be decided by experience and not, as Locke says, merely by hypothesis (An Essay Concerning Human Understanding, Book II, Chapter 1). The evidence of dreamless sleep makes it obvious, for Locke, that the soul is not always thinking. Because Locke ties personal identity to memory, if the soul were to think while asleep without knowing it, the sleeping man and the waking man would be two different persons.

Descartes' commitment to the always-thinking mind is a consequence of his commitment to a more basic rational principle. In establishing his conception of thinking substance, Descartes reasons from the attribute of thinking to the substance of thinking on the grounds that nothing has no properties. In this case, he reasons in the other direction, from the substance of thinking, that is, the mind, to the property of thinking, on the converse grounds that something must have properties, and the properties it must have are the properties that make it what it is; in the case of the mind, that property is thought. (Leibniz found a way to maintain the integrity of the rational principle without contradicting experience: admit that thinking need not be conscious. This way the mind can still think in a dreamless sleep, and so avoid being without any properties, without any problem about the recollection of awareness.)

Another consequence of Descartes' substance metaphysics concerns corporeal substance. For Descartes, we do not know corporeal substance directly, but rather through a grasp of its principal attribute, extension. Extension qua property requires a substance in which to inhere because of the rational principle that nothing has no properties. This rational principle leads to another characteristic Cartesian position regarding the material world: the denial of a vacuum. Descartes denies that space can be empty or void. Space has the property of being extended in length, breadth, and depth, and such properties require a substance in which to inhere. Thus nothing, that is, a void or vacuum, cannot have such properties, given the rational principle that nothing has no properties. This means that all space is filled with substance, even if it is imperceptible. Once again, Descartes answers a debated philosophical question on the basis of a rational principle.

If Descartes is known for his dualism, Spinoza, of course, is known for monism, the doctrine that there is only one substance. Spinoza's argument for substance monism (laid out in the first fifteen propositions of the Ethics) has no essential basis in sensory experience; it proceeds through rational argumentation and the deployment of rational principles. Although Spinoza provides one a posteriori argument for God's existence, he makes clear that he presents it only because it is easier to grasp than the a priori arguments, and not because it is in any way necessary.

The crucial step in the argument for substance monism comes in Ethics 1p5: "In Nature there cannot be two or more substances of the same nature or attribute." It is at this proposition that Descartes (and Leibniz, and many others) would part ways with Spinoza. The most striking and controversial implication of this proposition, at least from a Cartesian perspective, is that human minds cannot qualify as substances, since human minds all share the same nature or attribute, that is, thought. In Spinoza's philosophy, human minds are actually themselves properties (Spinoza calls them modes) of a more basic, infinite substance.

The argument for 1p5 works as follows. If there were two or more distinct substances, there would have to be some way to distinguish between them. There are two possible distinctions to be made: either by a difference in their affections or by a difference in their attributes. For Spinoza, a substance is something which exists in itself and can be conceived through itself; an attribute is "what the intellect perceives of a substance, as constituting its essence" (Ethics 1d4). Spinoza's conception of attributes is a matter of longstanding scholarly debate, but for present purposes, we can think of it along Cartesian lines. For Descartes, substance is always grasped through a principal property, which is the nature or essence of the substance. Spinoza agrees that an attribute is that through which the mind conceives the nature or essence of substance. With this in mind, if a distinction between two substances were to be made on the basis of a difference in attributes, then there would not be two substances of the same attribute, as the proposition indicates. This means that if there were two substances of the same attribute, it would be necessary to distinguish between them on the basis of a difference in modes or affections.

Spinoza conceives of an affection or mode as something which exists in another and needs to be conceived through another. Given this conception of affections, it is impossible, for Spinoza, to distinguish between two substances on the basis of a difference in affections. Doing so would be somewhat akin to affirming that there are two apples on the basis of a difference between two colors, when one apple can quite possibly have a red part and a green part. Just as color differences do not per se determine differences between apples, modal differences cannot determine a difference between substances; you could just be dealing with one substance bearing multiple different affections. It is notable that in 1p5, Spinoza uses virtually the same substance-attribute schema as Descartes to deny a fundamental feature of Descartes' system.

Having established 1p5, the next major step in Spinoza's argument for substance monism is to establish the necessary existence and infinity of substance. For Spinoza, if things have nothing in common with each other, one cannot be the cause of the other. This thesis depends upon assumptions that lie at the heart of Spinoza's rationalism. Something that has nothing in common with another thing cannot be the cause of the other thing because "things that have nothing in common with one another cannot be understood through one another" (Ethics 1a5). But, for Spinoza, effects should be able to be understood through causes. Indeed, what it is to understand something, for Spinoza, is to understand its cause. The order of knowledge, provided that the knowledge is genuine, or, as Spinoza says, adequate, must map onto the order of being, and vice versa. Thus, Spinoza's claim that if things have nothing in common with one another, one cannot be the cause of the other, is an expression of Spinoza's fundamental, rationalist commitment to the intelligibility of the world. Given this assumption, and given the fact that no two substances have anything in common with one another, since no two substances share the same nature or attribute, it follows that if a substance is to exist, it must exist as causa sui (self-caused); in other words, it must pertain to the essence of substance to exist. Moreover, Spinoza thinks that since there is nothing that has anything in common with a given substance, there is therefore nothing to limit the nature of a given substance, and so every substance will necessarily be infinite. This assertion depends on another deep-seated assumption of Spinoza's philosophy: nothing limits itself, but everything by virtue of its very nature affirms its own nature and existence as much as possible.

At this stage, Spinoza has argued that substances of a single attribute exist necessarily and are necessarily infinite. The last major stage of the argument for substance monism is the transition from multiple substances of a single attribute to only one substance of infinite attributes. Scholars have expressed varying degrees of satisfaction with the lucidity of this transition. It seems to work as follows. It is possible to attribute many attributes to one substance. The more reality or being each thing has, the more attributes belong to it. Therefore, an absolutely infinite being is a being that consists of infinite attributes. Spinoza calls an absolutely infinite being or substance consisting of infinite attributes "God." Spinoza gives four distinct arguments for God's existence in Ethics 1p11. The first is commonly interpreted as Spinoza's version of an ontological argument. It refers back to 1p7, where Spinoza proved that it pertains to the essence of substance to exist. The second argument is relevant to present purposes, since it turns on Spinoza's version of the principle of sufficient reason: "For each thing there must be assigned a cause, or reason, both for its existence and for its nonexistence" (Ethics 1p11dem). But there can be no reason for God's nonexistence for the same reasons that all substances are necessarily infinite: there is nothing outside of God that is able to limit Him, and nothing limits itself. Once again, Spinoza's argument rests upon his assumption that things by nature affirm their own existence. The third argument is a posteriori, and the fourth pivots, like the second, on the assumption that things by nature affirm their own existence.

Having proven that a being consisting of infinite attributes exists, Spinoza's argument for substance monism is nearly complete. It remains only to point out that no substance besides God can exist, because if it did, it would have to share at least one of God's infinite attributes, which, by 1p5, is impossible. Everything that exists, then, is either an attribute or an affection of God.

Leibniz's universe consists of an infinity of monads or simple substances, and God. For Leibniz, the universe must be composed of monads or simple substances. His justification for this claim is relatively straightforward. There must be simples, because there are compounds, and compounds are just collections of simples. To be simple, for Leibniz, means to be without parts, and thus to be indivisible. For Leibniz, the simples or monads are "the true atoms of nature" (L, 643). However, material atoms are "contrary to reason" (L, 456). Manifold a priori considerations lead Leibniz to reject material atoms. In the first place, the notion of a material atom is contradictory in Leibniz's view. Matter is extended, and that which is extended is divisible into parts. The very notion of an atom, however, is the notion of something indivisible, lacking parts.

From a different perspective, Leibniz's dynamical investigations provide another argument against material atoms. Absolute rigidity is included in the notion of a material atom, since any elasticity in the atom could only be accounted for on the basis of parts within the atom shifting their position with respect to each other, which is contrary to the notion of a partless atom. According to Leibniz's analysis of impact, however, absolute rigidity is shown not to make sense. Consider the rebound of one atom as a result of its collision with another. If the atoms were absolutely rigid, the change in motion resulting from the collision would have to happen instantaneously, or, as Leibniz says, "through a leap" or "in a moment" (L, 446). The atom would change from initial motion to rest to rebounded motion without passing through any intermediary degrees of motion. Since the body must pass through all the intermediary degrees of motion in transitioning from one state of motion to another, it must not be absolutely rigid, but rather elastic; the analysis of the parts of the body must, in correlation with the degree of motion, proceed to infinity. Leibniz's dynamical argument against material atoms turns on what he calls the law of continuity, an a priori principle according to which no change occurs through a leap.
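
The same point can be put in a minimal modern sketch (again a reconstruction, not Leibniz's own formalism). Let v(t) be the velocity of the rebounding body. A perfectly rigid rebound would require v(t) to jump from +v₀ to −v₀ at a single instant, so that a change of 2v₀ occurs over a vanishing interval and the implied acceleration is unbounded:

a = Δv / Δt → ∞ as Δt → 0

If, as the law of continuity demands, v(t) must instead pass through every intermediate value, including rest, over some finite interval, then the body must deform and recover during the collision, which is just to say that it is elastic rather than absolutely rigid.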

The true unities, or true atoms of nature, therefore, cannot be material; they must be spiritual or metaphysical substances akin to souls. Since Leibniz's spiritual substances, or monads, are absolutely simple, without parts, they admit neither of dissolution nor composition. Moreover, there can be no interaction between monads; monads cannot receive impressions or undergo alterations by means of being affected from the outside, since, in Leibniz's famous phrase from the Monadology, monads "have no windows" (L, 643). Monads must, however, have qualities; otherwise there would be no way to explain the changes we see in things and the diversity of nature. Indeed, following from Leibniz's principle of the identity of indiscernibles, no two monads can be exactly alike, since each monad stands in a unique relation to the rest, and, for Leibniz, each monad's relation to the rest is a distinctive feature of its nature. The way in which, for Leibniz, monads can have qualities while remaining simple, in other words, the only way there can be multitude in simplicity, is if monads are characterized and distinguished by means of their perceptions. Leibniz's universe, in summary, consists in monads, simple spiritual substances, characterized and distinguished from one another by a unique series of perceptions determined by each monad's unique relationship vis-à-vis the others.

Of the great rationalists, Leibniz is the most explicit about the principles of reasoning that govern his thought. Leibniz singles out two, in particular, as the most fundamental rational principles of his philosophy: the principle of contradiction and the principle of sufficient reason. According to the principle of contradiction, whatever involves a contradiction is false. According to the principle of sufficient reason, there is no fact or true proposition without there being a sufficient reason for its being so and not otherwise (L, 646). Corresponding to these two principles of reasoning are two kinds of truths: truths of reasoning and truths of fact. For Leibniz, truths of reasoning are necessary, and their opposite is impossible. Truths of fact, by contrast, are contingent, and their opposite is possible. Truths of reasoning are associated by most commentators with the principle of contradiction because they can be reduced via analysis to a relation between two primitive ideas whose identity is intuitively evident. Thus, it is possible to grasp why it is impossible for truths of reasoning to be otherwise. However, this kind of resolution is only possible in the case of abstract propositions, such as the propositions of mathematics (see Section 3c above). Contingent truths, or truths of fact, by contrast, such as "Caesar crossed the Rubicon," to use the example Leibniz gives in the Discourse on Metaphysics, are infinitely complicated. Although, for Leibniz, every predicate is contained in its subject, to reduce the relationship between Caesar's notion and his action of crossing the Rubicon would require an infinite analysis impossible for finite minds. "Caesar crossed the Rubicon" is a contingent proposition because there is another possible world in which Caesar did not cross the Rubicon. To understand the reason for Caesar's crossing, then, entails understanding why this world exists rather than any other possible world. It is for this reason that contingent truths are associated with the principle of sufficient reason. Although the opposite of truths of fact is possible, there is nevertheless a sufficient reason why the fact is so and not otherwise, even though this reason cannot be known by finite minds.

Truths of fact, then, need to be explained; there must be a sufficient reason for them. However, according to Leibniz, a sufficient reason for existence cannot be found merely in any one individual thing or even in the whole aggregate and series of things (L, 486). That is to say, the sufficient reason for any given contingent fact cannot be found within the world of which it is a part. The sufficient reason must explain why this world exists rather than another possible world, and this reason must lie outside the world itself. For Leibniz, the ultimate reason for things must be contained in a necessary substance that creates the world, that is, God. But if the existence of God is to ground the series of contingent facts that make up the world, there must be a sufficient reason why God created this world rather than any of the other infinite possible worlds contained in his understanding. As a perfect being, God would only have chosen to bring this world into existence rather than any other because it is the best of all possible worlds. God's choice, therefore, is governed by the principle of fitness, or what Leibniz also calls "the principle of the best" (L, 647). The best world, according to Leibniz, is the one which maximizes perfection; and the most perfect world is the one which balances the greatest possible variety with the greatest possible order. God achieves maximal perfection in the world through what Leibniz calls the pre-established harmony. Although the world is made up of an infinity of monads with no direct interaction with one another, God harmonizes the perceptions of each monad with the perceptions of every other monad, such that each monad represents a unique perspective on the rest of the universe according to its position vis-à-vis the others.

According to Leibniz's philosophy, in the case of all true propositions, the predicate is contained in the subject. This is often known as the predicate-in-notion principle. The relationship between predicate and subject can only be reduced to an identity relation in the case of truths of reason, whereas in the case of truths of fact, the reduction requires an infinite analysis. Nevertheless, in both cases, it is possible in principle (which is to say, for an infinite intellect) to know everything that will ever happen to an individual substance, and even everything that will happen in the world of an individual substance, on the basis of an examination of the individual substance's notion, since each substance is an expression of the entire world. Leibniz's predicate-in-notion principle therefore unifies his two great principles of reasoning, the principle of contradiction and the principle of sufficient reason, since the relation between predicate and subject is either such that it is impossible for it to be otherwise or such that there is a sufficient reason why it is as it is and not otherwise. Moreover, it represents a particularly robust expression of the principle of intelligibility at the very heart of Leibniz's system. There is a reason why everything is as it is, whether that reason is subject to finite or only to infinite analysis.

(See also: 17th Century Theories of Substance.)

Rationalism is often criticized for placing too much confidence in the ability of reason alone to know the world. The extent to which one finds this criticism justified depends largely on one's view of reason. For Hume, for instance, knowledge of matters of fact is gained exclusively through experience; reason is merely a faculty for comparing ideas gained through experience; it is thus parasitic upon experience, and has no claim whatsoever to grasp anything about the world itself, let alone any special claim. For Kant, reason is a mental faculty with an inherent tendency to transgress the bounds of possible experience in an effort to grasp the metaphysical foundations of the phenomenal realm. Since knowledge of the world is limited to objects of possible experience, for Kant, reason, with its delusions of grasping reality beyond those limits, must be subject to critique.

Sometimes rationalism is charged with neglecting or undervaluing experience, and with embarrassingly having no means of accounting for the tremendous success of the experimental sciences. While the criticism of the confidence placed in reason may be defensible given a certain conception of reason (which may or may not itself be ultimately defensible), the latter charge of neglecting experience is not; more often than not, it is the product of a false caricature of rationalism.

Descartes and Leibniz were among the leading mathematicians of their day and stood at the forefront of science. While Spinoza distinguished himself more as a political thinker and as an interpreter of scripture (albeit a notorious one) than as a mathematician, Spinoza too performed experiments, kept abreast of the leading science of the day, and was renowned as an expert craftsman of lenses. Far from neglecting experience, the great rationalists had, in general, a sophisticated understanding of the role of experience and, indeed, of experiment in the acquisition and development of knowledge. The fact that the rationalists held that experience and experiment cannot serve as foundations for knowledge, but must be fitted within, and interpreted in light of, a rational epistemic framework, should not be confused with a neglect of experience and experiment.

One of the stated purposes of Descartes' Meditations, and, in particular, of the hyperbolic doubts with which it commences, is to reveal to the mind of the reader the limitations of its reliance on the senses, which Descartes regards as an inadequate foundation for knowledge. By leading the mind away from the senses, which often deceive, and which yield only confused ideas, Descartes prepares the reader to discover the clear and distinct perceptions of the pure intellect, which provide a proper foundation for genuine knowledge. Nevertheless, empirical observations and experimentation clearly had an important role to play in Descartes' natural philosophy, as evidenced by his own private empirical and experimental research, especially in optics and anatomy, and by his explicit statements in several writings on the role and importance of observation and experiment.

In Part 6 of the Discourse on the Method, Descartes makes an open plea for assistance, both financial and otherwise, in making systematic empirical observations and conducting experiments. Also in Discourse Part 6, Descartes lays out his program for developing knowledge of nature. It begins with the discovery of certain "seeds of truth" implanted naturally in our souls (CSM I, 144). From them, Descartes seeks to derive the first principles and causes of everything. Descartes' Meditations illustrates these first stages of the program. By "seeds of truth" Descartes has in mind certain intuitions, including the ideas of thinking and extension and, in particular, of God. On the basis of clearly and distinctly perceiving the distinction between what belongs properly to extension (figure, position, motion) and what does not (colors, sounds, smells, and so forth), Descartes discovers the principles of physics, including the laws of motion. From these principles, it is possible to deduce many particular ways in which the details of the world might be, only a small fraction of which represent the way the world actually is. It is as a result of the distance, as it were, between physical principles and laws of nature, on the one hand, and the particular details of the world, on the other, that, for Descartes, observations and experiments become necessary.

Descartes is ambivalent about the relationship between physical principles and particulars, and about the role that observation and experiment play in mediating this relationship. On the one hand, Descartes expresses commitment to the ideal of a science deduced with certainty from intuitively grasped first principles. Because of the great variety of mutually incompatible consequences that can be derived from physical principles, observation and experiment are required even in the ideal deductive science to discriminate between actual consequences and merely possible ones. According to the ideal of deductive science, however, observation and experiment should be used only to facilitate the deduction of effects from first causes, and not as a basis for an inference to possible explanations of natural phenomena, as Descartes makes clear at one point in his Principles of Philosophy (CSM I, 249). If the explanations were only possible, or hypothetical, the science could not lay claim to certainty per the deductive ideal, but merely to probability.

On the other hand, Descartes states explicitly at another point in the Principles of Philosophy that the explanations provided of such phenomena as the motion of celestial bodies and the nature of the earth's elements should be regarded merely as hypotheses arrived at on the basis of a posteriori reasoning (CSM I, 255); while Descartes says that such hypotheses must agree with observation and facilitate predictions, they need not in fact reflect the actual causes of phenomena. Descartes appears to concede, albeit reluctantly, that when it comes to explaining particular phenomena, hypothetical explanations and moral certainty (that is, mere probability) are all that can be hoped for.

Scholars have offered a range of explanations for the inconsistency in Descartes' writings on the question of the relation between first principles and particulars. It has been suggested that the inconsistency within the Principles of Philosophy reflects different stages of its composition (see Garber 1978). However the inconsistency might be explained, it is clear that Descartes did not take it for granted that the ideal of a deductive science of nature could be realized. Moreover, whether or not Descartes ultimately believed the ideal of deductive science was realizable, he was unambiguous about the importance of observation and experiment in bridging the distance between physical principles and particular phenomena. (For further discussion, see René Descartes: Scientific Method.)

The one work that Spinoza published under his own name in his lifetime was his geometrical reworking of Descartes' Principles of Philosophy. In Spinoza's presentation of the opening sections of Part 3 of Descartes' Principles, Spinoza puts a strong emphasis on the hypothetical nature of the explanations of natural phenomena given there. Given the hesitance and ambivalence with which Descartes concedes the hypothetical nature of his explanations in his Principles, Spinoza's unequivocal insistence on hypotheses is striking. Elsewhere, Spinoza endorses hypotheses more directly. In the Treatise on the Emendation of the Intellect, Spinoza describes forming the concept of a sphere by affirming the rotation of a semicircle in thought. He points out that this idea is a true idea of a sphere even if no sphere has ever been produced this way in nature (The Collected Works of Spinoza, Vol. 1, p. 32). Spinoza's view of hypotheses relates to his conception of good definitions (see Section 3b above). If the cause through which one conceives something allows for the deduction of all possible effects, then the cause is an adequate one, and there is no need to fear a false hypothesis. Spinoza appears to differ from Descartes in thinking that the formation of hypotheses, if done properly, is consistent with deductive certainty, and not tantamount to mere probability or moral certainty.

Again in the Treatise on the Emendation of the Intellect, Spinoza speaks in Baconian fashion of identifying "aids" that can assist in the use of the senses and in conducting orderly experiments. Unfortunately, Spinoza's comments regarding these aids are very unclear, which is perhaps explained by the fact that they appear in a work that Spinoza never finished. Nevertheless, it does seem clear that although Spinoza, like Descartes, emphasized the importance of discovering proper principles from which to deduce knowledge of everything else, he was no less aware than Descartes of the need to proceed via observation and experiment in descending from such principles to particulars. At the same time, given his analysis of the inadequacy of sensory images, the collection of empirical data must be governed by rules and rational guidelines, the details of which it does not seem that Spinoza ever worked out.

A valuable perspective on Spinoza's attitude toward experimentation is provided by Letter 6, which Spinoza wrote to Oldenburg with comments on Robert Boyle's experimental research. Among other matters, at issue is Boyle's redintegration (or reconstitution) of niter (potassium nitrate). By heating niter with a burning coal, Boyle separated the niter into a fixed part and a volatile part; he then proceeded to distill the volatile part and recombine it with the fixed part, thereby redintegrating the niter. Boyle's aim was to show that the nature of niter is not determined by a Scholastic substantial form, but rather by the composition of parts, whose secondary qualities (color, taste, smell, and so forth) are determined by primary qualities (size, position, motion, and so forth). While taking no issue with Boyle's attempt to undermine the Scholastic analysis of physical natures, Spinoza criticized Boyle's interpretation of the experiment, arguing that the fixed niter was merely an impurity left over, and that there was no difference between the niter and the volatile part other than a difference of state.

Two things stand out from Spinoza's comments on Boyle. On the one hand, Spinoza exhibits a degree of impatience with Boyle's experiments, charging some of them with superfluity on the grounds either that what they show is evident on the basis of reason alone, or that previous philosophers have already sufficiently demonstrated the point experimentally. In addition, Spinoza's own interpretation of Boyle's experiment is primarily based in a rather speculative, Cartesian account of the mechanical constitution of niter (as Boyle himself points out in response to Spinoza). On the other hand, Spinoza appears eager to show his own fluency with experimental practice, describing no fewer than three different experiments of his own invention to support his interpretation of the redintegration. What Spinoza is critical of is not so much Boyle's use of experiment per se as his relative neglect of relevant rational considerations. For instance, Spinoza at one point criticizes Boyle for trying to show on experimental grounds that secondary qualities depend on primary qualities; Spinoza thought the proposition needed to be demonstrated on rational grounds. While Spinoza acknowledges the importance and necessity of observation and experiment, his emphasis and focus are on the rational framework needed for making sense of experimental findings, without which the results are confused and misleading.

In principle, Leibniz thinks it is not impossible to discover the interior constitution of bodies a priori on the basis of a knowledge of God and the principle of the best according to which He creates the world. Leibniz sometimes remarks that angels could explain to us the intelligible causes through which all things come about, but he seems conflicted over whether such understanding is actually possible for human beings. Leibniz seems to think that while the a priori pathway should be pursued in this life by the brightest minds in any case, its perfection will only be possible in the afterlife. The obstacle to an a priori conception of things is the complexity of sensible effects. In this life, then, knowledge of nature cannot be purely a priori, but depends on observation and experimentation in conjunction with reason.

Apart from perception, we have clear and distinct ideas only of magnitude, figure, motion, and other such quantifiable attributes (primary qualities). The goal of all empirical research must be to resolve phenomena (including secondary qualities) into such distinctly perceived, quantifiable notions. For example, heat is explained in terms of some particular motion of air or some other fluid. Only in this way can the epistemic ideal be achieved of understanding how phenomena follow from their causes in the same way that we know how the hammer stroke after a period of time follows from the workings of a clock (L, 173). To this end, experiments must be carried out to indicate possible relationships between secondary qualities and primary qualities, and to provide a basis for the formulation of hypotheses to explain the phenomena.

Nevertheless, there is an inherent limitation to this procedure. Leibniz explains that if there were people who had no direct experience of heat, for instance, then even if someone were to explain to them the precise mechanical cause of heat, they would still not know the sensation of heat, because they would still not distinctly grasp the connection between bodily motion and perception (L, 285). Leibniz seems to think that human beings will never be able to bridge the explanatory gap between sensations and mechanical causes. There will always be an irreducibly confused aspect of sensible ideas, even if they can be correlated, with a high degree of sophistication, with distinctly perceivable, quantifiable notions. However, this limitation does not mean, for Leibniz, that there is any futility in human efforts to understand the world scientifically. In the first place, experimental knowledge of the composition of things is tremendously useful in practice, even if the composition is not distinctly perceived in all its parts. As Leibniz points out, the architect who uses stones to erect a cathedral need not possess a distinct knowledge of the bits of earth interposed between the stones (L, 175). Secondly, even if our understanding of the causes of sensible effects must remain forever hypothetical, the hypotheses themselves can be more or less refined, and it is proper experimentation that assists in their refinement.

When citing the works of Descartes, the three-volume English translation by Cottingham, Stoothoff, Murdoch, and Kenny was used. For the original language, the edition by Adam and Tannery was consulted.

When citing Spinoza's Ethics, the translation by Curley in A Spinoza Reader was used. The following system of abbreviation was used when citing passages from the Ethics: the first number designates the part of the Ethics (1-5); then, "p" is for proposition, "d" for definition, "a" for axiom, "dem" for demonstration, "c" for corollary, and "s" for scholium. So, 1p17s refers to the scholium of the seventeenth proposition of the first part of the Ethics. For the original language, the edition by Gebhardt was consulted.
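
For readers who want to manipulate such references in software, the scheme can be captured in a few lines of code. The following Python sketch is purely illustrative (a hypothetical helper, not drawn from any cited edition): it expands an abbreviation such as 1p17s into a readable description.

import re

# Hypothetical helper: expand abbreviated Ethics references such as "1p17s"
# according to the scheme described above.
KINDS = {"p": "proposition", "d": "definition", "a": "axiom"}
SUFFIXES = {"dem": "demonstration", "c": "corollary", "s": "scholium"}

def expand_ethics_ref(ref):
    match = re.fullmatch(r"([1-5])(p|d|a)(\d+)(dem|c|s)?", ref)
    if match is None:
        raise ValueError("unrecognized reference: " + ref)
    part, kind, number, suffix = match.groups()
    description = "Ethics, Part %s, %s %s" % (part, KINDS[kind], number)
    if suffix:
        description += ", " + SUFFIXES[suffix]
    return description

print(expand_ethics_ref("1p17s"))    # Ethics, Part 1, proposition 17, scholium
print(expand_ethics_ref("1p11dem"))  # Ethics, Part 1, proposition 11, demonstration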

View post:

Rationalism, Continental | Internet Encyclopedia of Philosophy

National Optical Astronomy Observatory

Image Credit: R. Hahn; Inset: NASA/JPL-Caltech

While most supernovae studied to date brighten and fade over a period of weeks, a handful are known to evolve much more quickly. KSN2015K reached its maximum brightness in 2 days and faded to half that brightness in only 7 days. To explain the rapid evolution, the discovery team, which includes NOAO astronomers Alfredo Zenteno and Chris Smith, has argued that the star bumped into itself in the explosion!
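
As a rough point of reference (a back-of-the-envelope calculation, not taken from the press release): in the astronomical magnitude system, fading to half of peak brightness corresponds to a change of

Δm = 2.5 · log₁₀(2) ≈ 0.75 magnitudes

so a decline to half brightness over 7 days amounts to roughly 0.1 magnitudes per day.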

Read more in Notre Dame Press Release.

More:

National Optical Astronomy Observatory

Voluntaryism – Wikipedia

Voluntaryism (sometimes voluntarism)[2][3] is a philosophy which holds that all forms of human association should be voluntary. The term was coined in this usage by Auberon Herbert in the 19th century and has gained renewed use since the late 20th century, especially among libertarians. Its principal beliefs stem from the non-aggression principle.

Precursors to the voluntaryist movement had a long tradition in the English-speaking world, at least as far back as the Leveller movement of mid-seventeenth-century England. The Leveller spokesmen John Lilburne (c. 1614–1657) and Richard Overton (c. 1600 – c. 1660s) "clashed with the Presbyterian puritans, who wanted to preserve a state-church with coercive powers and to deny liberty of worship to the puritan sects."[4] The Levellers were nonconformist in religion and advocated for the separation of church and state. The church, to their way of thinking, was a voluntary association of equals, and furnished a theoretical and practical model for the civil state. If it was proper for their church congregations to be based on consent, then it was proper to apply the same principle of consent to its secular counterpart. For example, the Leveller 'large' Petition of 1647 contained a proposal "that tythes and all other inforced maintenances, may be for ever abolished, and nothing in place thereof imposed, but that all Ministers may be payd only by those who voluntarily choose them, and contract with them for their labours."[4] The Levellers also held to the idea of self-proprietorship.[4]

In 1843, Parliament considered legislation which would require part-time compulsory attendance at school of those children working in factories. The effective control over these schools was to be placed in the hands of the established Church of England, and the schools were to be supported largely from funds raised out of local taxation. Nonconformists, mostly Baptists and Congregationalists, became alarmed. They had been under the ban of the law for more than a century. At one time or another they could not be married in their own churches, were compelled to pay church rates against their will, and had to teach their children underground for fear of arrest. They became known as voluntaryists because they consistently rejected all state aid and interference in education, just as they rejected the state in the religious sphere of their lives. Some of the most notable voluntaryists included the young Herbert Spencer (1820–1903), who published his first series of articles, "The Proper Sphere of Government," beginning in 1842; his supporter Auberon Herbert, who coined the modern usage of "Voluntaryist" and established its current definition; Edward Baines, Jr. (1800–1890), editor and proprietor of the Leeds Mercury; and Edward Miall (1809–1881), Congregationalist minister and founder-editor of The Nonconformist (1841), who wrote Views of the Voluntary Principle (1845).

The educational voluntaryists wanted free trade in education, just as they supported free trade in corn or cotton. Their concern for liberty "can scarcely be exaggerated." They believed that "government would employ education for its own ends" (teaching habits of obedience and indoctrination), and that government-controlled schools would ultimately teach children to rely on the State for all things. Baines, for example, noted that "[w]e cannot violate the principles of liberty in regard to education without furnishing at once a precedent and inducement to violate them in regard to other matters." Baines conceded that the then current system of education (both private and charitable) had deficiencies, but he argued that freedom should not be abridged on that account. Should freedom of the press be compromised because we have bad newspapers? "I maintain that Liberty is the chief cause of excellence; but it would cease to be Liberty if you proscribed everything inferior."[5] The Congregational Board of Education and the Baptist Voluntary Education Society are usually given pride of place among the Voluntaryists.[6]

In southern Africa, voluntaryism in religious matters was an important part of the liberal “Responsible Government” movement of the mid-19th century, along with support for multi-racial democracy and an opposition to British imperial control. The movement was driven by powerful local leaders such as Saul Solomon and John Molteno, and when it briefly gained power it disestablished the state-supported churches in 1875.[7][8]

Although there was never an explicitly voluntaryist movement in America until the late 20th century, earlier Americans did agitate for the disestablishment of government-supported churches in several of the original thirteen states. These conscientious objectors believed mere birth in a given geographic area did not mean that one consented to membership or automatically wished to support a state church. Their objection to taxation in support of the church was two-fold: taxation not only gave the state some right of control over the church; it also represented a way of coercing the non-member or the unbeliever into supporting the church. In New England, where both Massachusetts and Connecticut started out with state churches, many people believed that they needed to pay a tax for the general support of religion for the same reasons they paid taxes to maintain the roads and the courts.

There were at least two well-known Americans who espoused voluntaryist causes during the mid-19th century. Henry David Thoreau's (1817–1862) first brush with the law in his home state of Massachusetts came in 1838, when he turned twenty-one. The State demanded that he pay the one-dollar ministerial tax, in support of a clergyman, "whose preaching my father attended but never I myself."[9] When Thoreau refused to pay the tax, it was probably paid by one of his aunts. In order to avoid the ministerial tax in the future, Thoreau had to sign an affidavit attesting he was not a member of the church.

Thoreau’s overnight imprisonment for his failure to pay another municipal tax, the poll tax, to the town of Concord was recorded in his essay “Resistance to Civil Government,” first published in 1849. It is often referred to as “On the Duty of Civil Disobedience” because in it he concluded that government was dependent on the cooperation of its citizens. While he was not a thoroughly consistent voluntaryist, he did write that he wished never to “rely on the protection of the state,” and that he refused to tender it his allegiance so long as it supported slavery. He distinguished himself from “those who call[ed] themselves no-government men”: “I ask for, not at once no government, but at once a better government.” This has been interpreted as a gradualist, rather than minarchist, stance,[10] given that he also opened the essay by stating his belief that “That government is best which governs not at all,” a point which all voluntaryists heartily embrace.[9]

One of those “no-government men” was William Lloyd Garrison (1805–1879), famous abolitionist and publisher of The Liberator. Nearly all abolitionists identified with the self-ownership principle: that each person, as an individual, owned and should control his or her own mind and body free of outside coercive interference. The abolitionists called for the immediate and unconditional cessation of slavery because they saw slavery as man-stealing in its most direct and worst form. Slavery reflected the theft of a person’s self-ownership rights. The slave was a chattel with no rights of his or her own. The abolitionists realized that each human being, without exception, was naturally invested with sovereignty over himself or herself, and that no one could exercise forcible control over another without breaching the self-ownership principle. Garrison, too, was not a pure voluntaryist, for he supported the federal government’s war against the States from 1861 to 1865.

Another was Charles Lane (1800–1870). He was friendly with Amos Bronson Alcott, Ralph Waldo Emerson, and Thoreau. Between January and June 1843, a series of nine letters he penned was published in such abolitionist papers as The Liberator and The Herald of Freedom. The title under which they were published was “A Voluntary Political Government,” and in them Lane described the state in terms of institutionalized violence and referred to its “club law, its mere brigand right of a strong arm, [supported] by guns and bayonets.” He saw the coercive state as on a par with “forced” Christianity: “Everyone can see that the church is wrong when it comes to men with the [B]ible in one hand, and the sword in the other. Is it not equally diabolical for the state to do so?” Lane believed that governmental rule was tolerated by public opinion only because it was not yet recognized that all the true purposes of the state could be carried out on the voluntary principle, just as churches could be sustained voluntarily. Reliance on the voluntary principle could only come about through “kind, orderly, and moral means” that were consistent with the totally voluntary society he was advocating: “Let us have a voluntary State as well as a voluntary Church, and we may possibly then have some claim to the appellation of free men.”[11]

Although use of the label “voluntaryist” waned after the death of Auberon Herbert in 1906, its use was renewed in 1982, when George H. Smith, Wendy McElroy, and Carl Watner began publishing The Voluntaryist magazine.[12] George Smith suggested use of the term to identify those libertarians who believed that political action and political parties (especially the Libertarian Party) were antithetical to their ideas. In their “Statement of Purpose” in Neither Bullets nor Ballots: Essays on Voluntaryism (1983), Watner, Smith, and McElroy explained that voluntaryists were advocates of non-political strategies to achieve a free society. They rejected electoral politics “in theory and practice as incompatible with libertarian goals,” and argued that political methods invariably strengthen the legitimacy of coercive governments. In concluding their “Statement of Purpose” they wrote: “Voluntaryists seek instead to delegitimize the State through education, and we advocate the withdrawal of the cooperation and tacit consent on which state power ultimately depends.”

The voluntaryist philosopher John Zube began writing a series of articles advocating voluntaryism in the 1980s.

Go here to see the original:

Voluntaryism – Wikipedia