

What is Mesothelioma? Learn About Causes, Survival Rates …

What is Mesothelioma?

Mesothelioma is a rare and aggressive disease that is known to develop over a period of 20 to 40 years. In many cases, the disease is not diagnosed until the end stage, when it is more difficult to treat. There is no cure for mesothelioma, but advanced medical treatments have allowed patients to live longer with the disease. Up to 3,000 people a year are diagnosed with mesothelioma.

Malignant mesothelioma is cancer that forms in the mesothelium, the thin layer of cells that surrounds major organs. Mesothelioma is almost always caused by exposure to asbestos. There are three common locations where mesothelioma forms: the lining of the lungs (pleural), the lining of the abdomen (peritoneal) and the lining of the heart (pericardial).

In general, the average life expectancy for mesothelioma patients is between 12 and 21 months. Some 40 percent of patients survive about a year after a diagnosis, and about 20 percent live more than two years following a diagnosis. While rare, there are some patients who live longer than five years with the disease.

The patient's age at diagnosis, general health and access to treatment specialists are among the many factors that go into determining a mesothelioma patient's life expectancy. Other factors that play a key role are the location of the disease (pleural mesothelioma patients have better survival rates than patients with other disease locations), the cell types involved (epithelial cells respond better to treatment than other types) and the stage (earlier-stage disease is more responsive to treatment). Experts warn that life expectancy estimates vary greatly by patient and individual circumstances.

The primary cause of any form of mesothelioma is exposure to the thin, fibrous mineral called asbestos. When asbestos fibers are inhaled, they travel through the lungs to reach the pleura, where they cause inflammation and scarring that lead to pleural mesothelioma. In cases of peritoneal and pericardial mesothelioma, researchers suspect asbestos fibers are ingested, travel through the lymph system or are absorbed through the skin to irritate surrounding cells. In all cases, the irritation damages cell DNA, causing cells to grow rapidly and abnormally and to form tumors.

Small studies have indicated some people are genetically predisposed to developing mesothelioma because they are more susceptible to the dangers of asbestos. Researchers are also reviewing a link between mesothelioma and Simian virus 40 (SV40), a DNA virus that contaminated early polio vaccines. There has been no definitive link between the virus and mesothelioma.

Physicians determine the stage of disease by performing numerous tests including X-rays, CT (CAT) scans, MRIs, PET scans and biopsies. It is important to determine where the cancer started and if it has spread from the point of origin for a correct disease staging. An accurate assessment of disease stage is crucial to successful treatment options.

Most physicians use a universally accepted tumor staging system to stage the disease. This allows physicians to communicate about a single patient and devise the best treatment plan. The TNM system looks at the size and growth of tumors (T), the involvement of lymph nodes (N) and the metastasis, or spread, of the disease (M). From there, the cancer is staged, with stages I and II representing early disease and stages III and IV representing more advanced disease. Most cases of mesothelioma are diagnosed in the later stages, making treatment difficult.
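
To make the structure of the TNM system easier to picture, the short Python sketch below records a T/N/M finding and groups it into stages I through IV. This is an illustrative sketch only: the class names and the grouping rules are hypothetical placeholders, not the clinical staging criteria actually used for mesothelioma.

    from dataclasses import dataclass
    from enum import Enum

    class Stage(Enum):
        I = 1    # stages I and II: early disease
        II = 2
        III = 3  # stages III and IV: more advanced disease
        IV = 4

    @dataclass
    class TNMFinding:
        t: int  # size and growth of the primary tumor (T)
        n: int  # involvement of lymph nodes (N)
        m: int  # metastasis: 0 = no distant spread, 1 = distant spread (M)

        def stage(self) -> Stage:
            """Group a T/N/M finding into a coarse stage (hypothetical rules)."""
            if self.m == 1:
                return Stage.IV
            if self.t >= 3 or self.n >= 2:
                return Stage.III
            if self.t == 2 or self.n == 1:
                return Stage.II
            return Stage.I

    # Example: a small tumor with no nodal involvement or metastasis.
    print(TNMFinding(t=1, n=0, m=0).stage())  # Stage.I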

About 55 percent of mid- to late-stage mesothelioma patients live six months after a diagnosis, some 40 percent survive the first year after a diagnosis and about 9 percent survive five years or longer. Overall survival depends on a number of factors, including the stage and location of the disease, the patient's age and general health, and access to treatment specialists. Long-term survivors credit lifestyle changes, alternative medicine and treatment from mesothelioma specialists as contributing factors to their success.

A recent study that looked at 20 years of survivor information, from 1992 to 2012, found pleural and peritoneal survivorship was on the rise. The study found recent advances in treatment, including hyperthermic intraperitoneal chemotherapy (HIPEC) and cytoreductive surgery, appear to have increased survival rates in peritoneal mesothelioma patients. The study's author suggested genetics, various treatment modalities and gene-environment interactions might also play a part in patient longevity.

The optimal treatment approach for most mesothelioma patients is multimodal therapy, which combines surgery, chemotherapy and radiation. If successful, this approach eliminates diseased tissue and allows for palliative care. Your treatment plan will depend on your diagnosis, disease stage and overall health.

For decades, all branches of the military required asbestos be used to protect service members from heat, fire and chemical threats. It was widely used in barracks, offices, vehicles and vessels. Over a period of 50 years, some 5 million veterans were exposed to asbestos in shipbuilding operations alone. About 30 percent of mesothelioma patients are U.S. military veterans. Workers in occupations such as carpentry, construction, roofing, auto mechanics and milling are also at risk of exposure to dangerous levels of asbestos.

It is estimated that more than 300 asbestos products were used on military installations and in military applications between the early 1930s and the late 1970s. More recently, soldiers serving in Iraq, Afghanistan and elsewhere in the Middle East may have been exposed to airborne asbestos. Companies that produced these products concealed the dangers to put profits ahead of the safety and well-being of our troops.

Gender, age, severity of symptoms, level of asbestos exposure, stage of disease and disease cell type play a significant role in the overall prognosis for mesothelioma patients. In addition, external factors including diet, stress level and general health play a role. The average pleural mesothelioma patient with late-stage disease survives about 12 months after a diagnosis, but those treated with surgery and radiation may extend their prognoses by some 28 months. Peritoneal mesothelioma patients who are treated with heated intraperitoneal chemotherapy (HIPEC) outlive their prognoses by 24 months to 7 years.

Many patients are able to improve their prognosis by seeking treatment from a qualified mesothelioma specialist. Doctors trained and experienced in mesothelioma treatment have specialized skills, education and access to crucial information that can have a positive effect on long-term health.

Mesothelioma: How Has Paul Kraus Survived For Over 20 Years?

Looking for mesothelioma information? The following section provides extensive information about mesothelioma, including symptoms, treatment, and more.

Mesothelioma is a rare form of cancer that develops from cells of the mesothelium, the lining that covers many internal organs. There are approximately 2,000 cases of mesothelioma diagnosed in the United States each year. Mesothelioma is caused by exposure to asbestos, a naturally-occurring carcinogen that was put into thousands of industrial and consumer products even after many companies knew that it was dangerous.

Although rare, mesothelioma cancer is not a death sentence. The world's longest-living mesothelioma survivor wrote a free book to share helpful insights, resources, and his survival experience.

Mesothelioma is a rare form of cancer, known as the asbestos-caused cancer, that develops from cells of the mesothelium, the lining that covers many of the internal organs of the body.

The main purpose of the mesothelium is to produce a lubricating fluid between tissues and organs. This fluid provides a slippery and protective surface to allow movement.

For example, it allows the lungs to expand and contract smoothly inside the body each time you take a breath. When the cells of the mesothelium turn cancerous, they become mesothelioma; that's where the name comes from.

Mesothelioma is a rare disease, and there are only approximately 2,000 cases diagnosed in the United States every year. There are many more cases diagnosed throughout the world, especially in Australia and the U.K., where large amounts of asbestos were used.

Number of cases per year in other countries:

Discover More Statistics

There are four types of malignant mesothelioma: pleural, peritoneal, pericardial and testicular. Pleural mesothelioma affects the outer lining of the lungs and chest wall and represents about 75% of all cases. Peritoneal mesothelioma affects the abdomen and represents about 23%. Cases in the lining of the testes and the heart represent about 1% each.

Pleural mesothelioma affects the lining of the lungs

When the pleural lining around the lungs and chest wall is involved in this cancer, it is called pleural mesothelioma. There are actually two layers of tissue that comprise the pleural lining. The outer layer, the parietal pleura, lines the entire inside of the chest cavity. The inner layer is called the visceral pleura, and it covers the lungs.

Mesothelioma usually affects both layers of the pleura. Often it forms in one layer of the pleura and invades the other layer. The cancer may form many small tumors throughout this tissue.

Learn More About Pleural Mesothelioma

The Peritoneal Cavity surrounds the liver, stomach, intestines and reproductive organs.

When the peritoneum, the protective membrane that surrounds the abdomen, is involved in this cancer, it is called peritoneal mesothelioma. Just as with pleural mesothelioma, there are two layers of tissue involved: the parietal layer covers the abdominal cavity, while the visceral layer surrounds the stomach, liver and other organs.

The cancer often forms many small tumors throughout the tissue. One doctor has described it as if someone took a pepper shaker and scattered the pepper over the tissue.

Learn More About Peritoneal Mesothelioma

In addition to the different locations within the body, there are also different cell types. These types are all considered mesothelioma, but they can affect the patient's prognosis.

The three mesothelioma cell types are: epithelioid, sarcomatoid and biphasic.

Epithelioid mesothelioma cells are the most common type of mesothelioma cell and have the best prognosis of the three cell types. Notice the dark purple, elongated, egg-shaped cells amongst the healthy pink-colored tissue.

Sarcomatoid mesothelioma cells are the rarest of the three cell types and tend to be more aggressive than epithelioid cells. Notice the dark purple nodules amongst the healthy light purple-colored tissue.

Biphasic mesothelioma cells are mixtures of both cell types (epithelioid and sarcomatoid) and usually carry a prognosis that reflects the dominant cell type.

More Symptoms

Mesothelioma is caused by exposure to asbestos, and it is therefore considered the asbestos-caused cancer.

Asbestos has been in use since ancient times, but after the Industrial Revolution its use became widespread; it was put into thousands of industrial and consumer products all over the world, even after many companies knew that it was dangerous. Construction materials, automotive parts and household products such as hair dryers and oven mitts contained asbestos in the past.

Today, asbestos has been outlawed in most places around the world. However, it has not been outlawed in the United States and is still found in millions of homes and public buildings, such as schools, offices and parking garages.

Learn More About Causes

Asbestos under the microscope looks like hundreds of tiny swords

Asbestos is actually a naturally occurring mineral found throughout the world. It was called the magic mineral because it is resistant to heat and corrosion. Also, it is a fiber so it can be woven into other materials.

Asbestos is composed of millions of sharp microscopic fibers. These fibers are so small that the body has difficulty filtering them out. This means that if you are around airborne asbestos, you may inhale it or ingest it. This is known as asbestos exposure.

The actual process as to how asbestos causes mesothelioma is still being investigated. Most scientists believe that when the small sharp fibers are ingested or inhaled they cause cell damage which can cause chronic inflammation.

This inflammation can then set the stage for disease after many years or even decades. Some scientists believe that a person's immune system may actually help prevent the cancer, even if that person is exposed to asbestos.

Find Out More On Asbestos

Since asbestos causes this rare disease, how do people get exposed to asbestos? While asbestos was in thousands of products, workers in some professions had more exposure to this carcinogen than others.

Examples of occupations and groups exposed to asbestos include: Navy veterans, construction trades such as electricians, mechanics, and plumbers, people working in power houses and power plants, firefighters, and refinery workers. Individuals in these professions often had a multitude of asbestos-containing products on their various job sites.

Most asbestos containing products were removed voluntarily by the late 1970s. However, because there is no comprehensive ban on asbestos in the U.S. and because of the long latency period, people are still being diagnosed with mesothelioma today.

Learn More About Occupational Asbestos Exposure

Old Advertisement for Asbestos Sheets

The history of asbestos in the United States and other industrialized countries is a sad story of corporate greed. Companies that produced asbestos containing products saw their workers becoming sick with lung scarring, asbestosis, and cancer nearly 100 years ago.

Some companies even brought in researchers and scientists to better understand the health impact of asbestos. Once it was shown that their magic mineral was toxic to human beings, the industry faced a dilemma.

Should they protect workers, warn consumers, notify public health officials, and most importantly, phase out this dangerous mineral? Their answer was no.

Instead, industry did just the opposite. They warned no one, kept their knowledge about asbestos secret and continued to use it for decades! Not until the 1960s did independent researchers like Dr. Irving Selikoff of the Mt. Sinai School of Medicine begin to connect asbestos exposure to disease.

By then hundreds of thousands of men, women, and children were already exposed to this deadly mineral. The EPA would ban asbestos in 1989. However, the asbestos industry would sue the EPA and win.

In 1991 the ban was lifted. Even today, there is no comprehensive asbestos ban in the United States. Sad but true.

(Source: Asbestos: Medical and Legal Aspects by Barry Castleman)

Asbestos fibers cling to the clothing of workers and can be transferred to others, such as children or spouses.

People exposed directly to asbestos are said to have primary exposure. Sometimes a person with primary exposure will transfer asbestos fibers from their clothes to the clothes of another person. The person who receives this transfer of asbestos is said to have secondary exposure.

One example of secondary exposure is called the deadly hug. Sadly, the deadly hug happens when an adult comes home from work with asbestos on their clothes and hugs their son or daughter, unknowingly transferring the dangerous fibers to their child. There have been many cases of adults being diagnosed with mesothelioma whose only exposure to asbestos came from their time as a child.

Read About Secondary Exposure

Mesothelioma has a long latency period, which is the time from asbestos exposure to diagnosis of the cancer. This period can range anywhere from 20 to 50 years. There are different theories as to why there is such a long latency period and why most people exposed to asbestos do not get mesothelioma.

One theory suggests that there may be other variables that play a role. For example, some doctors believe that the condition or competency of a person's immune system could determine whether asbestos in their body leads to cancer.

Other possibilities include a person's genes and diet.

When doctors suspect a patient has mesothelioma, they will initiate a work-up in order to make a diagnosis. This work-up may include imaging scans, biopsies, pathology exams, blood tests and staging.

Diagnosing Mesothelioma

Various types of scans may be used to determine if there are signs of tumors or other abnormalities. These scans may include X-rays, CT scans, MRIs, or PET scans.

More on Imaging

If scans reveal what doctors believe may be a cancer then a biopsy may be suggested. A biopsy is a procedure where doctors remove a small piece of the suspected tumor tissue from the patients body.

More on Biopsies

Blood tests and biomarkers may sometimes be used to determine if mesothelioma is present in the body. While these tests are helpful, they are not considered as important as the biopsy, which is considered the gold standard.

More on Biomarkers

The biopsy material will then be given to a pathologist. A pathologist will use special stains and other tests to determine if there is cancer and identify exactly what type of cancer was removed from the patient.

More on Pathology Exams

If mesothelioma is diagnosed, doctors may stage the disease. Over the years a variety of staging systems have been used. The one used most frequently today groups the disease into localized (only in the mesothelium) or advanced (spread outside the mesothelium).

More on Staging

The prognosis of mesothelioma, as with any other cancer, depends on a number of variables, including the location and stage of the disease, the cell type involved, and the patient's age and general health.

More on Prognosis

A doctor specializing in mesothelioma can properly diagnose you and determine the best course of treatment. Find a mesothelioma specialist or doctor near you.

The treatments for mesothelioma can be divided into three paths: Conventional Therapies, Clinical Trials, and Alternative Modalities.

Conventional therapies include chemotherapy, radiation therapy and surgery. The standard chemo drugs used are Alimta (pemetrexed) and cisplatin (or carboplatin). They are often prescribed for the various types of mesothelioma, regardless of location. Both chemo and radiation therapy are known as cytotoxic or cell killing therapies. They work indiscriminately, killing both healthy and cancer cells. This is the reason that they can have severe side effects.

Learn About Treatment

The standard of care in many hospitals is to treat peritoneal mesothelioma with surgery and HIPEC. HIPEC stands for hyperthermic intraperitoneal perioperative chemotherapy which basically means flushing the surgical area with heated chemotherapy during the surgical procedure. The obvious advantage of this approach is that it enables doctors to put the chemo in exactly the place it needs to be.

Of all the conventional treatments available, surgery is generally considered the most effective. For pleural mesothelioma, there are various types of surgical procedures, including lung-sparing surgery (also called pleurectomy/decortication, or P/D) and extrapleural pneumonectomy (also called EPP).

Pleurectomy/decortication surgery is a two-part surgery that removes the lining surrounding one lung (pleurectomy), then removes any visible cancer seen growing inside the chest cavity (decortication). The advantage of P/D, or lung-sparing surgery, is exactly what the name implies: a lung is not removed.

An extrapleural pneumonectomy (EPP) is a much more invasive surgery than PD. An EPP involves removing a lung, the diaphragm, portions of the chest lining and heart lining, and nearby lymph nodes.

Numerous studies have been performed comparing the prognosis after pleurectomy/decortication surgery versus extrapleural pneumonectomy. While there is no consensus on the subject, the latest reports suggest that P/D may be a better choice for many patients because survival is generally equivalent to EPP and P/D is less invasive and therefore easier to tolerate.

There are also other surgical procedures used to treat pleural effusion. Pleural effusion is the buildup of excess fluid in the pleural space between the visceral and parietal linings of the lungs. Examples of these procedures include pleurodesis and thoracentesis.

More on Surgery

Clinical trials are treatments that are still being tested. These treatments may include chemotherapy or other more innovative approaches based on immune therapy, gene therapy or other biological approaches. One example of new treatments being tried in mesothelioma involves the use of monoclonal antibodies, which are essentially an immune system therapy that tries to use antibodies to target cancer cells. The National Cancer Institute indexes clinical trials offered throughout the country.

Discover Clinical Trials

Alternative modalities include a large number of approaches such as intravenous vitamin therapy, herbs and Traditional Chinese Medicine, cannabis oil, dietary approaches, and mind-body medicine. It is important to note that while none of these modalities are FDA approved, there are a number of long-term mesothelioma survivors who have used them, including Paul Kraus.

Read About Alternative Treatments

Mesothelioma is not the only disease caused by asbestos. Asbestosis (essentially scarred lung tissue), pleural plaques and some lung cancers can also be caused by asbestos. Compensation may also be available to victims of these diseases. Treatments vary by condition.

Learn About Other Asbestos Diseases

Factors such as multi-drug resistance, therapy related side effects, and disease recurrence after therapy have all been implicated as problems that prevent successful treatment of malignant mesothelioma. However, recent scientific evidence suggests that some common dietary phytochemicals, such as curcumin and quercetin, may have the ability to regulate microRNAs associated with malignant mesothelioma and possibly inhibit the cancer by regulating the expression of various genes which are known to be aberrant in malignant mesothelioma.

Mesothelioma Life Expectancy | How Long Do Patients Live?

How Can I Improve My Mesothelioma Life Expectancy?

Being proactive about your health is a great first step toward improving your mesothelioma life expectancy after a diagnosis.

In addition to seeking traditional mesothelioma treatments immediately, there are a few steps you can take to improve your mesothelioma life expectancy.

The first step you should take is to seek legal advice so you can begin pursuing the compensation you deserve to afford the treatment you need.

Contact us today to learn about your options for mesothelioma compensation.

There are other options to explore as you work to improve your mesothelioma life expectancy.

Malignant mesothelioma, caused by exposure to airborne asbestos fibers, is an incurable cancer involving the lining of the lung, abdomen, or heart.

The latency period, the time between asbestos exposure and diagnosis, can be decades long. For many patients diagnosed 15 to 60 years after their initial exposure to asbestos, the disease is already in an advanced phase when they begin to suffer symptoms of shortness of breath and chest pain.

At this late stage of diagnosis, the average survival time is less than a year.

Although there are many factors doctors look at to determine a patient's prognosis and mesothelioma life expectancy, doctors, patients, and cancer advocates are now emphasizing the importance of early detection. They all agree that in order to increase the effectiveness of treatment options leading to an increased survival time, early detection is critical. In fact, the American Cancer Society states that if you can't prevent cancer, the next best thing you can do to protect your health is to detect it early.

According to the American Thoracic Society, malignant mesothelioma is a fatal disease with a median survival time of less than 12 months from first signs of illness to death.

However, some studies have shown that among patients whose disease is diagnosed early and treated aggressively, about half can expect a mesothelioma life expectancy of two years, and one-fifth will have a mesothelioma life expectancy of five years.

As a comparison, for patients whose mesothelioma is advanced, only five percent can expect to live another five years.

Early diagnosis of the cancer often means that the cancer will be localized, with the cancer cells found only at the body site where the cancer originated.

The localized cancer would be identified as Stage 1 and can involve a surgically removable tumor. Once the cancer cells have spread beyond that original location, the mesothelioma is considered advanced and surgery is often no longer an option.

The importance of early diagnosis of this cancer cannot be overemphasized. Treating a limited area of cancer is easier, and includes more treatment options, than trying to treat cancer that has spread, or metastasized, to several sites or throughout the body.

Mesothelioma is typically diagnosed within three to six months of the first visit to a doctor with complaints about breathing problems or chest and abdominal pain.

Anyone who has worked around asbestos is urged to see a physician for screening for malignant cancer. Screening methods are advancing, and various blood tests now exist that may identify mesothelioma.

The blood tests focus on proteins that are released into the bloodstream by cells. One test checks for a protein known as SMRP, or soluble mesothelin-related peptide.

The test measures the amount of SMRP in a person's blood. Abnormally high levels may indicate the presence of mesothelioma.
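
As a rough illustration of how a threshold-based biomarker reading might be flagged, the short Python sketch below compares a measured SMRP level against a cutoff. The cutoff value and the function name are hypothetical placeholders for illustration, not a validated clinical threshold, and an elevated result is only a prompt for further work-up, not a diagnosis.

    # Hypothetical cutoff; real reference ranges depend on the specific assay
    # and laboratory and are not given in this article.
    SMRP_CUTOFF_NMOL_L = 1.5

    def smrp_flag(measured_nmol_l: float, cutoff: float = SMRP_CUTOFF_NMOL_L) -> str:
        """Flag an SMRP reading relative to a cutoff; not a diagnosis by itself."""
        if measured_nmol_l > cutoff:
            return "abnormally high: may warrant further work-up"
        return "within range"

    print(smrp_flag(2.3))  # abnormally high: may warrant further work-up
    print(smrp_flag(0.9))  # within range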

Early diagnosis can improve life expectancy.

However, the following factors are all important when assessing life expectancy after a mesothelioma diagnosis:

Researchers at the Mayo Clinic add quality of life prior to diagnosis to the list of factors associated with increased survival. The researchers found that lung cancer patients who rated their quality of life highest lived significantly longer than their peers.

The American Cancer Society encourages cancer survivors to focus on healthy behaviors including exercise, diet, and not smoking to limit the risk of mesothelioma recurrence and for improved quality of life.

The younger the better. Many studies report that younger, fit patients have a higher mesothelioma life expectancy than their older counterparts when diagnosed with cancer.

Younger patients are generally healthier overall, which points to encouraging Americans to live a healthy lifestyle in order to combat mesothelioma.

The primary types of mesothelioma are pleural, involving the lung, and peritoneal, involving the abdomen.

Pleural mesothelioma patients typically have a shorter mesothelioma life expectancy than peritoneal patients. According to statistics, 80 percent of the mesothelioma cases are pleural, with close to 20 percent peritoneal cases.

Pericardial, which occurs in the lining around the heart, is extremely rare, representing less than one percent of all mesothelioma cases.

There are three types of cells that appear in mesothelioma: epithelioid, sarcomatoid, and biphasic.

Epithelial cells

These cells protect and surround organs. When they are invaded by mesothelioma, they form tumors that can be removed with surgery or treated with radiation, chemotherapy, or a combination of the three.

Mesothelioma cases often include a malignant epithelial tumor. There are 20 kinds of epithelial mesothelioma cells. Some are associated with a specific type of mesothelioma. Others are found in all forms of the disease.

Sarcomatoid cells

Sarcomatoid cells are made up of cancerous cells that can include epithelial cells. They are hard to tell apart from healthy tissues. They spread quickly and are the most difficult to treat.

There are three kinds of sarcomatoid cells associated with mesothelioma: transitional, lymphohistiocytoid, and desmoplastic. They are found in all three types of mesothelioma.

Biphasic cells

Biphasic cells are the second most common cell type found in mesothelioma patients. These cells are most often present in pleural patients.

They may include elements of epithelial cells and sarcomatoid cells; for this reason, treatment and patient survival time frames will vary. Treatment is also based on the stage, size and location of the tumor.

Unlike many other predominantly pulmonary cancers, pleural mesothelioma has no known causal link to cigarette smoking.

However, statistics show that smoking accounts for 90 percent of lung cancer cases and 85 percent of head and neck cancers. Smoking cessation is one of the primary ways to prevent lung disease.

The effectiveness of treatment for mesothelioma patients can be complicated if patients continue to smoke.

Patients with few to no additional health complications may have a longer survival than those with other health issues such as diabetes and high blood pressure.

Patients with other chronic conditions must carefully monitor their health and medications to prevent complications from arising. When they are then diagnosed with mesothelioma, it is important that the medical team and patients work closely together to monitor drug interactions and proper nutrition.

The American Cancer Society reports the following median survival time of patients with pleural mesothelioma who were treated with surgery to cure the cancer.

The numbers include the relative five-year survival rate and median survival. The ACS adds that survival times tend to be longer for patients treated with surgery.

Patients who are not eligible for surgery often have cancer that has metastasized.

Shattered Citadel: WW3 Sci Fi

They Bleed Like Us

Mankind, still recovering from WW3, goes to war against a hostile alien race. An SC story.

Battle of Taiwan

The SC video that started it all. The spark that started WW3, the invasion of Taiwan.

SC Pic of the Week

A new creator-drawn picture or map based on SC, posted every Sunday.

New SC Website Redesign

The old SC website is now gone and a new one was created. It took forever, but thank you for sticking with us!

Shattered Citadel on WATTPAD

SC is now on Wattpad. This is the direct link to our page there so you can read SC on the go. For FREE!

Support SC on Patreon

SC isn’t my primary focus. I am starting a Patreon to hopefully give me more time to work on SC and pay for its many costs.

Timeline of WW3 (Complete)

The full WW3 timeline in four videos. A highly detailed breakdown of World War 3, set in the SC sci fi universe.

What Does Free Speech Mean? | United States Courts

Among other cherished values, the First Amendment protects freedom of speech. The U.S. Supreme Court often has struggled to determine what exactly constitutes protected speech. The following are examples of speech, both direct (words) and symbolic (actions), that the Court has decided are either entitled to First Amendment protections, or not.

The First Amendment states, in relevant part, that:

Congress shall make no law…abridging the freedom of speech.

Wage slavery – Wikipedia

Wage slavery is a term used to draw an analogy between slavery and wage labor by focusing on similarities between owning and renting a person. It is usually used to refer to a situation where a person’s livelihood depends on wages or a salary, especially when the dependence is total and immediate.[1][2]

The term “wage slavery” has been used to criticize exploitation of labour and social stratification, with the former seen primarily as unequal bargaining power between labor and capital (particularly when workers are paid comparatively low wages, e.g. in sweatshops)[3] and the latter as a lack of workers’ self-management, fulfilling job choices and leisure in an economy.[4][5][6] The criticism of social stratification covers a wider range of employment choices bound by the pressures of a hierarchical society to perform otherwise unfulfilling work that deprives humans of their “species character”[7] not only under threat of starvation or poverty, but also of social stigma and status diminution.[8][9][10]

Similarities between wage labor and slavery were noted as early as Cicero in Ancient Rome, such as in De Officiis.[11] With the advent of the Industrial Revolution, thinkers such as Pierre-Joseph Proudhon and Karl Marx elaborated the comparison between wage labor and slavery,[12][13] while Luddites emphasized the dehumanization brought about by machines. Before the American Civil War, Southern defenders of African American slavery invoked the concept of wage slavery to favorably compare the condition of their slaves to workers in the North.[14][15] The United States abolished slavery after the Civil War, but labor union activists found the metaphor useful and appropriate. According to Lawrence Glickman, in the Gilded Age “[r]eferences abounded in the labor press, and it is hard to find a speech by a labor leader without the phrase”.[16]

The introduction of wage labor in 18th-century Britain was met with resistance, giving rise to the principles of syndicalism.[17][18][19][20] Historically, some labor organizations and individual social activists have espoused workers’ self-management or worker cooperatives as possible alternatives to wage labor.[5][19]

The view that working for wages is akin to slavery dates back to the ancient world.[22] In ancient Rome, Cicero wrote that “whoever gives his labor for money sells himself and puts himself in the rank of slaves”.[11]

In 1763, the French journalist Simon Linguet published an influential description of wage slavery:[13]

The slave was precious to his master because of the money he had cost him… They were worth at least as much as they could be sold for in the market… It is the impossibility of living by any other means that compels our farm labourers to till the soil whose fruits they will not eat and our masons to construct buildings in which they will not live… It is want that compels them to go down on their knees to the rich man in order to get from him permission to enrich him… what effective gain [has] the suppression of slavery brought [him?] He is free, you say. Ah! That is his misfortune… These men… [have] the most terrible, the most imperious of masters, that is, need…. They must therefore find someone to hire them, or die of hunger. Is that to be free?

The view that wage work has substantial similarities with chattel slavery was actively put forward in the late 18th and 19th centuries by defenders of chattel slavery (most notably in the Southern states of the United States) and by opponents of capitalism (who were also critics of chattel slavery).[9][23] Some defenders of slavery, mainly from the Southern slave states, argued that Northern workers were “free but in name the slaves of endless toil” and that their slaves were better off.[24][25] This contention has been partly corroborated by some modern studies that indicate slaves’ material conditions in the 19th century were “better than what was typically available to free urban laborers at the time”.[26][27] In this period, Henry David Thoreau wrote that “[i]t is hard to have a Southern overseer; it is worse to have a Northern one; but worst of all when you are the slave-driver of yourself”.[28]

Some abolitionists in the United States regarded the analogy as spurious.[29] They believed that wage workers were “neither wronged nor oppressed”.[30] Abraham Lincoln and the Republicans argued that the condition of wage workers was different from slavery as laborers were likely to have the opportunity to work for themselves in the future, achieving self-employment.[31] The abolitionist and former slave Frederick Douglass initially declared “now I am my own master”, upon taking a paying job.[32] However, later in life he concluded to the contrary, saying “experience demonstrates that there may be a slavery of wages only a little less galling and crushing in its effects than chattel slavery, and that this slavery of wages must go down with the other”.[33][34] Douglass went on to speak about these conditions as arising from the unequal bargaining power between the ownership/capitalist class and the non-ownership/laborer class within a compulsory monetary market: “No more crafty and effective devise for defrauding the southern laborers could be adopted than the one that substitutes orders upon shopkeepers for currency in payment of wages. It has the merit of a show of honesty, while it puts the laborer completely at the mercy of the land-owner and the shopkeeper”.[35]

Self-employment became less common as the artisan tradition slowly disappeared in the later part of the 19th century.[5] In 1869, The New York Times described the system of wage labor as "a system of slavery as absolute if not as degrading as that which lately prevailed at the South".[31] E. P. Thompson notes that for British workers at the end of the 18th and beginning of the 19th centuries, the "gap in status between a 'servant,' a hired wage-laborer subject to the orders and discipline of the master, and an artisan, who might 'come and go' as he pleased, was wide enough for men to shed blood rather than allow themselves to be pushed from one side to the other. And, in the value system of the community, those who resisted degradation were in the right".[17] A "Member of the Builders' Union" in the 1830s argued that the trade unions "will not only strike for less work, and more wages, but will ultimately abolish wages, become their own masters and work for each other; labor and capital will no longer be separate but will be indissolubly joined together in the hands of workmen and work-women".[18] This perspective inspired the Grand National Consolidated Trades Union of 1834, which had the "two-fold purpose of syndicalist unions: the protection of the workers under the existing system and the formation of the nuclei of the future society" when the unions "take over the whole industry of the country".[19] "Research has shown", summarises William Lazonick, "that the 'free-born Englishman' of the eighteenth century (even those who, by force of circumstance, had to submit to agricultural wage labour) tenaciously resisted entry into the capitalist workshop".[20]

The use of the term “wage slave” by labor organizations may originate from the labor protests of the Lowell Mill Girls in 1836.[36] The imagery of wage slavery was widely used by labor organizations during the mid-19th century to object to the lack of workers’ self-management. However, it was gradually replaced by the more neutral term “wage work” towards the end of the 19th century as labor organizations shifted their focus to raising wages.[5]

Karl Marx described capitalist society as infringing on individual autonomy because it is based on a materialistic and commodified concept of the body and its liberty (i.e. as something that is sold, rented, or alienated in a class society). According to Friedrich Engels:[37][38]

The slave is sold once and for all; the proletarian must sell himself daily and hourly. The individual slave, property of one master, is assured an existence, however miserable it may be, because of the master’s interest. The individual proletarian, property as it were of the entire bourgeois class which buys his labor only when someone has need of it, has no secure existence.

Critics of wage work have drawn several similarities between wage work and slavery.

According to American anarcho-syndicalist philosopher Noam Chomsky, the similarities between chattel and wage slavery were noticed by the workers themselves. He noted that the 19th-century Lowell Mill Girls, without any reported knowledge of European Marxism or anarchism, condemned the "degradation and subordination" of the newly emerging industrial system and the "new spirit of the age: gain wealth, forgetting all but self", maintaining that "those who work in the mills should own them".[44][45] They expressed their concerns in a protest song during their 1836 strike:

Oh! isn't it a pity, such a pretty girl as I
Should be sent to the factory to pine away and die?
Oh! I cannot be a slave, I will not be a slave,
For I'm so fond of liberty,
That I cannot be a slave.[46]

Defenses of wage labor and chattel slavery in the literature have linked the subjection of man to man with the subjection of man to nature, arguing that hierarchy and a social system's particular relations of production represent human nature and are no more coercive than the reality of life itself. According to this narrative, any well-intentioned attempt to fundamentally change the status quo is naively utopian and will result in more oppressive conditions.[47] Bosses in both of these long-lasting systems argued that their system created a lot of wealth and prosperity. In some sense, both did create jobs, and their investment entailed risk. For example, slave owners risked losing money by buying chattel slaves who later became ill or died, while bosses risked losing money by hiring workers (wage slaves) to make products that did not sell well on the market. Marginally, both chattel and wage slaves may become bosses, sometimes by working hard. It may be the "rags to riches" story which occasionally occurs in capitalism, or the "slave to master" story that occurred in places like colonial Brazil, where slaves could buy their own freedom and become business owners, self-employed, or slave owners themselves.[48] Social mobility, or the hard work and risk that it may entail, are thus not considered to be a redeeming factor by critics of the concept of wage slavery.[49]

Anthropologist David Graeber has noted that historically the first wage labor contracts we know about (whether in ancient Greece or Rome, or in the Malay or Swahili city states in the Indian Ocean) were in fact contracts for the rental of chattel slaves (usually the owner would receive a share of the money and the slave another, with which to maintain his or her living expenses). According to Graeber, such arrangements were quite common in New World slavery as well, whether in the United States or Brazil. C. L. R. James argued that most of the techniques of human organization employed on factory workers during the Industrial Revolution were first developed on slave plantations.[50]

The usage of the term "wage slavery" shifted to "wage work" at the end of the 19th century as groups like the Knights of Labor and American Federation of Labor shifted to a more reformist, trade union ideology instead of workers' self-management. Much of the decline was caused by the rapid increase in manufacturing after the Industrial Revolution and the subsequent dominance of wage labor as a result. Another factor was immigration and demographic changes that led to ethnic tension between the workers.[5]

As Hallgrimsdottir and Benoit point out:

[I]ncreased centralization of production… declining wages… [an] expanding… labor pool… intensifying competition, and… [t]he loss of competence and independence experienced by skilled labor” meant that “a critique that referred to all [wage] work as slavery and avoided demands for wage concessions in favor of supporting the creation of the producerist republic (by diverting strike funds towards funding… co-operatives, for example) was far less compelling than one that identified the specific conditions of slavery as low wages.[5]

Some anti-capitalist thinkers claim that the elite maintain wage slavery and a divided working class through their influence over the media and entertainment industry,[51][52] educational institutions, unjust laws, nationalist and corporate propaganda, pressures and incentives to internalize values serviceable to the power structure, state violence, fear of unemployment,[53] and a historical legacy of exploitation and profit accumulation/transfer under prior systems, which shaped the development of economic theory. Adam Smith noted that employers often conspire together to keep wages low and have the upper hand in conflicts between workers and employers:[54]

The interest of the dealers… in any particular branch of trade or manufactures, is always in some respects different from, and even opposite to, that of the public… [They] have generally an interest to deceive and even to oppress the public… We rarely hear, it has been said, of the combinations of masters, though frequently of those of workmen. But whoever imagines, upon this account, that masters rarely combine, is as ignorant of the world as of the subject. Masters are always and everywhere in a sort of tacit, but constant and uniform combination, not to raise the wages of labor above their actual rate… It is not, however, difficult to foresee which of the two parties must, upon all ordinary occasions, have the advantage in the dispute, and force the other into a compliance with their terms.

The concept of wage slavery could conceivably be traced back to pre-capitalist figures like Gerrard Winstanley from the radical Christian Diggers movement in England, who wrote in his 1649 pamphlet, The New Law of Righteousness, that there “shall be no buying or selling, no fairs nor markets, but the whole earth shall be a common treasury for every man” and “there shall be none Lord over others, but every one shall be a Lord of himself”.[55]

Aristotle stated that “the citizens must not live a mechanic or a mercantile life (for such a life is ignoble and inimical to virtue), nor yet must those who are to be citizens in the best state be tillers of the soil (for leisure is needed both for the development of virtue and for active participation in politics)”,[56] often paraphrased as “all paid jobs absorb and degrade the mind”.[57] Cicero wrote in 44 BC that “vulgar are the means of livelihood of all hired workmen whom we pay for mere manual labour, not for artistic skill; for in their case the very wage they receive is a pledge of their slavery”.[11] Somewhat similar criticisms have also been expressed by some proponents of liberalism, like Silvio Gesell and Thomas Paine;[58] Henry George, who inspired the economic philosophy known as Georgism;[9] and the Distributist school of thought within the Catholic Church.

To Karl Marx and anarchist thinkers like Mikhail Bakunin and Peter Kropotkin, wage slavery was a class condition in place due to the existence of private property and the state; they attributed this class situation to a combination of primary and secondary factors.

Fascist economic policies were more hostile to independent trade unions than modern economies in Europe or the United States.[60] Fascism was more widely accepted in the 1920s and 1930s, and foreign corporate investment (notably from the United States) in Italy and Germany increased after the fascists took power.[61][62]

Fascism has been perceived by some notable critics, like Buenaventura Durruti, to be a last resort weapon of the privileged to ensure the maintenance of wage slavery:

No government fights fascism to destroy it. When the bourgeoisie sees that power is slipping out of its hands, it brings up fascism to hold onto their privileges.[63]

According to Noam Chomsky, analysis of the psychological implications of wage slavery goes back to the Enlightenment era. In his 1791 book The Limits of State Action, classical liberal thinker Wilhelm von Humboldt explained how “whatever does not spring from a man’s free choice, or is only the result of instruction and guidance, does not enter into his very nature; he does not perform it with truly human energies, but merely with mechanical exactness” and so when the laborer works under external control, “we may admire what he does, but we despise what he is”.[64] Because they explore human authority and obedience, both the Milgram and Stanford experiments have been found useful in the psychological study of wage-based workplace relations.[65]

According to research, modern work provides people with a sense of personal and social identity.

Thus job loss entails the loss of this identity.[66]

Erich Fromm argued that if a person perceives himself as being what he owns, then when that person loses (or even thinks of losing) what he “owns” (e.g. the good looks or sharp mind that allow him to sell his labor for high wages) a fear of loss may create anxiety and authoritarian tendencies because that person’s sense of identity is threatened. In contrast, when a person’s sense of self is based on what he experiences in a state of being (creativity, love, sadness, taste, sight and the like) with a less materialistic regard for what he once had and lost, or may lose, then less authoritarian tendencies prevail. In his view, the state of being flourishes under a worker-managed workplace and economy, whereas self-ownership entails a materialistic notion of self, created to rationalize the lack of worker control that would allow for a state of being.[67]

Investigative journalist Robert Kuttner analyzed the work of public-health scholars Jeffrey Johnson and Ellen Hall on modern conditions of work and concluded that "to be in a life situation where one experiences relentless demands by others, over which one has relatively little control, is to be at risk of poor health, physically as well as mentally". Under wage labor, "a relatively small elite demands and gets empowerment, self-actualization, autonomy, and other work satisfaction that partially compensate for long hours", while "epidemiological data confirm that lower-paid, lower-status workers are more likely to experience the most clinically damaging forms of stress, in part because they have less control over their work".[68]

Wage slavery and the educational system that precedes it “implies power held by the leader. Without power the leader is inept. The possession of power inevitably leads to corruption… in spite of… good intentions… [Leadership means] power of initiative, this sense of responsibility, the self-respect which comes from expressed manhood, is taken from the men, and consolidated in the leader. The sum of their initiative, their responsibility, their self-respect becomes his… [and the] order and system he maintains is based upon the suppression of the men, from being independent thinkers into being ‘the men’… In a word, he is compelled to become an autocrat and a foe to democracy”. For the “leader”, such marginalisation can be beneficial, for a leader “sees no need for any high level of intelligence in the rank and file, except to applaud his actions. Indeed such intelligence from his point of view, by breeding criticism and opposition, is an obstacle and causes confusion”.[69] Wage slavery “implies erosion of the human personality… [because] some men submit to the will of others, arousing in these instincts which predispose them to cruelty and indifference in the face of the suffering of their fellows”.[70]

In 19th-century discussions of labor relations, it was normally assumed that the threat of starvation forced those without property to work for wages. Proponents of the view that modern forms of employment constitute wage slavery, even when workers appear to have a range of available alternatives, have attributed its perpetuation to a variety of social factors that maintain the hegemony of the employer class.[43][71]

In an account of the Lowell Mill Girls, Harriet Hanson Robinson wrote that generously high wages were offered to overcome the degrading nature of the work:

At the time the Lowell cotton mills were started the caste of the factory girl was the lowest among the employments of women…. She was represented as subjected to influences that must destroy her purity and self-respect. In the eyes of her overseer she was but a brute, a slave, to be beaten, pinched and pushed about. It was to overcome this prejudice that such high wages had been offered to women that they might be induced to become mill girls, in spite of the opprobrium that still clung to this degrading occupation.[72]

In his book Disciplined Minds, Jeff Schmidt points out that professionals are trusted to run organizations in the interests of their employers. Because employers cannot be on hand to manage every decision, professionals are trained to "ensure that each and every detail of their work favors the right interests or skewers the disfavored ones" in the absence of overt control:

The resulting professional is an obedient thinker, an intellectual property whom employers can trust to experiment, theorize, innovate and create safely within the confines of an assigned ideology.[73]

Parecon (participatory economics) theory posits a social class “between labor and capital” of higher paid professionals such as “doctors, lawyers, engineers, managers and others” who monopolize empowering labor and constitute a class above wage laborers who do mostly “obedient, rote work”.[74]

The terms “employee” or “worker” have often been replaced by “associate”. This plays up the allegedly voluntary nature of the interaction while playing down the subordinate status of the wage laborer as well as the worker-boss class distinction emphasized by labor movements. Billboards as well as television, Internet and newspaper advertisements consistently show low-wage workers with smiles on their faces, appearing happy.[75]

Job interviews and other data on requirements for lower-skilled workers in developed countries, particularly in the growing service sector, indicate that the more workers depend on low wages and the less skilled or desirable their job is, the more employers screen for workers without better employment options and expect them to feign unremunerative motivation.[76] Such screening and feigning may not only contribute to the positive self-image of the employer as someone granting desirable employment, but also signal wage-dependence by indicating the employee's willingness to feign, which in turn may discourage the dissatisfaction normally associated with job-switching or union activity.[76]

At the same time, employers in the service industry have justified unstable, part-time employment and low wages by playing down the importance of service jobs for the lives of the wage laborers (e.g. just temporary before finding something better, student summer jobs and the like).[77][78]

In the early 20th century, “scientific methods of strikebreaking”[79] were devised employing a variety of tactics that emphasized how strikes undermined “harmony” and “Americanism”.[80]

Some social activists objecting to the market system or price system of wage working historically have considered syndicalism, worker cooperatives, workers’ self-management and workers’ control as possible alternatives to the current wage system.[4][5][6][19]

The American philosopher John Dewey believed that until “industrial feudalism” is replaced by “industrial democracy”, politics will be “the shadow cast on society by big business”.[81] Thomas Ferguson has postulated in his investment theory of party competition that the undemocratic nature of economic institutions under capitalism causes elections to become occasions when blocs of investors coalesce and compete to control the state.[82]

Noam Chomsky has argued that political theory tends to blur the ‘elite’ function of government:

Modern political theory stresses Madison's belief that "in a just and a free government the rights both of property and of persons ought to be effectually guarded." But in this case too it is useful to look at the doctrine more carefully. There are no rights of property, only rights to property, that is, rights of persons with property…

[In] representative democracy, as in, say, the United States or Great Britain […] there is a monopoly of power centralized in the state, and secondly and critically […] the representative democracy is limited to the political sphere and in no serious way encroaches on the economic sphere […] That is, as long as individuals are compelled to rent themselves on the market to those who are willing to hire them, as long as their role in production is simply that of ancillary tools, then there are striking elements of coercion and oppression that make talk of democracy very limited, if even meaningful.[83]

In this regard, Chomsky has used Bakunin’s theories about an “instinct for freedom”,[84] the militant history of labor movements, Kropotkin’s mutual aid evolutionary principle of survival and Marc Hauser’s theories supporting an innate and universal moral faculty,[85] to explain the incompatibility of oppression with certain aspects of human nature.[86][87]

Loyola University philosophy professor John Clark and libertarian socialist philosopher Murray Bookchin have criticized the system of wage labor for encouraging environmental destruction, arguing that a self-managed industrial society would better manage the environment. Like other anarchists,[88] they attribute much of the Industrial Revolution’s pollution to the “hierarchical” and “competitive” economic relations accompanying it.[89]

Some criticize wage slavery on strictly contractual grounds, e.g. David Ellerman and Carole Pateman, arguing that the employment contract is a legal fiction in that it treats human beings juridically as mere tools or inputs by abdicating responsibility and self-determination, which the critics argue are inalienable. As Ellerman points out, “[t]he employee is legally transformed from being a co-responsible partner to being only an input supplier sharing no legal responsibility for either the input liabilities [costs] or the produced outputs [revenue, profits] of the employer’s business”.[90] Such contracts are inherently invalid “since the person remain[s] a de facto fully capacitated adult person with only the contractual role of a non-person” as it is impossible to physically transfer self-determination.[91] As Pateman argues:

The contractarian argument is unassailable all the time it is accepted that abilities can ‘acquire’ an external relation to an individual, and can be treated as if they were property. To treat abilities in this manner is also implicitly to accept that the ‘exchange’ between employer and worker is like any other exchange of material property … The answer to the question of how property in the person can be contracted out is that no such procedure is possible. Labour power, capacities or services, cannot be separated from the person of the worker like pieces of property.[92]

In a modern liberal capitalist society, the employment contract is enforced while the enslavement contract is not; the former being considered valid because of its consensual/non-coercive nature and the latter being considered inherently invalid, consensual or not. The noted economist Paul Samuelson described this discrepancy:

Since slavery was abolished, human earning power is forbidden by law to be capitalized. A man is not even free to sell himself; he must rent himself at a wage.[93]

Some advocates of right-libertarianism, among them philosopher Robert Nozick, address this inconsistency in modern societies arguing that a consistently libertarian society would allow and regard as valid consensual/non-coercive enslavement contracts, rejecting the notion of inalienable rights:

The comparable question about an individual is whether a free system will allow him to sell himself into slavery. I believe that it would.[94]

Others like Murray Rothbard allow for the possibility of debt slavery, asserting that a lifetime labour contract can be broken so long as the slave pays appropriate damages:

[I]f A has agreed to work for life for B in exchange for 10,000 grams of gold, he will have to return the proportionate amount of property if he terminates the arrangement and ceases to work.[95]

In the philosophy of mainstream, neoclassical economics, wage labor is seen as the voluntary sale of one’s own time and efforts, just like a carpenter would sell a chair, or a farmer would sell wheat. It is considered neither an antagonistic nor abusive relationship and carries no particular moral implications.[96]

Austrian economics argues that a person is not “free” unless they can sell their labor because otherwise that person has no self-ownership and will be owned by a “third party” of individuals.[97]

Post-Keynesian economics perceives wage slavery as resulting from inequality of bargaining power between labor and capital, which exists when the economy does not “allow labor to organize and form a strong countervailing force”.[98]

The two main forms of socialist economics perceive wage slavery differently.

Link:

Wage slavery – Wikipedia

8 Signs You’re a Slave Instead of an Employee

Literal slavery is a horrible practice that still persists into the modern age. But I want to talk about another form of human exploitation: employment slavery, which can also ruin a person’s life. Generally, I consider this a self-inflicted slavery because it’s ultimately a person’s choice to work under such conditions, but I also understand that brainwashing can occur, creating the illusion that there’s no way out.

Slavery (in general) exists because of the inclination among people to obtain the benefits of human resources, while providing little (or nothing) in return. Human work is the most intelligent, efficient way to create a system of wealth and power. For the morally bankrupt, such benefits are sought for free.

Employment, in the best case scenario, is a business deal of mutual benefit. But in other instances, the company is expending such minimal resources that they are taking advantage of you. In the worst case scenario, through a combination of slave-driving principles and psychological techniques to break you down, such a job can morph into something very similar to actual slavery.

If you don’t know any better, it’s easy to fall into slavery conditions. Here are signs that your sense of freedom in life is totally gone:

Because of the way employers conveniently ignore yearly inflation, today’s minimum wage is not enough to maintain any semblance of a normal lifestyle. Minimum wage makes some sense in small businesses just starting out. But in America, $8.25 an hour, or less, from a large, billion-dollar corporation is inexcusable. In this case, your annual wages are a vanishing fraction of the company’s profits. In other words, your hard work is a very bad deal for you, and a killer opportunity for the suits upstairs.

“You’re lucky you even have a job!” is a psychological taunt that bad employers use to try to keep their wage-slaves from believing they can do any better. Such statements are made to maintain a sense of control. Understand, voluntary slavery is not a rare phenomenon. It happens when a person is brainwashed into the belief that they have nowhere else to go.

If your manager uses psychological put-downs like this to denigrate your professional abilities, understand that it’s being done for a reason.

The idea of getting a raise and a promotion may be dangled in front of you, but you’ve seen no evidence to suggest that it ever really happens. In fact, only a very small percentage of your co-workers ever reach this goal, and they tend to be the cronies of upper management. If this is the case, then what exactly is your reason for working at this company?

Inconvenient hours are inevitable in jobs, but some companies will abuse the system. This ranges from illegally denying overtime pay to scheduling month-long bouts of “clopening” (working until closing hours late at night, then opening hours the next morning) that leave the employee physically and emotionally drained.

An employee in this system may feel intense pressure from the bosses to accept abusive hours, under the threat of being denied promotions or even being fired for seeking better treatment.

America’s two-week annual vacation allowance is one of the weakest in the Western world, and American workers tend not to even use it. This is because many employers will hint that vacationers are likely to end up on the shit-list of people who never get promoted. They may even hint that unruly vacation-seekers will be the first to get laid off or fired at the earliest opportunity.

A system of slavery does not allow free-time for individuals to maintain their own lives outside of their work. This could cause dissent and break the system of total control. An unspoken methodology among abusive managers is to destroy the lifestyles of employees so, instead of tending to family or hobbies, they work at full capacity.

Feeling motivated by high standards, and being scared to fall below them, is one thing; being genuinely scared of the people you’re working for is another.

Slave-masters maintain systems of fear to break down their subjects and perhaps, in time, build them back up. For the best example of this, please see Theon Greyjoy in Game of Thrones.

Psychological and verbal abuse is usually what occurs. An abusive employer understands exactly what strings to pull to generate feelings of shame or guilt, and they’ll use the professional context to destroy a subject’s sense of self-worth, perhaps by implying worthlessness at the vocation they’ve devoted their life to.

In other instances, the abuse is very overt and could include yelling, tantrums and even physical assaults. But the outcome is the same: the employee living in a constant state of paranoia, fear, and subservience.

Read carefully the ten warning signs that you’re in a cult published by the Cult Education Institute. Some that could be very applicable to a workplace include: absolute authoritarianism without meaningful accountability, no tolerance for questions or critical inquiry, the leader (boss) is always right, and former followers (employees) are vilified as evil for leaving.

If the job feels less about, you know, getting the job done, and more about the influence, charisma and infallibility of the boss, then get the heck out of there. This means the person in charge is getting a side benefit from running or managing the workplace: power and dominance.

The number one sign you’re a slave and not an employee is that you’re working an unpaid internship, and it’s not for college credit. You may be promised great benefits and valuable connections, at what amounts to harsh workplace conditions, long hours, and zero pay.

A huge mistake I see young professionals make, and it really irks me, is naivety about people’s intentions. I went to film school for my bachelor’s, and many students I knew lusted after top internships at film studios or with big names in the entertainment industry. Such internships are often offered regardless of college credit.

When a person is blinded by their desire to make it and get in with big names, they are likely to make bad decisions, and unscrupulous employers will prey on this desire.

Internships are great IF they are part of a student’s actual curriculum. That means hands-on work and real experience versus useless classrooms. But the questionable non-credit internships I warn about also exist to lure young people into systems of slavery. It’s gotten so bad that these types of arrangements are quickly becoming illegal in California.

The reality of such internships is that the slave-drivers only desire one thing: unpaid work. There is NO promise that you will move up or land any type of a paid job. When your internship finishes, they will discard you and find the next victim.

The biggest reason to avoid internships is the mentality behind the deal. Imagine a law firm or a film studio that is a multi-billion dollar operation. How hard would it be to throw their new recruit at LEAST minimum wage? The fact such a company would, despite their huge profits, still desire unpaid labor is indicative of a slave-driving mentality that funnels wealth to the top at the expense of the people on the bottom making it possible.

As a professional, it would be best for you to avoid doing any type of business with any individual or company that possesses a philosophy like this.

Employment-slavery situations are common. Very common. But ultimately, the biggest factor in determining how bad it is, is a single question: are you happy?

If you are happy at $8.25 an hour with no benefits, because you like the people you work with, you like the nature of the work, and you feel it’s moving you somewhere you want to be, then it’s not slavery. You’re making an investment that’ll either pay off or it won’t, but at least you enjoy what you’re doing.

However, if you are miserable in your current conditions, it’s quite possible that the uneasy feeling in your gut is your intuition telling you that someone is taking advantage of you.

Employment is supposed to be a business contract, and an exchange of services. Never a system of control. Sometimes, just the willingness to walk away is your strongest defense against a terrible job situation.

For more about avoiding systems of employment-slavery, please see my short books: Freedom: How to Make Money From Your Dreams and Ambitions, and How to Quit Your Job: Escape Soul Crushing Work, Create the Life You Want, and Live Happy.

(For more books, also check out the Developed Life bookstore, http://www.developedlife.com/bookstore).


Continued here:

8 Signs You’re a Slave Instead of an Employee

Progress Synonyms, Progress Antonyms | Thesaurus.com

If one were not a scientist one might be tempted to say there is no progress.

Prehistoric man, as I just told you, was on a fair way to progress.

From this point the progress will be best narrated by extracts from my Diary.

We talked of progress; but progress, like the philosopher’s stone, could not be easily attained.

From this strength we have contributed to the recovery and progress of the world.

Progress may be slow, measured in inches and feet, not miles, but we will progress.

In no nation are the institutions of progress more advanced.

It was characterized as “a policy of which peace, progress and retrenchment were the watchwords.”

We do not dread, rather do we welcome, their progress in education and industry.

Start on this journey of progress and justice, and America will walk at your side.

Read more:

Progress Synonyms, Progress Antonyms | Thesaurus.com

Nanomedicinelab

2D Materials, 2018, 5: 035020

Journal of Controlled Release, 2018, 276: 157-167

ACS Nano, 2018, 12(2): 1373-1389

2D Materials, 2018, 5: 035014

Chem, 2018, 4(2): 334-358

Nanoscale, 2018, 10:1180-1188

Advanced Healthcare Materials, 2017, 7 (4): 1700815

Science Robotics, 2017, 2, 12, eaaq1155

Nanoscale, 2018, 10, 1256-1264

In Vivo Reprogramming in Regenerative Medicine (Springer Publishing) 21st Nov. 2017, pp: 65-82

Read the original:

Nanomedicinelab

nanomedicine: nanotechnology for cancer treatment – YouTube

Solving radiotherapy’s biggest limitation. Medicine is now using physics every day to treat cancer patients. Nanotechnologies or Nanomedicine can help clinicians deliver safer and more efficient treatments by shifting the intended effect from the macroscopic to the subcellular level. http://www.nanobiotix.com www.laurentlevy.com

Read the rest here:

nanomedicine: nanotechnology for cancer treatment – YouTube

FreeSociety

What’s this project about?

For many decades, mostly libertarians have been trying to create a new country by various methods, ranging from unsuccessfully claiming an existing piece of land (Minerva, Liberland) to creating floating structures on the water (Seasteading). Unfortunately, none of these attempts have succeeded so far: they encountered substantial resistance from existing governments or were technically or financially too difficult. Our conclusion is that, to really gain sovereignty, the most efficient way is to negotiate with an existing government. There are many examples of governments granting another nation sovereignty over a part of their territory, the most prominent example being Guantanamo Bay (Cuba), which the USA leased as a coaling and naval station in 1903 for $2,000, payable in gold, per year. Other, more benevolent examples are the current discussions between the Maldives and other nations about selling the Maldives a piece of land, in an attempt to have a solution for their people once their islands permanently disappear because of rising ocean levels.

We have started up preliminary talks with governments and interest is much higher than initially expected. For confidentiality reasons we are unable to disclose any names at this point, but we will do so as soon as we are allowed.

We plan to establish a rule of law based on libertarian principles and free markets. We don’t see the need to recreate traditional government structures. The rule of law / constitution can be included in the final agreement of the land sale, and will be an extension of the existing contract that will be put in place with the government that granted us the sovereignty. Enforcement will happen through private arbitration, competing court systems and private law enforcement. It is important to establish a proper rule of law, as our project will set an example for the industry and create an important precedent with governments and the world. We want to make sure the constitution is solid but avoid the inefficiencies of existing government structures.

See the original post here:

FreeSociety

MPC60 Software – Roger Linn Design

Do you have an Akai MPC60 or MPC60-II with the original version 2.12 software? Our version 3.10 software update adds the software improvements of the MPC3000 to the MPC60 or MPC60-II. It comes on 4 chips that you can install yourself and adds lots of useful features.

Sampling No Longer Limited to 5 Seconds

The 5 second limit for new samples is gone, allowing you to sample individual sounds up to the limits of memory (13 or 26 seconds, depending on whether or not your MPC60 contains memory expansion). And sequence memory is no longer erased before sampling.

Stereo Sampling

Version 3.10 won’t put a stereo sampling input on the back of your MPC60, but it does provide a method of creating stereo samples. Simply sample the left and right sides of a stereo sound separately as mono sounds, then a new screen in the software automatically re-syncs and combines them to form the stereo sound.
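
To illustrate the combining step, here is a minimal sketch (not Roger Linn Design's actual code) that interleaves two already-aligned mono buffers into one stereo buffer using NumPy; the array names and lengths are invented for the example, and the sampler's own re-sync screen handles alignment in practice.

import numpy as np

def combine_mono_to_stereo(left, right):
    # Illustrative only: pair up two mono takes into a (samples, 2) stereo buffer.
    # Assumes both takes are already trimmed to the same start point.
    n = min(len(left), len(right))                     # guard against a small length mismatch
    return np.stack([left[:n], right[:n]], axis=1)     # column 0 = left, column 1 = right

# Usage with two hypothetical one-second mono takes at 40 kHz:
left = np.zeros(40000, dtype=np.int16)
right = np.zeros(40000, dtype=np.int16)
print(combine_mono_to_stereo(left, right).shape)       # (40000, 2)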

MIDI File Save and Load

Load standard PC-format MIDI file disks or save sequences as MIDI files. Move sequences between your MPC and PC or Mac sequencers.

Note: this requires that you download our Midi File Save utility, save it to an MPC60 floppy then boot your MPC60 from it.

Reads All MPC3000 Files Including Stereo Sounds

Reads all MPC3000 files, including mono or stereo sounds (saved to MPC60 floppies) and directly reads MPC3000 hard disks (MPC-SCSI required). Reads all MPC60 files.

Sound Compression Doubles Sound Memory

This feature resamples existing sounds in memory from the normal 40 kHz to 20 kHz, so they fit into half the memory space. This works surprisingly well for most sounds (not so well for cymbals). In an expanded MPC60 (26 seconds) containing all compressed sounds, that's equivalent to 52 seconds of sounds.
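
As a rough sketch of why halving the sample rate halves the memory, the snippet below decimates a hypothetical 40 kHz buffer by keeping every other sample; the MPC60's own resampler is certainly more careful than this (for example, filtering before decimation to limit aliasing).

import numpy as np

def halve_sample_rate(samples_40k):
    # Crude decimation from 40 kHz to 20 kHz: keep every other sample.
    # A proper resampler would low-pass filter first, but this shows the
    # memory arithmetic: half the samples means half the bytes.
    return samples_40k[::2]

sound = np.zeros(40000, dtype=np.int16)       # one second at 40 kHz
compressed = halve_sample_rate(sound)         # 20,000 samples in half the space
print(len(sound), len(compressed))            # 40000 20000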

Voice Restart for “Sound Stuttering”

Sounds may be set to restart when a single pad is played repeatedly, for “sound stuttering” effects. Also, sounds may be set to stop when finger is removed from pad, and any sound may be programmed to stop any other sound (choked cymbal stops ringing cymbal).

8 Drum Sets in Memory

Hold up to 8 drum sets in memory at once, each with 64 pad assignments from a common bank of up to 128 sounds. When saved to disk, sounds in drum sets are now saved as individual sound files, eliminating redundant sound data on disk when saving sets that share sounds.

4 Pad Banks for 64 Pad Assignments

Doubles the number of sounds immediately playable.

Hihat Slider Doubles as Realtime Tuning

The Hihat Decay Slider may now be assigned to any pad and may alternatively affect tuning, decay or attack in real time, with all movements recorded into the sequence.

Cut and Paste Sample Editing

Any portion of a sound may be removed and inserted at any point within another sound with single sample accuracy.

Hard Disk Save and Load

If you own the Marion Systems MPC-SCSI Hard Disk Interface for the MPC60, hard disk save and load operations are now included and work with the Iomega Zip 100 MB or 250 MB drives.

Step from Note to Note in Step Edit

In Step Edit, the REWIND [] keys may now be used to search to the previous or next event within a track, regardless of location. Also, you may now cut and paste events.

Streamlined MPC3000 Displays

Screen displays are improved and more intuitive, nearly identical to the MPC3000. For example, 4 letter pad names are replaced in screens by the full sound name.

New Sequence Edit Features

Most sequence editing functions now permit selection of specific drums to be edited. The new Shift Timing feature shifts track timing independent of timing correction. And the new Edit Note Number Assignment feature permits, for example, all snare notes in a track to be changed to rimshots or any other sound.

New Sound and Sequence Files in 3.10 Format

We’ve created a few sound and sequence files in the new version 3.10 format that you can download here.

Also Works on ASQ10 Sequencer

Version 3.10 can also be installed in the Akai ASQ10 Sequencer, adding the above features related to sequencing. (Details)

And More

Three-level sound stacking or velocity switch per pad. Simplified interfacing with external MIDI gear. MIDI Local Mode. Automatic “best sound start” removes dead space at start of new drum samples. 16 LEVELS provides 16 attack or decay levels.

Note: Due to low demand for this product, we are no longer printing user manuals so a user manual will not be included. However, you can download the user manual from the link at left.

Continue reading here:

MPC60 Software – Roger Linn Design

Hungama.com – Hindi Bollywood Songs

With a unique loyalty program, Hungama rewards you for predefined actions on our platform. Accumulated coins can be redeemed for Hungama subscriptions. You can also log in to the Hungama apps (Music & Movies) with your Hungama web credentials and redeem coins to download MP3/MP4 tracks.

You need to be a registered user to enjoy the benefits of Rewards Program.

Read the original post:

Hungama.com – Hindi Bollywood Songs

Artificial intelligence – Wikipedia

Intelligence demonstrated by machines

Artificial intelligence (AI), sometimes called machine intelligence, is intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans and other animals. In computer science AI research is defined as the study of “intelligent agents”: any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals.[1] Colloquially, the term “artificial intelligence” is applied when a machine mimics “cognitive” functions that humans associate with other human minds, such as “learning” and “problem solving”.

The scope of AI is disputed: as machines become increasingly capable, tasks considered as requiring “intelligence” are often removed from the definition, a phenomenon known as the AI effect, leading to the quip, “AI is whatever hasn’t been done yet.”[3] For instance, optical character recognition is frequently excluded from “artificial intelligence”, having become a routine technology.[4] Capabilities generally classified as AI as of 2017 include successfully understanding human speech, competing at the highest level in strategic game systems (such as chess and Go),[6] autonomous cars, intelligent routing in content delivery networks and military simulations.

Artificial intelligence was founded as an academic discipline in 1956, and in the years since has experienced several waves of optimism,[7][8] followed by disappointment and the loss of funding (known as an “AI winter”),[9][10] followed by new approaches, success and renewed funding.[8][11] For most of its history, AI research has been divided into subfields that often fail to communicate with each other.[12] These sub-fields are based on technical considerations, such as particular goals (e.g. “robotics” or “machine learning”),[13] the use of particular tools (“logic” or artificial neural networks), or deep philosophical differences.[14][15][16] Subfields have also been based on social factors (particular institutions or the work of particular researchers).[12]

The traditional problems (or goals) of AI research include reasoning, knowledge representation, planning, learning, natural language processing, perception and the ability to move and manipulate objects.[13] General intelligence is among the field’s long-term goals.[17] Approaches include statistical methods, computational intelligence, and traditional symbolic AI. Many tools are used in AI, including versions of search and mathematical optimization, artificial neural networks, and methods based on statistics, probability and economics. The AI field draws upon computer science, mathematics, psychology, linguistics, philosophy and many others.

The field was founded on the claim that human intelligence “can be so precisely described that a machine can be made to simulate it”.[18] This raises philosophical arguments about the nature of the mind and the ethics of creating artificial beings endowed with human-like intelligence, issues that have been explored by myth, fiction and philosophy since antiquity.[19] Some people also consider AI to be a danger to humanity if it progresses unabated.[20] Others believe that AI, unlike previous technological revolutions, will create a risk of mass unemployment.[21]

In the twenty-first century, AI techniques have experienced a resurgence following concurrent advances in computer power, large amounts of data, and theoretical understanding; and AI techniques have become an essential part of the technology industry, helping to solve many challenging problems in computer science.[22][11]

Thought-capable artificial beings appeared as storytelling devices in antiquity,[23] and have been common in fiction, as in Mary Shelley’s Frankenstein or Karel Čapek’s R.U.R. (Rossum’s Universal Robots).[24] These characters and their fates raised many of the same issues now discussed in the ethics of artificial intelligence.[19]

The study of mechanical or “formal” reasoning began with philosophers and mathematicians in antiquity. The study of mathematical logic led directly to Alan Turing’s theory of computation, which suggested that a machine, by shuffling symbols as simple as “0” and “1”, could simulate any conceivable act of mathematical deduction. This insight, that digital computers can simulate any process of formal reasoning, is known as the Church-Turing thesis.[25] Along with concurrent discoveries in neurobiology, information theory and cybernetics, this led researchers to consider the possibility of building an electronic brain. Turing proposed that “if a human could not distinguish between responses from a machine and a human, the machine could be considered intelligent”.[26] The first work that is now generally recognized as AI was McCulloch and Pitts’ 1943 formal design for Turing-complete “artificial neurons”.

The field of AI research was born at a workshop at Dartmouth College in 1956.[28] Attendees Allen Newell (CMU), Herbert Simon (CMU), John McCarthy (MIT), Marvin Minsky (MIT) and Arthur Samuel (IBM) became the founders and leaders of AI research.[29] They and their students produced programs that the press described as “astonishing”: computers were learning checkers strategies (c. 1954)[31] (and by 1959 were reportedly playing better than the average human),[32] solving word problems in algebra, proving logical theorems (Logic Theorist, first run c. 1956) and speaking English.[33] By the middle of the 1960s, research in the U.S. was heavily funded by the Department of Defense[34] and laboratories had been established around the world.[35] AI’s founders were optimistic about the future: Herbert Simon predicted, “machines will be capable, within twenty years, of doing any work a man can do”. Marvin Minsky agreed, writing, “within a generation… the problem of creating ‘artificial intelligence’ will substantially be solved”.[7]

They failed to recognize the difficulty of some of the remaining tasks. Progress slowed and in 1974, in response to the criticism of Sir James Lighthill and ongoing pressure from the US Congress to fund more productive projects, both the U.S. and British governments cut off exploratory research in AI. The next few years would later be called an “AI winter”,[9] a period when obtaining funding for AI projects was difficult.

In the early 1980s, AI research was revived by the commercial success of expert systems,[37] a form of AI program that simulated the knowledge and analytical skills of human experts. By 1985 the market for AI had reached over a billion dollars. At the same time, Japan’s fifth generation computer project inspired the U.S and British governments to restore funding for academic research.[8] However, beginning with the collapse of the Lisp Machine market in 1987, AI once again fell into disrepute, and a second, longer-lasting hiatus began.[10]

In the late 1990s and early 21st century, AI began to be used for logistics, data mining, medical diagnosis and other areas.[22] The success was due to increasing computational power (see Moore’s law), greater emphasis on solving specific problems, new ties between AI and other fields (such as statistics, economics and mathematics), and a commitment by researchers to mathematical methods and scientific standards.[38] Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov on 11 May 1997.

In 2011, in a Jeopardy! quiz show exhibition match, IBM’s question answering system Watson defeated the two greatest Jeopardy! champions, Brad Rutter and Ken Jennings, by a significant margin. Faster computers, algorithmic improvements, and access to large amounts of data enabled advances in machine learning and perception; data-hungry deep learning methods started to dominate accuracy benchmarks around 2012.[41] The Kinect, which provides a 3D body-motion interface for the Xbox 360 and the Xbox One, uses algorithms that emerged from lengthy AI research,[42] as do intelligent personal assistants in smartphones.[43] In March 2016, AlphaGo won 4 out of 5 games of Go in a match with Go champion Lee Sedol, becoming the first computer Go-playing system to beat a professional Go player without handicaps.[6][44] In the 2017 Future of Go Summit, AlphaGo won a three-game match with Ke Jie,[45] who at the time had held the world No. 1 ranking for two years.[46][47] This marked the completion of a significant milestone in the development of artificial intelligence, as Go is an extremely complex game, more so than chess.

According to Bloomberg’s Jack Clark, 2015 was a landmark year for artificial intelligence, with the number of software projects that use AI within Google increasing from “sporadic usage” in 2012 to more than 2,700 projects. Clark also presents factual data indicating that error rates in image processing tasks have fallen significantly since 2011.[48] He attributes this to an increase in affordable neural networks, due to a rise in cloud computing infrastructure and to an increase in research tools and datasets.[11] Other cited examples include Microsoft’s development of a Skype system that can automatically translate from one language to another and Facebook’s system that can describe images to blind people.[48] In a 2017 survey, one in five companies reported they had “incorporated AI in some offerings or processes”.[49][50]

A typical AI perceives its environment and takes actions that maximize its chance of successfully achieving its goals.[1] An AI’s intended goal function can be simple (“1 if the AI wins a game of Go, 0 otherwise”) or complex (“Do actions mathematically similar to the actions that got you rewards in the past”). Goals can be explicitly defined, or can be induced. If the AI is programmed for “reinforcement learning”, goals can be implicitly induced by rewarding some types of behavior and punishing others.[a] Alternatively, an evolutionary system can induce goals by using a “fitness function” to mutate and preferentially replicate high-scoring AI systems; this is similar to how animals evolved to innately desire certain goals such as finding food, or how dogs can be bred via artificial selection to possess desired traits. Some AI systems, such as nearest-neighbor, instead reason by analogy; these systems are not generally given goals, except to the degree that goals are somehow implicit in their training data. Such systems can still be benchmarked if the non-goal system is framed as a system whose “goal” is to successfully accomplish its narrow classification task.[53]
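
As a toy illustration of goal induction by a fitness function (not any particular AI system), the sketch below repeatedly mutates candidate values and keeps the highest-scoring ones; the target value, mutation scale and population sizes are all invented for the example.

import random

random.seed(0)                                  # reproducible toy run

def fitness(x):
    # Toy goal function: the closer a candidate is to 42, the higher its score.
    return -abs(x - 42)

population = [random.uniform(0, 100) for _ in range(20)]
for _ in range(50):
    population.sort(key=fitness, reverse=True)  # keep the highest-scoring candidates...
    survivors = population[:5]
    population = [x + random.gauss(0, 1)        # ...and mutate them into the next generation
                  for x in survivors for _ in range(4)]

print(round(max(population, key=fitness)))      # settles near 42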

AI often revolves around the use of algorithms. An algorithm is a set of unambiguous instructions that a mechanical computer can execute.[b] A complex algorithm is often built on top of other, simpler, algorithms. A simple example of an algorithm is the following recipe for optimal play at tic-tac-toe: if someone has two in a row, take the remaining square; otherwise, if a move creates a fork (two threats at once), play it; otherwise take the center square if it is free; otherwise, if your opponent has played in a corner, take the opposite corner; otherwise take an empty corner; otherwise take any empty square.
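
A minimal sketch of such a rule-based player follows; it implements only the win/block/center/corner rules (no fork handling), so it is illustrative rather than optimal, and the list-of-strings board representation is simply an assumption of the example.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def winning_move(board, player):
    # Return a square that would complete three-in-a-row for `player`, if any.
    for a, b, c in WIN_LINES:
        line = [board[a], board[b], board[c]]
        if line.count(player) == 2 and line.count(" ") == 1:
            return (a, b, c)[line.index(" ")]
    return None

def choose_move(board, me="X", opponent="O"):
    # Fixed priority of rules: win, block, take the center, take a corner, take anything.
    move = winning_move(board, me)
    if move is None:
        move = winning_move(board, opponent)                              # block the threat
    if move is None and board[4] == " ":
        move = 4                                                          # center square
    if move is None:
        move = next((i for i in (0, 2, 6, 8) if board[i] == " "), None)   # any corner
    if move is None:
        move = next(i for i in range(9) if board[i] == " ")               # any square
    return move

board = ["X", "X", " ",
         "O", "O", " ",
         " ", " ", " "]
print(choose_move(board))   # 2: completes X's top row before blocking O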

Many AI algorithms are capable of learning from data; they can enhance themselves by learning new heuristics (strategies, or “rules of thumb”, that have worked well in the past), or can themselves write other algorithms. Some of the “learners” described below, including Bayesian networks, decision trees, and nearest-neighbor, could theoretically, if given infinite data, time, and memory, learn to approximate any function, including whatever combination of mathematical functions would best describe the entire world. These learners could therefore, in theory, derive all possible knowledge, by considering every possible hypothesis and matching it against the data. In practice, it is almost never possible to consider every possibility, because of the phenomenon of “combinatorial explosion”, where the amount of time needed to solve a problem grows exponentially. Much of AI research involves figuring out how to identify and avoid considering broad swaths of possibilities that are unlikely to be fruitful.[55] For example, when viewing a map and looking for the shortest driving route from Denver to New York in the East, one can in most cases skip looking at any path through San Francisco or other areas far to the West; thus, an AI wielding a pathfinding algorithm like A* can avoid the combinatorial explosion that would ensue if every possible route had to be ponderously considered in turn.[57]
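
To make the pruning idea concrete, here is a compact sketch of A* over a tiny hand-made graph; the city distances and heuristic values are invented for illustration (they are not real road mileages), and a real router would of course use a much larger graph.

import heapq

def a_star(graph, heuristic, start, goal):
    # A*: always expand the node with the lowest (cost so far + heuristic estimate),
    # so paths heading away from the goal are never fully explored.
    frontier = [(heuristic[start], 0, start, [start])]
    best_cost = {start: 0}
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        for neighbour, step in graph[node].items():
            new_cost = cost + step
            if new_cost < best_cost.get(neighbour, float("inf")):
                best_cost[neighbour] = new_cost
                heapq.heappush(frontier, (new_cost + heuristic[neighbour],
                                          new_cost, neighbour, path + [neighbour]))
    return None

graph = {
    "Denver":        {"Chicago": 1000, "San Francisco": 1250},
    "Chicago":       {"Denver": 1000, "New York": 800},
    "San Francisco": {"Denver": 1250, "New York": 2900},
    "New York":      {},
}
heuristic = {"Denver": 1600, "Chicago": 750, "San Francisco": 2600, "New York": 0}
print(a_star(graph, heuristic, "Denver", "New York"))
# (1800, ['Denver', 'Chicago', 'New York']) -- the westward detour is never expanded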

The earliest (and easiest to understand) approach to AI was symbolism (such as formal logic): “If an otherwise healthy adult has a fever, then they may have influenza”. A second, more general, approach is Bayesian inference: “If the current patient has a fever, adjust the probability they have influenza in such-and-such way”. The third major approach, extremely popular in routine business AI applications, is analogizers such as SVM and nearest-neighbor: “After examining the records of known past patients whose temperature, symptoms, age, and other factors mostly match the current patient, X% of those patients turned out to have influenza”. A fourth approach is harder to intuitively understand, but is inspired by how the brain’s machinery works: the artificial neural network approach uses artificial “neurons” that can learn by comparing their output to the desired output and altering the strengths of the connections between internal neurons to “reinforce” connections that seemed to be useful. These four main approaches can overlap with each other and with evolutionary systems; for example, neural nets can learn to make inferences, to generalize, and to make analogies. Some systems implicitly or explicitly use multiple of these approaches, alongside many other AI and non-AI algorithms;[58] the best approach is often different depending on the problem.[60]
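
A minimal sketch of the "analogizer" approach, assuming a handful of made-up patient records: a k-nearest-neighbour vote over the most similar past cases.

from collections import Counter

def knn_predict(records, new_patient, k=3):
    # Nearest-neighbour "analogizer": label the new case by majority vote
    # among the k most similar past cases (similarity = squared distance on the features).
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    neighbours = sorted(records, key=lambda r: distance(r[0], new_patient))[:k]
    return Counter(label for _, label in neighbours).most_common(1)[0][0]

# Hypothetical past patients: (temperature in °C, days of cough) -> diagnosis
records = [((39.1, 4), "flu"), ((38.8, 3), "flu"), ((36.9, 0), "healthy"),
           ((37.0, 1), "healthy"), ((39.4, 5), "flu"), ((36.7, 0), "healthy")]
print(knn_predict(records, (38.9, 3)))   # 'flu'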

Learning algorithms work on the basis that strategies, algorithms, and inferences that worked well in the past are likely to continue working well in the future. These inferences can be obvious, such as “since the sun rose every morning for the last 10,000 days, it will probably rise tomorrow morning as well”. They can be nuanced, such as “X% of families have geographically separate species with color variants, so there is a Y% chance that undiscovered black swans exist”. Learners also work on the basis of “Occam’s razor”: The simplest theory that explains the data is the likeliest. Therefore, to be successful, a learner must be designed such that it prefers simpler theories to complex theories, except in cases where the complex theory is proven substantially better. Settling on a bad, overly complex theory gerrymandered to fit all the past training data is known as overfitting. Many systems attempt to reduce overfitting by rewarding a theory in accordance with how well it fits the data, but penalizing the theory in accordance with how complex the theory is. Besides classic overfitting, learners can also disappoint by “learning the wrong lesson”. A toy example is that an image classifier trained only on pictures of brown horses and black cats might conclude that all brown patches are likely to be horses. A real-world example is that, unlike humans, current image classifiers don’t determine the spatial relationship between components of the picture; instead, they learn abstract patterns of pixels that humans are oblivious to, but that linearly correlate with images of certain types of real objects. Faintly superimposing such a pattern on a legitimate image results in an “adversarial” image that the system misclassifies.[c][63][64][65]
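
The fit-versus-complexity trade-off can be sketched in a few lines: fit polynomials of increasing degree to noisy but truly linear data and charge each extra coefficient a small price, so the overfitted high-degree model loses. The data, the penalty weight and the degree range are arbitrary choices made for the example.

import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 20)
y = 2 * x + 1 + rng.normal(0, 0.1, size=x.size)    # truly linear data plus noise

def penalized_score(degree, penalty=0.05):
    # Reward fit to the data, but charge a price per extra coefficient (a crude Occam's razor).
    coeffs = np.polyfit(x, y, degree)
    mse = np.mean((np.polyval(coeffs, x) - y) ** 2)
    return mse + penalty * degree

best = min(range(1, 10), key=penalized_score)
print(best)   # a low degree wins; a degree-9 fit tracks the noise but pays the complexity penalty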

Compared with humans, existing AI lacks several features of human “commonsense reasoning”; most notably, humans have powerful mechanisms for reasoning about “naïve physics” such as space, time, and physical interactions. This enables even young children to easily make inferences like “If I roll this pen off a table, it will fall on the floor”. Humans also have a powerful mechanism of “folk psychology” that helps them to interpret natural-language sentences such as “The city councilmen refused the demonstrators a permit because they advocated violence”. (A generic AI has difficulty inferring whether the councilmen or the demonstrators are the ones alleged to be advocating violence.)[68][69][70] This lack of “common knowledge” means that AI often makes different mistakes than humans make, in ways that can seem incomprehensible. For example, existing self-driving cars cannot reason about the location nor the intentions of pedestrians in the exact way that humans do, and instead must use non-human modes of reasoning to avoid accidents.[71][72][73]

The overall research goal of artificial intelligence is to create technology that allows computers and machines to function in an intelligent manner. The general problem of simulating (or creating) intelligence has been broken down into sub-problems. These consist of particular traits or capabilities that researchers expect an intelligent system to display. The traits described below have received the most attention.[13]

Early researchers developed algorithms that imitated step-by-step reasoning that humans use when they solve puzzles or make logical deductions.[74] By the late 1980s and 1990s, AI research had developed methods for dealing with uncertain or incomplete information, employing concepts from probability and economics.[75]

These algorithms proved to be insufficient for solving large reasoning problems, because they experienced a “combinatorial explosion”: they became exponentially slower as the problems grew larger.[55] In fact, even humans rarely use the step-by-step deduction that early AI research was able to model. They solve most of their problems using fast, intuitive judgements.[76]

Knowledge representation[77] and knowledge engineering[78] are central to classical AI research. Some “expert systems” attempt to gather together explicit knowledge possessed by experts in some narrow domain. In addition, some projects attempt to gather the “commonsense knowledge” known to the average person into a database containing extensive knowledge about the world. Among the things a comprehensive commonsense knowledge base would contain are: objects, properties, categories and relations between objects;[79] situations, events, states and time;[80] causes and effects;[81] knowledge about knowledge (what we know about what other people know);[82] and many other, less well researched domains. A representation of “what exists” is an ontology: the set of objects, relations, concepts, and properties formally described so that software agents can interpret them. The semantics of these are captured as description logic concepts, roles, and individuals, and typically implemented as classes, properties, and individuals in the Web Ontology Language.[83] The most general ontologies are called upper ontologies, which attempt to provide a foundation for all other knowledge[84] by acting as mediators between domain ontologies that cover specific knowledge about a particular knowledge domain (field of interest or area of concern). Such formal knowledge representations can be used in content-based indexing and retrieval,[85] scene interpretation,[86] clinical decision support,[87] knowledge discovery (mining “interesting” and actionable inferences from large databases),[88] and other areas.[89]

Among the most difficult problems in knowledge representation are default reasoning (and the related qualification problem), the sheer breadth of commonsense knowledge, and the subsymbolic form of much commonsense knowledge.

Intelligent agents must be able to set goals and achieve them.[96] They need a way to visualize the future (a representation of the state of the world that lets them make predictions about how their actions will change it) and be able to make choices that maximize the utility (or “value”) of the available choices.[97]

In classical planning problems, the agent can assume that it is the only system acting in the world, allowing the agent to be certain of the consequences of its actions.[98] However, if the agent is not the only actor, then it requires that the agent can reason under uncertainty. This calls for an agent that can not only assess its environment and make predictions, but also evaluate its predictions and adapt based on its assessment.[99]

Multi-agent planning uses the cooperation and competition of many agents to achieve a given goal. Emergent behavior such as this is used by evolutionary algorithms and swarm intelligence.[100]

Machine learning, a fundamental concept of AI research since the field’s inception,[101] is the study of computer algorithms that improve automatically through experience.[102][103]

Unsupervised learning is the ability to find patterns in a stream of input. Supervised learning includes both classification and numerical regression. Classification is used to determine what category something belongs in, after seeing a number of examples of things from several categories. Regression is the attempt to produce a function that describes the relationship between inputs and outputs and predicts how the outputs should change as the inputs change.[103] Both classifiers and regression learners can be viewed as “function approximators” trying to learn an unknown (possibly implicit) function; for example, a spam classifier can be viewed as learning a function that maps from the text of an email to one of two categories, “spam” or “not spam”. Computational learning theory can assess learners by computational complexity, by sample complexity (how much data is required), or by other notions of optimization.[104] In reinforcement learning[105] the agent is rewarded for good responses and punished for bad ones. The agent uses this sequence of rewards and punishments to form a strategy for operating in its problem space.
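
As a toy example of a classifier as a function approximator (not a production spam filter), the sketch below learns per-word counts from a few labelled emails, hypothetical data included, and maps new text to "spam" or "ham" by a simple vote.

from collections import Counter

def train(examples):
    # Learn per-word spam vs. not-spam counts from labelled emails.
    counts = {"spam": Counter(), "ham": Counter()}
    for text, label in examples:
        counts[label].update(text.lower().split())
    return counts

def classify(counts, text):
    # Approximate the text -> {spam, ham} function: vote by which class saw the words more often.
    score = sum(counts["spam"][w] - counts["ham"][w] for w in text.lower().split())
    return "spam" if score > 0 else "ham"

examples = [("win a free prize now", "spam"), ("free offer click now", "spam"),
            ("meeting notes attached", "ham"), ("lunch tomorrow?", "ham")]
model = train(examples)
print(classify(model, "claim your free prize"))   # 'spam'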

Natural language processing[106] (NLP) gives machines the ability to read and understand human language. A sufficiently powerful natural language processing system would enable natural-language user interfaces and the acquisition of knowledge directly from human-written sources, such as newswire texts. Some straightforward applications of natural language processing include information retrieval, text mining, question answering[107] and machine translation.[108] Many current approaches use word co-occurrence frequencies to construct syntactic representations of text. “Keyword spotting” strategies for search are popular and scalable but dumb; a search query for “dog” might only match documents with the literal word “dog” and miss a document with the word “poodle”. “Lexical affinity” strategies use the occurrence of words such as “accident” to assess the sentiment of a document. Modern statistical NLP approaches can combine all these strategies as well as others, and often achieve acceptable accuracy at the page or paragraph level, but continue to lack the semantic understanding required to classify isolated sentences well. Besides the usual difficulties with encoding semantic commonsense knowledge, existing semantic NLP sometimes scales too poorly to be viable in business applications. Beyond semantic NLP, the ultimate goal of “narrative” NLP is to embody a full understanding of commonsense reasoning.[109]
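
The "keyword spotting" limitation can be shown in a few lines; the document list and the hand-built synonym table are invented for the example, and real systems use learned word representations rather than hand lists.

# Literal keyword spotting: only documents containing the exact query term match.
documents = ["adopt a poodle today", "dog training classes", "cat food sale"]

def keyword_search(query, docs):
    return [d for d in docs if query in d.split()]

print(keyword_search("dog", documents))        # ['dog training classes'] -- the poodle ad is missed

# One crude, hand-built fix: expand the query with known narrower terms.
synonyms = {"dog": {"dog", "poodle", "terrier"}}

def expanded_search(query, docs):
    terms = synonyms.get(query, {query})
    return [d for d in docs if terms & set(d.split())]

print(expanded_search("dog", documents))       # now the poodle ad matches as well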

Machine perception[110] is the ability to use input from sensors (such as cameras (visible spectrum or infrared), microphones, wireless signals, and active lidar, sonar, radar, and tactile sensors) to deduce aspects of the world. Applications include speech recognition,[111] facial recognition, and object recognition.[112] Computer vision is the ability to analyze visual input. Such input is usually ambiguous; a giant, fifty-meter-tall pedestrian far away may produce exactly the same pixels as a nearby normal-sized pedestrian, requiring the AI to judge the relative likelihood and reasonableness of different interpretations, for example by using its “object model” to assess that fifty-meter pedestrians do not exist.[113]

AI is heavily used in robotics.[114] Advanced robotic arms and other industrial robots, widely used in modern factories, can learn from experience how to move efficiently despite the presence of friction and gear slippage.[115] A modern mobile robot, when given a small, static, and visible environment, can easily determine its location and map its environment; however, dynamic environments, such as (in endoscopy) the interior of a patient’s breathing body, pose a greater challenge. Motion planning is the process of breaking down a movement task into “primitives” such as individual joint movements. Such movement often involves compliant motion, a process where movement requires maintaining physical contact with an object.[117][118] Moravec’s paradox generalizes that low-level sensorimotor skills that humans take for granted are, counterintuitively, difficult to program into a robot; the paradox is named after Hans Moravec, who stated in 1988 that “it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility”.[119][120] This is attributed to the fact that, unlike checkers, physical dexterity has been a direct target of natural selection for millions of years.[121]

Moravec’s paradox can be extended to many forms of social intelligence.[123][124] Distributed multi-agent coordination of autonomous vehicles remains a difficult problem.[125] Affective computing is an interdisciplinary umbrella that comprises systems which recognize, interpret, process, or simulate human affects. Moderate successes related to affective computing include textual sentiment analysis and, more recently, multimodal affect analysis (see multimodal sentiment analysis), wherein AI classifies the affects displayed by a videotaped subject.[129]

In the long run, social skills and an understanding of human emotion and game theory would be valuable to a social agent. Being able to predict the actions of others by understanding their motives and emotional states would allow an agent to make better decisions. Some computer systems mimic human emotion and expressions to appear more sensitive to the emotional dynamics of human interaction, or to otherwise facilitate human-computer interaction.[130] Similarly, some virtual assistants are programmed to speak conversationally or even to banter humorously; this tends to give naive users an unrealistic conception of how intelligent existing computer agents actually are.[131]

Historically, projects such as the Cyc knowledge base (1984) and the massive Japanese Fifth Generation Computer Systems initiative (1982-1992) attempted to cover the breadth of human cognition. These early projects failed to escape the limitations of non-quantitative symbolic logic models and, in retrospect, greatly underestimated the difficulty of cross-domain AI. Nowadays, the vast majority of AI researchers work instead on tractable “narrow AI” applications (such as medical diagnosis or automobile navigation).[132] Many researchers predict that such “narrow AI” work in different individual domains will eventually be incorporated into a machine with artificial general intelligence (AGI), combining most of the narrow skills mentioned in this article and at some point even exceeding human ability in most or all these areas.[17][133] Many advances have general, cross-domain significance. One high-profile example is that DeepMind in the 2010s developed a “generalized artificial intelligence” that could learn many diverse Atari games on its own, and later developed a variant of the system which succeeds at sequential learning.[134][135][136] Besides transfer learning,[137] hypothetical AGI breakthroughs could include the development of reflective architectures that can engage in decision-theoretic metareasoning, and figuring out how to “slurp up” a comprehensive knowledge base from the entire unstructured Web. Some argue that some kind of (currently-undiscovered) conceptually straightforward, but mathematically difficult, “Master Algorithm” could lead to AGI. Finally, a few “emergent” approaches look to simulating human intelligence extremely closely, and believe that anthropomorphic features like an artificial brain or simulated child development may someday reach a critical point where general intelligence emerges.[139][140]

Many of the problems in this article may also require general intelligence, if machines are to solve the problems as well as people do. For example, even specific straightforward tasks, like machine translation, require that a machine read and write in both languages (NLP), follow the author’s argument (reason), know what is being talked about (knowledge), and faithfully reproduce the author’s original intent (social intelligence). A problem like machine translation is considered “AI-complete”, because all of these problems need to be solved simultaneously in order to reach human-level machine performance.

There is no established unifying theory or paradigm that guides AI research. Researchers disagree about many issues.[141] A few of the longest-standing questions that have remained unanswered are these: should artificial intelligence simulate natural intelligence by studying psychology or neurobiology? Or is human biology as irrelevant to AI research as bird biology is to aeronautical engineering?[14] Can intelligent behavior be described using simple, elegant principles (such as logic or optimization)? Or does it necessarily require solving a large number of completely unrelated problems?[15]

In the 1940s and 1950s, a number of researchers explored the connection between neurobiology, information theory, and cybernetics. Some of them built machines that used electronic networks to exhibit rudimentary intelligence, such as W. Grey Walter’s turtles and the Johns Hopkins Beast. Many of these researchers gathered for meetings of the Teleological Society at Princeton University and the Ratio Club in England.[142] By 1960, this approach was largely abandoned, although elements of it would be revived in the 1980s.

When access to digital computers became possible in the middle 1950s, AI research began to explore the possibility that human intelligence could be reduced to symbol manipulation. The research was centered in three institutions: Carnegie Mellon University, Stanford and MIT, and as described below, each one developed its own style of research. John Haugeland named these symbolic approaches to AI “good old fashioned AI” or “GOFAI”.[143] During the 1960s, symbolic approaches had achieved great success at simulating high-level thinking in small demonstration programs. Approaches based on cybernetics or artificial neural networks were abandoned or pushed into the background.[144] Researchers in the 1960s and the 1970s were convinced that symbolic approaches would eventually succeed in creating a machine with artificial general intelligence and considered this the goal of their field.

Economist Herbert Simon and Allen Newell studied human problem-solving skills and attempted to formalize them, and their work laid the foundations of the field of artificial intelligence, as well as cognitive science, operations research and management science. Their research team used the results of psychological experiments to develop programs that simulated the techniques that people used to solve problems. This tradition, centered at Carnegie Mellon University would eventually culminate in the development of the Soar architecture in the middle 1980s.[145][146]

Unlike Simon and Newell, John McCarthy felt that machines did not need to simulate human thought, but should instead try to find the essence of abstract reasoning and problem-solving, regardless of whether people used the same algorithms.[14] His laboratory at Stanford (SAIL) focused on using formal logic to solve a wide variety of problems, including knowledge representation, planning and learning.[147] Logic was also the focus of the work at the University of Edinburgh and elsewhere in Europe which led to the development of the programming language Prolog and the science of logic programming.[148]

Researchers at MIT (such as Marvin Minsky and Seymour Papert)[149] found that solving difficult problems in vision and natural language processing required ad hoc solutions; they argued that there was no simple and general principle (like logic) that would capture all the aspects of intelligent behavior. Roger Schank described their “anti-logic” approaches as “scruffy” (as opposed to the “neat” paradigms at CMU and Stanford).[15] Commonsense knowledge bases (such as Doug Lenat’s Cyc) are an example of “scruffy” AI, since they must be built by hand, one complicated concept at a time.[150]

When computers with large memories became available around 1970, researchers from all three traditions began to build knowledge into AI applications.[151] This “knowledge revolution” led to the development and deployment of expert systems (introduced by Edward Feigenbaum), the first truly successful form of AI software.[37] The knowledge revolution was also driven by the realization that enormous amounts of knowledge would be required by many simple AI applications.

By the 1980s progress in symbolic AI seemed to stall and many believed that symbolic systems would never be able to imitate all the processes of human cognition, especially perception, robotics, learning and pattern recognition. A number of researchers began to look into “sub-symbolic” approaches to specific AI problems.[16] Sub-symbolic methods manage to approach intelligence without specific representations of knowledge.

This includes embodied, situated, behavior-based, and nouvelle AI. Researchers from the related field of robotics, such as Rodney Brooks, rejected symbolic AI and focused on the basic engineering problems that would allow robots to move and survive.[152] Their work revived the non-symbolic viewpoint of the early cybernetics researchers of the 1950s and reintroduced the use of control theory in AI. This coincided with the development of the embodied mind thesis in the related field of cognitive science: the idea that aspects of the body (such as movement, perception and visualization) are required for higher intelligence.

Within developmental robotics, developmental learning approaches are elaborated upon to allow robots to accumulate repertoires of novel skills through autonomous self-exploration, social interaction with human teachers, and the use of guidance mechanisms (active learning, maturation, motor synergies, etc.).[153][154]

Interest in neural networks and “connectionism” was revived by David Rumelhart and others in the middle of the 1980s.[157] Artificial neural networks are an example of soft computing — they are solutions to problems which cannot be solved with complete logical certainty, and where an approximate solution is often sufficient. Other soft computing approaches to AI include fuzzy systems, evolutionary computation and many statistical tools. The application of soft computing to AI is studied collectively by the emerging discipline of computational intelligence.[158]

Much of traditional GOFAI got bogged down on ad hoc patches to symbolic computation that worked on their own toy models but failed to generalize to real-world results. However, around the 1990s, AI researchers adopted sophisticated mathematical tools, such as hidden Markov models (HMM), information theory, and normative Bayesian decision theory to compare or to unify competing architectures. The shared mathematical language permitted a high level of collaboration with more established fields (like mathematics, economics or operations research).[d] Compared with GOFAI, new “statistical learning” techniques such as HMM and neural networks were gaining higher levels of accuracy in many practical domains such as data mining, without necessarily acquiring semantic understanding of the datasets. The increased successes with real-world data led to increasing emphasis on comparing different approaches against shared test data to see which approach performed best in a broader context than that provided by idiosyncratic toy models; AI research was becoming more scientific. Nowadays results of experiments are often rigorously measurable, and are sometimes (with difficulty) reproducible.[38][159] Different statistical learning techniques have different limitations; for example, basic HMM cannot model the infinite possible combinations of natural language. Critics note that the shift from GOFAI to statistical learning is often also a shift away from Explainable AI. In AGI research, some scholars caution against over-reliance on statistical learning, and argue that continuing research into GOFAI will still be necessary to attain general intelligence.
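To make one of these statistical tools concrete, the following is a minimal sketch of the forward algorithm for a hidden Markov model, which computes the likelihood of an observation sequence by summing over all hidden state paths. The two-state toy model and all of its probabilities are invented for illustration.

```python
# Minimal forward algorithm for a toy hidden Markov model.
# States, observations and probabilities are invented for this sketch.
states = ["Rainy", "Sunny"]
start_p = {"Rainy": 0.6, "Sunny": 0.4}
trans_p = {
    "Rainy": {"Rainy": 0.7, "Sunny": 0.3},
    "Sunny": {"Rainy": 0.4, "Sunny": 0.6},
}
emit_p = {
    "Rainy": {"walk": 0.1, "shop": 0.4, "clean": 0.5},
    "Sunny": {"walk": 0.6, "shop": 0.3, "clean": 0.1},
}

def forward(observations):
    """Return P(observations) by summing over all hidden state paths."""
    # alpha[s] = probability of the observations so far AND being in state s now
    alpha = {s: start_p[s] * emit_p[s][observations[0]] for s in states}
    for obs in observations[1:]:
        alpha = {
            s: sum(alpha[prev] * trans_p[prev][s] for prev in states) * emit_p[s][obs]
            for s in states
        }
    return sum(alpha.values())

print(forward(["walk", "shop", "clean"]))  # likelihood of the observation sequence
```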

AI has developed a large number of tools to solve the most difficult problems in computer science. A few of the most general of these methods are discussed below.

Many problems in AI can be solved in theory by intelligently searching through many possible solutions:[168] Reasoning can be reduced to performing a search. For example, logical proof can be viewed as searching for a path that leads from premises to conclusions, where each step is the application of an inference rule.[169] Planning algorithms search through trees of goals and subgoals, attempting to find a path to a target goal, a process called means-ends analysis.[170] Robotics algorithms for moving limbs and grasping objects use local searches in configuration space.[115] Many learning algorithms use search algorithms based on optimization.

Simple exhaustive searches[171] are rarely sufficient for most real-world problems: the search space (the number of places to search) quickly grows to astronomical numbers. The result is a search that is too slow or never completes. The solution, for many problems, is to use “heuristics” or “rules of thumb” that prioritize choices in favor of those that are more likely to reach a goal and to do so in a smaller number of steps. In some search methodologies heuristics can also serve to entirely eliminate some choices that are unlikely to lead to a goal (called “pruning the search tree”). Heuristics supply the program with a “best guess” for the path on which the solution lies.[172] In effect, heuristics restrict the search for solutions to a smaller portion of the search space.
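As a sketch of how a heuristic prioritizes and prunes choices, the following greedy best-first search always expands the node that looks closest to the goal and never revisits a node. The small graph and the “estimated distance to goal” values are invented for illustration.

```python
import heapq

# Toy graph and heuristic values, invented for illustration.
graph = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D", "E"],
    "D": ["G"],
    "E": ["G"],
    "G": [],
}
heuristic = {"A": 4, "B": 3, "C": 2, "D": 1, "E": 1, "G": 0}  # estimated cost to goal

def greedy_best_first(start, goal):
    """Expand the node whose heuristic value looks most promising first."""
    frontier = [(heuristic[start], start, [start])]
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:          # pruning: never revisit an already-expanded node
            continue
        visited.add(node)
        for nxt in graph[node]:
            heapq.heappush(frontier, (heuristic[nxt], nxt, path + [nxt]))
    return None

print(greedy_best_first("A", "G"))   # e.g. ['A', 'C', 'D', 'G']
```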

A very different kind of search came to prominence in the 1990s, based on the mathematical theory of optimization. For many problems, it is possible to begin the search with some form of a guess and then refine the guess incrementally until no more refinements can be made. These algorithms can be visualized as blind hill climbing: we begin the search at a random point on the landscape, and then, by jumps or steps, we keep moving our guess uphill, until we reach the top. Other optimization algorithms are simulated annealing, beam search and random optimization.[173]
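A minimal hill-climbing sketch on an invented one-dimensional landscape: start from a random guess and keep taking the best small step uphill until no step improves the objective.

```python
import random

def objective(x):
    # Invented landscape with a single peak near x = 3.
    return -(x - 3.0) ** 2 + 9.0

def hill_climb(step=0.01, iterations=10_000):
    x = random.uniform(-10.0, 10.0)           # random starting point on the landscape
    for _ in range(iterations):
        neighbours = [x + step, x - step]     # small jumps left and right
        best = max(neighbours, key=objective)
        if objective(best) <= objective(x):   # no uphill move left: we are at a peak
            break
        x = best
    return x

print(round(hill_climb(), 2))   # converges near 3.0
```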

Evolutionary computation uses a form of optimization search. For example, evolutionary algorithms may begin with a population of organisms (the guesses) and then allow them to mutate and recombine, selecting only the fittest to survive each generation (refining the guesses). Classic evolutionary algorithms include genetic algorithms, gene expression programming, and genetic programming.[174] Alternatively, distributed search processes can coordinate via swarm intelligence algorithms. Two popular swarm algorithms used in search are particle swarm optimization (inspired by bird flocking) and ant colony optimization (inspired by ant trails).[175][176]
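A minimal genetic-algorithm sketch along these lines: a population of bit strings is mutated and recombined, and the fittest half survives each generation. The fitness function (count the ones) and all parameters are invented for illustration.

```python
import random

TARGET_LEN = 20          # length of each bit-string "organism"
POP_SIZE = 30
GENERATIONS = 60
MUTATION_RATE = 0.02

def fitness(bits):
    return sum(bits)                      # toy objective: maximise the number of 1s

def crossover(a, b):
    cut = random.randint(1, TARGET_LEN - 1)
    return a[:cut] + b[cut:]              # recombine two parents at a random point

def mutate(bits):
    return [1 - b if random.random() < MUTATION_RATE else b for b in bits]

population = [[random.randint(0, 1) for _ in range(TARGET_LEN)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    # selection: keep the fitter half of the population as parents
    population.sort(key=fitness, reverse=True)
    parents = population[: POP_SIZE // 2]
    # breed a new generation by recombining and mutating the survivors
    population = [mutate(crossover(*random.sample(parents, 2))) for _ in range(POP_SIZE)]

print(max(fitness(p) for p in population))   # approaches 20 after a few dozen generations
```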

Logic[177] is used for knowledge representation and problem solving, but it can be applied to other problems as well. For example, the satplan algorithm uses logic for planning[178] and inductive logic programming is a method for learning.[179]

Several different forms of logic are used in AI research. Propositional logic[180] involves truth functions such as “or” and “not”. First-order logic[181] adds quantifiers and predicates, and can express facts about objects, their properties, and their relations with each other. Fuzzy set theory assigns a “degree of truth” (between 0 and 1) to vague statements such as “Alice is old” (or rich, or tall, or hungry) that are too linguistically imprecise to be completely true or false. Fuzzy logic is successfully used in control systems to allow experts to contribute vague rules such as “if you are close to the destination station and moving fast, increase the train’s brake pressure”; these vague rules can then be numerically refined within the system. Fuzzy logic fails to scale well in knowledge bases; many AI researchers question the validity of chaining fuzzy-logic inferences.[e][183][184]
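The train-braking rule above can be sketched as code: each vague condition gets a “degree of truth” between 0 and 1, the degrees are combined (here with a minimum for fuzzy AND), and the result is turned back into a numeric brake setting. The membership functions below are invented for illustration.

```python
def close_to_station(distance_m):
    """Degree of truth (0..1) that the train is 'close'; invented membership function."""
    return max(0.0, min(1.0, (500.0 - distance_m) / 500.0))

def moving_fast(speed_kmh):
    """Degree of truth (0..1) that the train is 'fast'; invented membership function."""
    return max(0.0, min(1.0, (speed_kmh - 20.0) / 80.0))

def brake_pressure(distance_m, speed_kmh):
    # Rule: IF close to the station AND moving fast THEN increase brake pressure.
    # Fuzzy AND is taken here as the minimum of the two degrees of truth.
    degree = min(close_to_station(distance_m), moving_fast(speed_kmh))
    return degree * 100.0   # defuzzify to a percentage of maximum braking

print(brake_pressure(distance_m=100, speed_kmh=90))    # high pressure: close and fast
print(brake_pressure(distance_m=2000, speed_kmh=90))   # zero: not close yet
```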

Default logics, non-monotonic logics and circumscription[91] are forms of logic designed to help with default reasoning and the qualification problem. Several extensions of logic have been designed to handle specific domains of knowledge, such as: description logics;[79] situation calculus, event calculus and fluent calculus (for representing events and time);[80] causal calculus;[81] belief calculus;[185] and modal logics.[82]

Overall, qualitative symbolic logic is brittle and scales poorly in the presence of noise or other uncertainty. Exceptions to rules are numerous, and it is difficult for logical systems to function in the presence of contradictory rules.[187]

Many problems in AI (in reasoning, planning, learning, perception, and robotics) require the agent to operate with incomplete or uncertain information. AI researchers have devised a number of powerful tools to solve these problems using methods from probability theory and economics.[188]

Bayesian networks[189] are a very general tool that can be used for a large number of problems: reasoning (using the Bayesian inference algorithm),[190] learning (using the expectation-maximization algorithm),[f][192] planning (using decision networks)[193] and perception (using dynamic Bayesian networks).[194] Probabilistic algorithms can also be used for filtering, prediction, smoothing and finding explanations for streams of data, helping perception systems to analyze processes that occur over time (e.g., hidden Markov models or Kalman filters).[194] Compared with symbolic logic, formal Bayesian inference is computationally expensive. For inference to be tractable, most observations must be conditionally independent of one another. Complicated graphs with diamonds or other “loops” (undirected cycles) can require a sophisticated method such as Markov Chain Monte Carlo, which spreads an ensemble of random walkers throughout the Bayesian network and attempts to converge to an assessment of the conditional probabilities. Bayesian networks are used on XBox Live to rate and match players; wins and losses are “evidence” of how good a player is. AdSense uses a Bayesian network with over 300 million edges to learn which ads to serve.
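To make Bayesian inference concrete, here is inference by enumeration on the classic rain/sprinkler/wet-grass network: the hidden variable is summed out to compute a posterior belief from evidence. The network structure is standard, but the probability values below are invented for illustration.

```python
from itertools import product

# Toy Bayesian network: Rain -> Sprinkler, and (Rain, Sprinkler) -> WetGrass.
# All probability values are invented for this sketch.
P_rain = {True: 0.2, False: 0.8}
P_sprinkler = {True: {True: 0.01, False: 0.99},     # P_sprinkler[rain][sprinkler]
               False: {True: 0.4, False: 0.6}}
P_wet = {                                           # P(WetGrass=True | sprinkler, rain)
    (True, True): 0.99, (True, False): 0.9,
    (False, True): 0.9, (False, False): 0.0,
}

def joint(rain, sprinkler, wet):
    p_w = P_wet[(sprinkler, rain)]
    return P_rain[rain] * P_sprinkler[rain][sprinkler] * (p_w if wet else 1 - p_w)

def prob_rain_given_wet_grass():
    # Inference by enumeration: sum the joint over the hidden variable (Sprinkler).
    numerator = sum(joint(True, s, True) for s in (True, False))
    evidence = sum(joint(r, s, True) for r, s in product((True, False), repeat=2))
    return numerator / evidence

print(round(prob_rain_given_wet_grass(), 3))   # posterior belief that it rained
```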

A key concept from the science of economics is “utility”: a measure of how valuable something is to an intelligent agent. Precise mathematical tools have been developed that analyze how an agent can make choices and plan, using decision theory, decision analysis,[196] and information value theory.[97] These tools include models such as Markov decision processes,[197] dynamic decision networks,[194] game theory and mechanism design.[198]
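As a sketch of how utility and Markov decision processes fit together, the following runs value iteration on a tiny invented MDP: repeated Bellman updates converge to the expected long-run utility of each state under the best policy. States, actions, transitions and rewards are all made up for illustration.

```python
# Value iteration on a tiny invented Markov decision process.
states = ["poor", "rich"]
actions = ["save", "spend"]
gamma = 0.9   # discount factor: how much future utility is worth today

# transitions[state][action] = list of (probability, next_state, reward)
transitions = {
    "poor": {
        "save":  [(0.7, "poor", 0.0), (0.3, "rich", 5.0)],
        "spend": [(1.0, "poor", 1.0)],
    },
    "rich": {
        "save":  [(1.0, "rich", 2.0)],
        "spend": [(0.5, "rich", 3.0), (0.5, "poor", 3.0)],
    },
}

V = {s: 0.0 for s in states}
for _ in range(200):   # repeated Bellman updates converge to the optimal utilities
    V = {
        s: max(
            sum(p * (r + gamma * V[s2]) for p, s2, r in transitions[s][a])
            for a in actions
        )
        for s in states
    }

print({s: round(v, 2) for s, v in V.items()})   # expected long-run utility of each state
```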

The simplest AI applications can be divided into two types: classifiers (“if shiny then diamond”) and controllers (“if shiny then pick up”). Controllers do, however, also classify conditions before inferring actions, and therefore classification forms a central part of many AI systems. Classifiers are functions that use pattern matching to determine a closest match. They can be tuned according to examples, making them very attractive for use in AI. These examples are known as observations or patterns. In supervised learning, each pattern belongs to a certain predefined class. A class can be seen as a decision that has to be made. All the observations combined with their class labels are known as a data set. When a new observation is received, that observation is classified based on previous experience.[199]

A classifier can be trained in various ways; there are many statistical and machine learning approaches. The decision tree[200] is perhaps the most widely used machine learning algorithm. Other widely used classifiers are the neural network,[202] k-nearest neighbor algorithm,[g][204] kernel methods such as the support vector machine (SVM),[h][206] Gaussian mixture model,[207] and the extremely popular naive Bayes classifier.[i][209] Classifier performance depends greatly on the characteristics of the data to be classified, such as the dataset size, the dimensionality, and the level of noise. Model-based classifiers perform well if the assumed model is an extremely good fit for the actual data. Otherwise, if no matching model is available, and if accuracy (rather than speed or scalability) is the sole concern, conventional wisdom is that discriminative classifiers (especially SVM) tend to be more accurate than model-based classifiers such as “naive Bayes” on most practical data sets.[210]
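To make the classification idea concrete, here is a minimal k-nearest-neighbor sketch: a new observation is labeled by majority vote among the closest labeled examples. The toy data set of labeled points is invented, echoing the “if shiny then diamond” example above.

```python
from collections import Counter
import math

# Toy labelled observations: (feature_1, feature_2) -> class label. Data is invented.
training_set = [
    ((1.0, 1.0), "diamond"), ((1.2, 0.8), "diamond"), ((0.9, 1.1), "diamond"),
    ((5.0, 5.0), "glass"),   ((5.2, 4.8), "glass"),   ((4.9, 5.3), "glass"),
]

def classify(point, k=3):
    """Label a new observation by majority vote among its k closest training examples."""
    by_distance = sorted(training_set, key=lambda item: math.dist(point, item[0]))
    votes = Counter(label for _, label in by_distance[:k])
    return votes.most_common(1)[0][0]

print(classify((1.1, 0.9)))   # -> 'diamond'
print(classify((4.7, 5.1)))   # -> 'glass'
```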

Neural networks, or neural nets, were inspired by the architecture of neurons in the human brain. A simple “neuron” N accepts input from multiple other neurons, each of which, when activated (or “fired”), casts a weighted “vote” for or against whether neuron N should itself activate. Learning requires an algorithm to adjust these weights based on the training data; one simple algorithm (dubbed “fire together, wire together”) is to increase the weight between two connected neurons when the activation of one triggers the successful activation of another. The net forms “concepts” that are distributed among a subnetwork of shared[j] neurons that tend to fire together; a concept meaning “leg” might be coupled with a subnetwork meaning “foot” that includes the sound for “foot”. Neurons have a continuous spectrum of activation; in addition, neurons can process inputs in a nonlinear way rather than weighing straightforward votes. Modern neural nets can learn both continuous functions and, surprisingly, digital logical operations. Neural networks’ early successes included predicting the stock market and (in 1995) a mostly self-driving car.[k] In the 2010s, advances in neural networks using deep learning thrust AI into widespread public consciousness and contributed to an enormous upshift in corporate AI spending; for example, AI-related M&A in 2017 was over 25 times as large as in 2015.[213][214]
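A minimal sketch of such a “voting” neuron with a Hebbian-style weight update (“fire together, wire together”): whenever the neuron fires, the connections from the inputs that were active at the time are strengthened. The input pattern, threshold and learning rate are invented for illustration.

```python
# Minimal "voting" neuron with a Hebbian-style weight update (illustrative values).
weights = [0.2, 0.2, 0.2]     # one weight per incoming neuron
threshold = 0.3
learning_rate = 0.1

def fires(inputs):
    """Neuron N activates if the weighted 'votes' of its inputs cross the threshold."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return total >= threshold

def hebbian_update(inputs):
    """'Fire together, wire together': strengthen the weights from inputs that were
    active whenever the neuron itself fired."""
    if fires(inputs):
        for i, x in enumerate(inputs):
            weights[i] += learning_rate * x

for _ in range(5):                    # repeatedly present the same activity pattern
    hebbian_update([1, 1, 0])         # the first two input neurons fire, the third is silent

print(weights)   # the co-active connections have grown; the silent one has not
```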

The study of non-learning artificial neural networks[202] began in the decade before the field of AI research was founded, in the work of Walter Pitts and Warren McCulloch. Frank Rosenblatt invented the perceptron, a learning network with a single layer, similar to the old concept of linear regression. Early pioneers also include Alexey Grigorevich Ivakhnenko, Teuvo Kohonen, Stephen Grossberg, Kunihiko Fukushima, Christoph von der Malsburg, David Willshaw, Shun-Ichi Amari, Bernard Widrow, John Hopfield, Eduardo R. Caianiello, and others.

The main categories of networks are acyclic or feedforward neural networks (where the signal passes in only one direction) and recurrent neural networks (which allow feedback and short-term memories of previous input events). Among the most popular feedforward networks are perceptrons, multi-layer perceptrons and radial basis networks.[215] Neural networks can be applied to the problem of intelligent control (for robotics) or learning, using such techniques as Hebbian learning (“fire together, wire together”), GMDH or competitive learning.[216]

Today, neural networks are often trained by the backpropagation algorithm, which had been around since 1970 as the reverse mode of automatic differentiation published by Seppo Linnainmaa,[217][218] and was introduced to neural networks by Paul Werbos.[219][220][221]
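A minimal backpropagation sketch: a one-hidden-layer network is trained on XOR by propagating the error derivative backwards through the layers and taking gradient-descent steps. The layer sizes, learning rate and iteration count are chosen arbitrarily for illustration.

```python
import numpy as np

# Tiny network (2 inputs -> 4 hidden units -> 1 output) trained on XOR by backpropagation.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4))
b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1))
b2 = np.zeros((1, 1))
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 0.5

for _ in range(20000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass: propagate the error derivative from the output layer to the hidden layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # gradient-descent updates of weights and biases
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(out.round(2).ravel())   # typically approaches [0, 1, 1, 0]
```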

Hierarchical temporal memory is an approach that models some of the structural and algorithmic properties of the neocortex.[222]

In short, most neural networks use some form of gradient descent on a hand-created neural topology. However, some research groups, such as Uber, argue that simple neuroevolution to mutate new neural network topologies and weights may be competitive with sophisticated gradient descent approaches. One advantage of neuroevolution is that it may be less prone to get caught in “dead ends”.[223]
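A minimal neuroevolution-flavored sketch along these lines: instead of following gradients, mutate the weights of a small fixed-topology network and keep the mutant only if it performs better (a simple (1+1) evolution strategy). Evolving topologies as well, as full neuroevolution systems do, is omitted here; the network size, task (fitting XOR) and mutation settings are invented for illustration.

```python
import random
import math

# (1+1) evolution strategy over the 9 weights of a fixed 2-2-1 network (no gradients).
DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]   # XOR, for illustration

def forward(weights, x):
    w = iter(weights)
    h = [math.tanh(next(w) * x[0] + next(w) * x[1] + next(w)) for _ in range(2)]
    return math.tanh(next(w) * h[0] + next(w) * h[1] + next(w))

def loss(weights):
    return sum((forward(weights, x) - y) ** 2 for x, y in DATA)

best = [random.uniform(-1, 1) for _ in range(9)]
for _ in range(5000):
    child = [w + random.gauss(0, 0.2) for w in best]   # mutate every weight slightly
    if loss(child) < loss(best):                       # keep the mutant only if it is fitter
        best = child

print(round(loss(best), 3))   # the error shrinks toward 0 as evolution proceeds
```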

Deep learning is any artificial neural network that can learn a long chain of causal links. For example, a feedforward network with six hidden layers can learn a seven-link causal chain (six hidden layers + output layer) and has a “credit assignment path” (CAP) depth of seven. Many deep learning systems need to be able to learn chains ten or more causal links in length.[224] Deep learning has transformed many important subfields of artificial intelligence, including computer vision, speech recognition, natural language processing and others.[225][226][224]

According to one overview,[227] the expression “Deep Learning” was introduced to the Machine Learning community by Rina Dechter in 1986[228] and gained traction after Igor Aizenberg and colleagues introduced it to Artificial Neural Networks in 2000.[229] The first functional Deep Learning networks were published by Alexey Grigorevich Ivakhnenko and V. G. Lapa in 1965.[230][page needed] These networks are trained one layer at a time. Ivakhnenko’s 1971 paper[231] describes the learning of a deep feedforward multilayer perceptron with eight layers, already much deeper than many later networks. In 2006, a publication by Geoffrey Hinton and Ruslan Salakhutdinov introduced another way of pre-training many-layered feedforward neural networks (FNNs) one layer at a time, treating each layer in turn as an unsupervised restricted Boltzmann machine, then using supervised backpropagation for fine-tuning. Similar to shallow artificial neural networks, deep neural networks can model complex non-linear relationships. Over the last few years, advances in both machine learning algorithms and computer hardware have led to more efficient methods for training deep neural networks that contain many layers of non-linear hidden units and a very large output layer.[233]

Deep learning often uses convolutional neural networks (CNNs), whose origins can be traced back to the Neocognitron introduced by Kunihiko Fukushima in 1980.[234] In 1989, Yann LeCun and colleagues applied backpropagation to such an architecture. In the early 2000s, in an industrial application CNNs already processed an estimated 10% to 20% of all the checks written in the US.[235] Since 2011, fast implementations of CNNs on GPUs have won many visual pattern recognition competitions.[224]
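To illustrate the convolution operation at the heart of a CNN, here is a direct (unoptimized) 2-D convolution of a small image with an edge-detecting kernel, as performed inside a single convolutional layer. The image and kernel values are invented for illustration.

```python
import numpy as np

def conv2d(image, kernel):
    """Slide the kernel over the image and take a dot product at each position
    (a 'valid' convolution with no padding, as used inside CNN layers)."""
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Toy 5x5 "image" with a bright vertical stripe, and a vertical-edge kernel (invented values).
image = np.array([
    [0, 0, 9, 0, 0],
    [0, 0, 9, 0, 0],
    [0, 0, 9, 0, 0],
    [0, 0, 9, 0, 0],
    [0, 0, 9, 0, 0],
], dtype=float)
edge_kernel = np.array([[1, 0, -1],
                        [1, 0, -1],
                        [1, 0, -1]], dtype=float)

print(conv2d(image, edge_kernel))   # strong responses on either side of the stripe
```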

CNNs with 12 convolutional layers were used in conjunction with reinforcement learning by Deepmind’s “AlphaGo Lee”, the program that beat a top Go champion in 2016.[236]

Early on, deep learning was also applied to sequence learning with recurrent neural networks (RNNs)[237] which are in theory Turing complete[238] and can run arbitrary programs to process arbitrary sequences of inputs. The depth of an RNN is unlimited and depends on the length of its input sequence; thus, an RNN is an example of deep learning.[224] RNNs can be trained by gradient descent[239][240][241] but suffer from the vanishing gradient problem.[225][242] In 1992, it was shown that unsupervised pre-training of a stack of recurrent neural networks can speed up subsequent supervised learning of deep sequential problems.[243]
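A minimal recurrent forward pass makes the depth argument concrete: the same weights are reused at every time step, so the effective depth of the computation grows with the length of the input sequence, which is also why long sequences aggravate the vanishing gradient problem. The sizes and random weights below are invented for illustration.

```python
import numpy as np

# One recurrent layer processing a sequence; the same weights are applied at every step,
# so the computation is as "deep" as the sequence is long. Values are invented.
rng = np.random.default_rng(0)
W_in = rng.normal(size=(3, 4))    # input (size 3)  -> hidden (size 4)
W_rec = rng.normal(size=(4, 4))   # hidden at t-1   -> hidden at t
h = np.zeros(4)                   # short-term memory of previous inputs

sequence = rng.normal(size=(10, 3))        # ten time steps of 3-dimensional input
for x_t in sequence:
    h = np.tanh(x_t @ W_in + h @ W_rec)    # new state depends on the input AND the old state

print(h.round(3))   # the final hidden state summarises the whole sequence
```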

Numerous researchers now use variants of a deep learning recurrent NN called the long short-term memory (LSTM) network published by Hochreiter & Schmidhuber in 1997.[244] LSTM is often trained by Connectionist Temporal Classification (CTC).[245] At Google, Microsoft and Baidu this approach has revolutionised speech recognition.[246][247][248] For example, in 2015, Google’s speech recognition experienced a dramatic performance jump of 49% through CTC-trained LSTM, which is now available through Google Voice to billions of smartphone users.[249] Google also used LSTM to improve machine translation,[250] Language Modeling[251] and Multilingual Language Processing.[252] LSTM combined with CNNs also improved automatic image captioning[253] and a plethora of other applications.

AI, like electricity or the steam engine, is a general purpose technology. There is no consensus on how to characterize which tasks AI tends to excel at.[254] While projects such as AlphaZero have succeeded in generating their own knowledge from scratch, many other machine learning projects require large training datasets.[255][256] Researcher Andrew Ng has suggested, as a “highly imperfect rule of thumb”, that “almost anything a typical human can do with less than one second of mental thought, we can probably now or in the near future automate using AI.”[257] Moravec’s paradox suggests that AI lags humans at many tasks that the human brain has specifically evolved to perform well.[121]

Games provide a well-publicized benchmark for assessing rates of progress. AlphaGo around 2016 brought the era of classical board-game benchmarks to a close. Games of imperfect knowledge provide new challenges to AI in the area of game theory.[258][259] E-sports such as StarCraft continue to provide additional public benchmarks.[260][261] There are many competitions and prizes, such as the Imagenet Challenge, to promote research in artificial intelligence. The main areas of competition include general machine intelligence, conversational behavior, data-mining, robotic cars, and robot soccer as well as conventional games.[citation needed]

The “imitation game” (an interpretation of the 1950 Turing test that assesses whether a computer can imitate a human) is nowadays considered too exploitable to be a meaningful benchmark.[262] A derivative of the Turing test is the Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA). As the name implies, this helps to determine that a user is an actual person and not a computer posing as a human. In contrast to the standard Turing test, CAPTCHA is administered by a machine and targeted to a human as opposed to being administered by a human and targeted to a machine. A computer asks a user to complete a simple test then generates a grade for that test. Computers are unable to solve the problem, so correct solutions are deemed to be the result of a person taking the test. A common type of CAPTCHA is the test that requires the typing of distorted letters, numbers or symbols that appear in an image undecipherable by a computer.
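The challenge-and-grade protocol described above can be sketched in a few lines: the machine issues a random challenge, the user answers, and the machine grades the answer. The rendering of the text as a distorted image (the part that is actually hard for computers) is deliberately omitted, and all names are invented for illustration.

```python
import secrets
import string

# Sketch of the CAPTCHA protocol only: issue a challenge, then grade the typed answer.
_pending = {}   # challenge_id -> expected answer

def issue_challenge():
    text = "".join(secrets.choice(string.ascii_uppercase + string.digits) for _ in range(6))
    challenge_id = secrets.token_hex(8)
    _pending[challenge_id] = text
    # In a real system, `text` would be rendered as a distorted image and shown to the user.
    return challenge_id, text

def grade(challenge_id, typed_answer):
    expected = _pending.pop(challenge_id, None)       # each challenge can be used only once
    return expected is not None and typed_answer.strip().upper() == expected

cid, shown_text = issue_challenge()
print(grade(cid, shown_text))    # True: a person who read the image typed it correctly
print(grade(cid, shown_text))    # False: the same challenge cannot be replayed
```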

Proposed “universal intelligence” tests aim to compare how well machines, humans, and even non-human animals perform on problem sets that are as generic as possible. At an extreme, the test suite can contain every possible problem, weighted by Kolmogorov complexity; unfortunately, these problem sets tend to be dominated by impoverished pattern-matching exercises where a tuned AI can easily exceed human performance levels.[264][265]

AI is relevant to any intellectual task. Modern artificial intelligence techniques are pervasive and are too numerous to list here. Frequently, when a technique reaches mainstream use, it is no longer considered artificial intelligence; this phenomenon is described as the AI effect.

High-profile examples of AI include autonomous vehicles (such as drones and self-driving cars), medical diagnosis, creating art (such as poetry), proving mathematical theorems, playing games (such as Chess or Go), search engines (such as Google search), online assistants (such as Siri), image recognition in photographs, spam filtering, prediction of judicial decisions[268] and targeting online advertisements.[269][270]

With social media sites overtaking TV as a source for news for young people and news organisations increasingly reliant on social media platforms for generating distribution,[271] major publishers now use artificial intelligence (AI) technology to post stories more effectively and generate higher volumes of traffic.[272]

Artificial intelligence is breaking into the healthcare industry by assisting doctors. According to Bloomberg Technology, Microsoft has developed AI to help doctors find the right treatments for cancer.[273] A great deal of research and drug development relates to cancer: there are more than 800 medicines and vaccines to treat it, and the sheer number of options makes it harder for doctors to choose the right drugs for their patients. Microsoft is working on a project to develop a machine called “Hanover”, whose goal is to memorize all the papers relevant to cancer and help predict which combinations of drugs will be most effective for each patient. One project currently under way targets myeloid leukemia, a fatal cancer whose treatment has not improved in decades. Another study reported that artificial intelligence was as good as trained doctors at identifying skin cancers.[274] A further study is using artificial intelligence to monitor multiple high-risk patients by asking each patient numerous questions based on data acquired from live doctor–patient interactions.[275]

According to CNN, a recent study by surgeons at the Children’s National Medical Center in Washington successfully demonstrated surgery with an autonomous robot. The team supervised the robot while it performed soft-tissue surgery, stitching together a pig’s bowel during open surgery, and doing so better than a human surgeon, the team claimed.[276] IBM has created its own artificial intelligence computer, the IBM Watson, which has beaten human intelligence (at some levels). Watson not only won at the game show Jeopardy! against former champions,[277] but was declared a hero after successfully diagnosing a woman who was suffering from leukemia.[278]

Advancements in AI have contributed to the growth of the automotive industry through the creation and evolution of self-driving vehicles. As of 2016, there were over 30 companies using AI in the creation of driverless cars. A few companies involved with AI include Tesla, Google, and Apple.[279]

Many components contribute to the functioning of self-driving cars. These vehicles incorporate systems such as braking, lane changing, collision prevention, navigation and mapping. Together, these systems, as well as high performance computers, are integrated into one complex vehicle.[280]

Recent developments in autonomous automobiles have made the innovation of self-driving trucks possible, though they are still in the testing phase. The UK government has passed legislation to begin testing of self-driving truck platoons in 2018.[281] Self-driving truck platoons are a fleet of self-driving trucks following the lead of one non-self-driving truck, so the truck platoons aren’t entirely autonomous yet. Meanwhile, Daimler, a German automobile corporation, is testing the Freightliner Inspiration, a semi-autonomous truck that will only be used on the highway.[282]

One main factor that influences the ability of a driverless automobile to function is mapping. In general, the vehicle would be pre-programmed with a map of the area being driven. This map would include data on the approximate heights of street lights and curbs in order for the vehicle to be aware of its surroundings. However, Google has been working on an algorithm with the purpose of eliminating the need for pre-programmed maps and instead creating a device that would be able to adjust to a variety of new surroundings.[283] Some self-driving cars are not equipped with steering wheels or brake pedals, so there has also been research focused on creating an algorithm that is capable of maintaining a safe environment for the passengers in the vehicle through awareness of speed and driving conditions.[284]

Another factor influencing the ability of a driverless automobile is the safety of the passenger. To make a driverless automobile, engineers must program it to handle high-risk situations, such as a potential head-on collision with pedestrians. The car’s main goal should be to make a decision that avoids hitting pedestrians while protecting the passengers in the car. But there is a possibility the car would need to make a decision that puts someone in danger; in other words, the car would need to decide whether to save the pedestrians or the passengers.[285] The programming of the car for these situations is crucial to a successful driverless automobile.

Financial institutions have long used artificial neural network systems to detect charges or claims outside of the norm, flagging these for human investigation. The use of AI in banking can be traced back to 1987, when Security Pacific National Bank in the US set up a fraud prevention task force to counter the unauthorised use of debit cards. Programs like Kasisto and Moneystream are using AI in financial services.

Banks use artificial intelligence systems today to organize operations, maintain book-keeping, invest in stocks, and manage properties. AI can react to changes overnight or when business is not taking place.[286] In August 2001, robots beat humans in a simulated financial trading competition.[287] AI has also reduced fraud and financial crimes by monitoring behavioral patterns of users for any abnormal changes or anomalies.[288]

The use of AI machines in the market in applications such as online trading and decision making has changed major economic theories.[289] For example, AI-based buying and selling platforms have changed the law of supply and demand in that it is now possible to easily estimate individualized demand and supply curves and thus individualized pricing. AI machines also reduce information asymmetry in the market, making markets more efficient while reducing the volume of trades. Furthermore, AI in the markets limits the consequences of behavior in the markets, again making markets more efficient. Other theories where AI has had an impact include rational choice, rational expectations, game theory, the Lewis turning point, portfolio optimization and counterfactual thinking.

In video games, artificial intelligence is routinely used to generate dynamic purposeful behavior in non-player characters (NPCs). In addition, well-understood AI techniques are routinely used for pathfinding. Some researchers consider NPC AI in games to be a “solved problem” for most production tasks. Games with more atypical AI include the AI director of Left 4 Dead (2008) and the neuroevolutionary training of platoons in Supreme Commander 2 (2010).[290][291]

Worldwide annual military spending on robotics rose from 5.1 billion USD in 2010 to 7.5 billion USD in 2015.[292][293] Military drones capable of autonomous action are widely considered a useful asset. In 2017, Vladimir Putin stated that “Whoever becomes the leader in (artificial intelligence) will become the ruler of the world”.[294][295] Many artificial intelligence researchers seek to distance themselves from military applications of AI.[296]

For financial statement audits, AI makes continuous auditing possible. AI tools can analyze many different sets of information immediately. The potential benefits are that overall audit risk will be reduced, the level of assurance will be increased and the duration of the audit will be shortened.[297]

A report by the Guardian newspaper in the UK in 2018 found that online gambling companies were using AI to predict the behavior of customers in order to target them with personalized promotions.[298] Developers of commercial AI platforms are also beginning to appeal more directly to casino operators, offering a range of existing and potential services to help them boost their profits and expand their customer base.[299]

Artificial Intelligence has inspired numerous creative applications including its usage to produce visual art. The exhibition “Thinking Machines: Art and Design in the Computer Age, 1959-1989” at MoMA[300] provides a good overview of the historical applications of AI for art, architecture, and design. Recent exhibitions showcasing the usage of AI to produce art include the Google-sponsored benefit and auction at the Gray Area Foundation in San Francisco, where artists experimented with the deepdream algorithm,[301] and the exhibition “Unhuman: Art in the Age of AI,” which took place in Los Angeles and Frankfurt in the fall of 2017.[302][303] In the spring of 2018, the Association for Computing Machinery dedicated a special magazine issue to the subject of computers and art, highlighting the role of machine learning in the arts.[304]

There are three philosophical questions related to AI:

Can a machine be intelligent? Can it “think”?

Follow this link:

Artificial intelligence – Wikipedia

Benefits & Risks of Artificial Intelligence – Future of Life …

Many AI researchers roll their eyes when seeing this headline: “Stephen Hawking warns that rise of robots may be disastrous for mankind.” And many have lost count of how many similar articles they’ve seen. Typically, these articles are accompanied by an evil-looking robot carrying a weapon, and they suggest we should worry about robots rising up and killing us because they’ve become conscious and/or evil. On a lighter note, such articles are actually rather impressive, because they succinctly summarize the scenario that AI researchers don’t worry about. That scenario combines as many as three separate misconceptions: concern about consciousness, evil, and robots.

If you drive down the road, you have a subjective experience of colors, sounds, etc. But does a self-driving car have a subjective experience? Does it feel like anything at all to be a self-driving car? Although this mystery of consciousness is interesting in its own right, it’s irrelevant to AI risk. If you get struck by a driverless car, it makes no difference to you whether it subjectively feels conscious. In the same way, what will affect us humans is what superintelligent AI does, not how it subjectively feels.

The fear of machines turning evil is another red herring. The real worry isn’t malevolence, but competence. A superintelligent AI is by definition very good at attaining its goals, whatever they may be, so we need to ensure that its goals are aligned with ours. Humans don’t generally hate ants, but we’re more intelligent than they are, so if we want to build a hydroelectric dam and there’s an anthill there, too bad for the ants. The beneficial-AI movement wants to avoid placing humanity in the position of those ants.

The consciousness misconception is related to the myth that machines can’t have goals. Machines can obviously have goals in the narrow sense of exhibiting goal-oriented behavior: the behavior of a heat-seeking missile is most economically explained as a goal to hit a target. If you feel threatened by a machine whose goals are misaligned with yours, then it is precisely its goals in this narrow sense that trouble you, not whether the machine is conscious and experiences a sense of purpose. If that heat-seeking missile were chasing you, you probably wouldn’t exclaim: “I’m not worried, because machines can’t have goals!”

I sympathize with Rodney Brooks and other robotics pioneers who feel unfairly demonized by scaremongering tabloids, because some journalists seem obsessively fixated on robots and adorn many of their articles with evil-looking metal monsters with red shiny eyes. In fact, the main concern of the beneficial-AI movement isn’t with robots but with intelligence itself: specifically, intelligence whose goals are misaligned with ours. To cause us trouble, such misaligned superhuman intelligence needs no robotic body, merely an internet connection; this may enable outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Even if building robots were physically impossible, a super-intelligent and super-wealthy AI could easily pay or manipulate many humans to unwittingly do its bidding.

The robot misconception is related to the myth that machines can’t control humans. Intelligence enables control: humans control tigers not because we are stronger, but because we are smarter. This means that if we cede our position as smartest on our planet, it’s possible that we might also cede control.

View original post here:

Benefits & Risks of Artificial Intelligence – Future of Life …

What is AI (artificial intelligence)? – Definition from …

AI (artificial intelligence) is the simulation of human intelligence processes by machines, especially computer systems. These processes include learning (the acquisition of information and rules for using the information), reasoning (using the rules to reach approximate or definite conclusions) and self-correction. Particular applications of AI include expert systems, speech recognition and machine vision.

The term AI was coined by John McCarthy, an American computer scientist, in 1956 at the Dartmouth Conference, where the discipline was born. Today, it is an umbrella term that encompasses everything from robotic process automation to actual robotics. It has gained prominence recently due, in part, to big data, or the increase in speed, size and variety of data businesses are now collecting. AI can perform tasks such as identifying patterns in the data more efficiently than humans, enabling businesses to gain more insight out of their data.

AI can be categorized in any number of ways, but here are two examples.

The first classifies AI systems as either weak AI or strong AI. Weak AI, also known as narrow AI, is an AI system that is designed and trained for a particular task. Virtual personal assistants, such as Apple’s Siri, are a form of weak AI.

Strong AI, also known as artificial general intelligence, is an AI system with generalized human cognitive abilities so that when presented with an unfamiliar task, it has enough intelligence to find a solution. The Turing Test, developed by mathematician Alan Turing in 1950, is a method used to determine if a computer can actually think like a human, although the method is controversial.

The second example is from Arend Hintze, an assistant professor of integrative biology and computer science and engineering at Michigan State University. He categorizes AI into four types, from the kind of AI systems that exist today to sentient systems, which do not yet exist. His categories are as follows:

Here is the original post:

What is AI (artificial intelligence)? – Definition from …

What is Artificial Intelligence (AI)? – Definition from …

Artificial intelligence is a branch of computer science that aims to create intelligent machines. It has become an essential part of the technology industry.

Research associated with artificial intelligence is highly technical and specialized. The core problems of artificial intelligence include programming computers for certain traits such as:

Knowledge engineering is a core part of AI research. Machines can often act and react like humans only if they have abundant information relating to the world. Artificial intelligence must have access to objects, categories, properties and relations between all of them to implement knowledge engineering. Initiating common sense, reasoning and problem-solving power in machines is a difficult and tedious task.

Machine learning is also a core part of AI. Learning without any kind of supervision requires an ability to identify patterns in streams of inputs, whereas learning with adequate supervision involves classification and numerical regressions. Classification determines the category an object belongs to and regression deals with obtaining a set of numerical input or output examples, thereby discovering functions enabling the generation of suitable outputs from respective inputs. Mathematical analysis of machine learning algorithms and their performance is a well-defined branch of theoretical computer science often referred to as computational learning theory.

Machine perception deals with the capability to use sensory inputs to deduce the different aspects of the world, while computer vision is the power to analyze visual inputs with a few sub-problems such as facial, object and gesture recognition.

Robotics is also a major field related to AI. Robots require intelligence to handle tasks such as object manipulation and navigation, along with sub-problems of localization, motion planning and mapping.

Read the original post:

What is Artificial Intelligence (AI)? – Definition from …

Artificial Intelligence research at Microsoft

At Microsoft, researchers in artificial intelligence are harnessing the explosion of digital data and computational power with advanced algorithms to enable collaborative and natural interactions between people and machines that extend the human ability to sense, learn and understand. The research infuses computers, materials and systems with the ability to reason, communicate and perform with humanlike skill and agility.

Microsoft’s deep investments in the field are advancing the state of the art in machine intelligence and perception, enabling computers that understand what they see, communicate in natural language, answer complex questions and interact with their environment. In addition, the company’s researchers are thought leaders on the ethics and societal impacts of intelligent technologies. The research, tools and services that result from this investment are woven into existing and new products and, at the same time, made open and accessible to the broader community in a bid to accelerate innovation, democratize AI and solve the world’s most pressing challenges.

View original post here:

Artificial Intelligence research at Microsoft

A.I. Artificial Intelligence (2001) – IMDb

Storyline

In the not-so-far future, the polar ice caps have melted and the resulting rise of the ocean waters has drowned all the coastal cities of the world. Withdrawn to the interior of the continents, the human race keeps advancing, reaching the point of creating realistic robots (called mechas) to serve them. One of the mecha-producing companies builds David, an artificial kid who is the first to have real feelings, especially a never-ending love for his “mother”, Monica. Monica is the woman who adopted him as a substitute for her real son, who remains in cryo-stasis, stricken by an incurable disease. David is living happily with Monica and her husband, but when their real son returns home after a cure is discovered, his life changes dramatically. Written by Chris Makrozahopoulos

Budget:$100,000,000 (estimated)

Opening Weekend USA: $29,352,630, 1 July 2001, Wide Release

Gross USA: $78,616,689, 23 September 2001

Cumulative Worldwide Gross: $235,927,000

Runtime: 146 min

Aspect Ratio: 1.85 : 1

Visit link:

A.I. Artificial Intelligence (2001) – IMDb

