Eugenics: Definition, Movement & Meaning – HISTORY


Eugenics is the practice or advocacy of improving the human species by selectively mating people with specific desirable hereditary traits. It aims to reduce human suffering by breeding out disease, disabilities and so-called undesirable characteristics from the human population. Early supporters of eugenics believed people inherited mental illness, criminal tendencies and even poverty, and that these conditions could be bred out of the gene pool.

Historically, eugenics encouraged people of so-called healthy, superior stock to reproduce and discouraged reproduction of the physically or mentally challenged, or anyone who fell outside the social norm. Eugenics was popular in America during much of the first half of the twentieth century, yet it earned its negative association mainly from Adolf Hitler and his obsessive attempts to create an advanced Aryan race.

Modern eugenics, more often called human genetic engineering, has come a long way, scientifically and ethically, and offers hope for treating many devastating genetic illnesses. Even so, it remains controversial.

Eugenics literally means "good creation." The ancient Greek philosopher Plato may have been the first person to promote the idea, although the term eugenics didn't come on the scene until British scholar Sir Francis Galton coined it in 1883 in his book, Inquiries into Human Faculty and Its Development.

In one of Plato's best-known literary works, The Republic, he wrote about creating a superior society by encouraging procreation among high-class people and discouraging coupling between the lower classes. He also suggested a variety of mating rules to help create an optimal society.

For instance, men should only have relations with a woman when arranged by their ruler, and incestuous relationships between parents and children were forbidden, but not between brother and sister. While Plato's ideas may be considered a form of ancient eugenics, he received little credit from Galton.

In the late 19th century, Galton, whose cousin was Charles Darwin, hoped to better humankind through the propagation of the British elite. His plan never really took hold in his own country, but in America it was more widely embraced.

Eugenics made its first official appearance in American history through marriage laws. In 1896, Connecticut made it illegal for people with epilepsy or those deemed "feeble-minded" to marry. In 1903, the American Breeders Association was created to study eugenics.

John Harvey Kellogg, of Kellogg's cereal fame, organized the Race Betterment Foundation in 1911 and established a pedigree registry. The foundation hosted national conferences on eugenics in 1914, 1915 and 1928.

As the concept of eugenics took hold, prominent citizens, scientists and socialists championed the cause and established the Eugenics Record Office. The office tracked families and their genetic traits, claiming most people considered unfit were immigrants, minorities or poor.

The Eugenics Record Office also maintained there was clear evidence that supposed negative family traits were caused by bad genes, not racism, economics or the social views of the time.

Eugenics in America took a dark turn in the early 20th century, led by California. From 1909 to 1979, around 20,000 sterilizations occurred in California state mental institutions under the guise of protecting society from the offspring of people with mental illness.

Many sterilizations were forced and performed on minorities. Thirty-three states would eventually allow involuntary sterilization on whomever lawmakers deemed unworthy to procreate.

In 1927, the U.S. Supreme Court ruled in Buck v. Bell that forced sterilization of the handicapped does not violate the U.S. Constitution. In the words of Supreme Court Justice Oliver Wendell Holmes, "three generations of imbeciles are enough." The Court's 1942 Skinner v. Oklahoma decision later curtailed punitive sterilization, but not before thousands of people underwent the procedure.


In the 1930s, the acting governor of Puerto Rico, Rafael Menendez Ramos, implemented sterilization programs for Puerto Rican women. Ramos claimed the action was needed to battle rampant poverty and economic strife; however, it may have also been a way to prevent the so-called superior Aryan gene pool from becoming tainted with Latino blood.

According to a 1976 Government Accountability Office investigation, between 25 and 50 percent of Native American women were sterilized between 1970 and 1976. It's thought some sterilizations happened without consent during other surgical procedures such as an appendectomy.

In some cases, health care for living children was denied unless their mothers agreed to sterilization.

As horrific as forced sterilization in America was, nothing compared to Adolf Hitler's eugenic experiments before and during World War II. And Hitler didn't come up with the concept of a superior Aryan race all on his own. In fact, he referred to American eugenics in his book Mein Kampf, first published in 1925.

In Mein Kampf, Hitler declared non-Aryan races such as Jews and Romani as inferior. He believed Germans should do everything possible, including genocide, to make sure their gene pool stayed pure. And in 1933, the Nazis created the Law for the Prevention of Hereditarily Diseased Offspring, which resulted in thousands of forced sterilizations.

By 1940, Hitler's master-race mania took a terrible turn as hundreds of thousands of Germans with mental or physical disabilities were killed by gas or lethal injection.

During World War II, concentration camp prisoners endured horrific medical tests under the guise of helping Hitler create the perfect race. Josef Mengele, an SS doctor at Auschwitz, oversaw many experiments on both adult and child twins.

He used chemical eyedrops to try to create blue eyes, injected prisoners with devastating diseases and performed surgery without anesthesia. Many of his patients died or suffered permanent disability, and his gruesome experiments earned him the nickname "Angel of Death."

In all, it's estimated eleven million people died during the Holocaust, most of them because they didn't fit Hitler's definition of a superior race.

Thanks to the atrocities of Hitler and the Nazis, eugenics lost momentum after World War II, although forced sterilizations still happened. But as medical technology advanced, a new form of eugenics came on the scene.

Modern eugenics, better known as human genetic engineering, changes or removes genes to prevent disease, cure disease or improve the body in some significant way. The potential health benefits of human gene therapy are impressive, since many devastating or life-threatening illnesses could be cured.

But modern genetic engineering also comes with a potential cost: As technology advances, people could routinely weed out what they consider undesirable traits in their offspring. Genetic testing already allows parents to identify some diseases in their child in utero, which may cause them to terminate the pregnancy.

This is controversial, since what exactly constitutes negative traits is open to interpretation, and many people feel that all humans have the right to be born regardless of disease, or that the laws of nature shouldn't be tampered with.

Much of America's historical eugenics efforts, such as forced sterilizations, have gone unpunished, although some states offered reparations to victims or their survivors. For the most part, though, it's a largely unknown stain on America's history. And no amount of money can ever repair the devastation of Hitler's eugenics programs.

As scientists embark on a new eugenics frontier, past failings can serve as a warning to approach modern genetic research with care and compassion.


Go here to read the rest:

Eugenics: Definition, Movement & Meaning - HISTORY

Post Definition & Meaning – Merriam-Webster

post (noun)

especially : a pole that marks the starting or finishing point of a horse race
: a football passing play in which the receiver runs downfield before turning towards the middle of the field
: the metal stem of a pierced earring
: a metallic fitting attached to an electrical device (such as a storage battery) for convenience in making connections

post (transitive verb)

: to publish, announce, or advertise by or as if by use of a placard
: to denounce by public notice
: to enter on a public listing
: to forbid (property) to trespassers under penalty of legal prosecution by notices placed along the boundaries
: to publish (something, such as a message) in an online forum (such as an electronic message board)

post (noun)

: something (such as a message) that is published online
chiefly British : a single dispatch of mail; also : the mail handled
archaic : one of a series of stations for keeping horses for relays

post (transitive verb)

: to transfer or carry from a book of original entry to a ledger
: to make transfer entries in
archaic : to dispatch in haste

post (intransitive verb)

: to rise from the saddle and return to it in rhythm with a horse's trot
: to travel with post-horses

post (noun)

especially : a sentry's beat or station
: a station or task to which one is assigned
: a local subdivision of a veterans' organization
: one of two bugle calls sounded (as in the British army) at tattoo
: an office or position to which a person is appointed; also : the offensive position of a player occupying the post
: a trading station on the floor of a stock exchange

post (transitive verb)

chiefly British : to assign to a unit, position, or location (as in the military or civil service)


Read the original:

Post Definition & Meaning - Merriam-Webster

Offshore drilling – Wikipedia

Mechanical process where a wellbore is drilled below the seabed

Offshore drilling is a mechanical process where a wellbore is drilled below the seabed. It is typically carried out in order to explore for and subsequently extract petroleum that lies in rock formations beneath the seabed. Most commonly, the term is used to describe drilling activities on the continental shelf, though the term can also be applied to drilling in lakes, inshore waters and inland seas.

Offshore drilling presents environmental challenges, both offshore and onshore from the produced hydrocarbons and the materials used during the drilling operation. Controversies include the ongoing US offshore drilling debate.[1]

There are many different types of facilities from which offshore drilling operations take place. These include bottom-founded drilling rigs (jackup barges and swamp barges), combined drilling and production facilities (either bottom-founded or floating platforms), and deepwater mobile offshore drilling units (MODUs), including semi-submersibles and drillships. These are capable of operating in water depths up to 3,000 metres (9,800 ft). In shallower waters the mobile units are anchored to the seabed; however, in water deeper than 1,500 metres (4,900 ft) the semi-submersibles and drillships are maintained at the required drilling location using dynamic positioning.

Around 1891, the first submerged oil wells were drilled from platforms built on piles in the fresh waters of the Grand Lake St. Marys in Ohio. The wells were developed by small local companies such as Bryson, Riley Oil, German-American and Banker's Oil.[2]

Around 1896, the first submerged oil wells in salt water were drilled in the portion of the Summerland field extending under the Santa Barbara Channel in California. The wells were drilled from piers extending from land out into the channel.[3][4]

Other notable early submerged drilling activities occurred on the Canadian side of Lake Erie in the 1900s and Caddo Lake in Louisiana in the 1910s. Shortly thereafter wells were drilled in tidal zones along the Texas and Louisiana gulf coast. The Goose Creek Oil Field near Baytown, Texas is one such example. In the 1920s drilling activities occurred from concrete platforms in Venezuela's Lake Maracaibo.[5]

One of the oldest subsea wells is the Bibi Eibat well, which came on stream in 1923 in Azerbaijan.[6] The well was located on an artificial island in a shallow portion of the Caspian Sea. In the early 1930s, the Texas Company developed the first mobile steel barges for drilling in the brackish coastal areas of the Gulf of Mexico.

In 1937, Pure Oil and its partner Superior Oil used a fixed platform to develop a field 1 mile (1.6 km) offshore of Calcasieu Parish, Louisiana in 14 feet (4.3 m) of water.

In 1938, Humble Oil built a mile-long wooden trestle with railway tracks into the sea at McFadden Beach on the Gulf of Mexico, placing a derrick at its end - this was later destroyed by a hurricane.[7]

In 1945, concern for American control of its offshore oil reserves caused President Harry Truman to issue an Executive Order unilaterally extending American territory to the edge of its continental shelf, an act that effectively ended the 3-mile limit "freedom of the seas" regime.[8]

In 1946, Magnolia drilled at a site 18 miles (29 km) off the coast, erecting a platform in 18 feet (5.5 m) of water off St. Mary Parish, Louisiana.[9]

In early 1947, Superior Oil erected a drilling and production platform in 20 feet (6.1 m) of water some 18 miles (29 km) off Vermilion Parish, La. But it was Kerr-McGee, as operator for partners Phillips Petroleum and Stanolind Oil & Gas, that completed its historic Ship Shoal Block 32 well in October 1947, months before Superior actually drilled a discovery from their Vermilion platform farther offshore. In any case, that made Kerr-McGee's well the first oil discovery drilled out of sight of land.[10]

When offshore drilling moved into deeper waters of up to 30 metres (98 ft), fixed platform rigs were built. Once drilling equipment was needed at depths of 100 feet (30 m) to 120 metres (390 ft) in the Gulf of Mexico, the first jack-up rigs began appearing from specialized offshore drilling contractors.[11]

The first semi-submersible resulted from an unexpected observation in 1961.[12] Blue Water Drilling Company owned and operated the four-column submersible Blue Water Rig No. 1 in the Gulf of Mexico for Shell Oil Company. As the pontoons were not sufficiently buoyant to support the weight of the rig and its consumables, it was towed between locations at a draught midway between the top of the pontoons and the underside of the deck. It was noticed that the motions at this draught were very small, and Blue Water Drilling and Shell jointly decided to try operating the rig in the floating mode. The concept of an anchored, stable floating deep-sea platform had been designed and tested back in the 1920s by Edward Robert Armstrong for the purpose of operating aircraft, with an invention known as the 'seadrome'. The first purpose-built drilling semi-submersible, Ocean Driller, was launched in 1963 by ODECO. Since then, many semi-submersibles have been purpose-designed for the drilling industry's mobile offshore fleet.

The first offshore drillship was the CUSS 1 developed for the Mohole project to drill into the Earth's crust.[13]

As of June 2010, there were over 620 mobile offshore drilling rigs (jackups, semisubs, drillships, barges, etc.) available for service in the worldwide offshore rig fleet.[14]

One of the world's deepest hubs is currently the Perdido in the Gulf of Mexico, floating in 2,438 meters (7,999 ft) of water. It is operated by Royal Dutch Shell and was built at a cost of $3 billion.[15] The deepest operational platform is the Petrobras America Cascade FPSO in the Walker Ridge 249 field, in 2,600 meters (8,500 ft) of water.[16]


Offshore oil and gas production is more challenging than land-based installations due to the remote and harsher environment. Much of the innovation in the offshore petroleum sector concerns overcoming these challenges, including the need to provide very large production facilities. Production and drilling facilities may be very large and represent a major investment, such as the Troll A platform standing in a depth of 300 meters (980 ft).[20]

Another type of offshore platform may float with a mooring system to maintain it on location. While a floating system may be lower cost in deeper waters than a fixed platform, the dynamic nature of the platforms introduces many challenges for the drilling and production facilities.

The ocean can add several thousand meters or more to the fluid column. The addition increases the equivalent circulating density and downhole pressures in drilling wells, as well as the energy needed to lift produced fluids for separation on the platform.
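The pressure contribution of that extra fluid column follows directly from the hydrostatic relation P = ρgh. A minimal sketch of the arithmetic, where the densities and depths are illustrative assumptions rather than figures from any particular well:

```python
# Hydrostatic pressure added by a fluid column: P = density * g * height.
# All numeric values here are illustrative assumptions, not field data.

def hydrostatic_pressure_pa(density_kg_m3, depth_m, g=9.81):
    """Pressure (in pascals) exerted by a fluid column of given density and height."""
    return density_kg_m3 * g * depth_m

# A seawater column (~1,025 kg/m^3) at 2,000 m water depth:
seawater_pa = hydrostatic_pressure_pa(1025, 2000)

# Drilling mud (~1,200 kg/m^3) filling a further 3,000 m of wellbore:
mud_pa = hydrostatic_pressure_pa(1200, 3000)

total_mpa = (seawater_pa + mud_pa) / 1e6
print(f"approximate downhole pressure: {total_mpa:.1f} MPa")
```

The seawater head is why both the equivalent circulating density seen by the formation and the energy needed to lift produced fluids grow with water depth.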

The trend today is to conduct more of the production operations subsea, by separating water from oil and re-injecting it rather than pumping it up to a platform, or by flowing to onshore, with no installations visible above the sea. Subsea installations help to exploit resources at progressively deeper waters (locations which had previously been inaccessible) and overcome challenges posed by sea ice, such as in the Barents Sea. One such challenge in shallower environments is seabed gouging by drifting ice features (means of protecting offshore installations against ice action include burial in the seabed).

Offshore manned facilities also present logistics and human resources challenges. An offshore oil platform is a small community in itself with cafeteria, sleeping quarters, management and other support functions. In the North Sea, staff members are transported by helicopter for a two-week shift. They usually receive higher salary than onshore workers do. Supplies and waste are transported by ship, and the supply deliveries need to be carefully planned because storage space on the platform is limited. Today, much effort goes into relocating as many of the personnel as possible onshore, where management and technical experts are in touch with the platform by video conferencing. An onshore job is also more attractive for the aging workforce in the petroleum industry, at least in the western world. These efforts among others are contained in the established term integrated operations. The increased use of subsea facilities helps achieve the objective of keeping more workers onshore. Subsea facilities are also easier to expand, with new separators or different modules for different oil types, and are not limited by the fixed floor space of an above-water installation.

Offshore oil production involves environmental risks, most notably oil spills from oil tankers or pipelines transporting oil from the platform to onshore facilities, and from leaks and accidents on the platform (e.g. Deepwater Horizon oil spill and Ixtoc I oil spill).[21] Produced water is also generated, which is water brought to the surface along with the oil and gas; it is usually highly saline and may include dissolved or unseparated hydrocarbons.

Read more from the original source:

Offshore drilling - Wikipedia

W&T OFFSHORE INC : Entry into a Material Definitive Agreement, Termination of a Material Definitive Agreement, Creation of a Direct Financial…

W&T OFFSHORE INC : Entry into a Material Definitive Agreement, Termination of a Material Definitive Agreement, Creation of a Direct Financial Obligation or an Obligation under an Off-Balance Sheet Arrangement of a Registrant, Regulation FD Disclosure – Marketscreener.com

Read more:

W&T OFFSHORE INC : Entry into a Material Definitive Agreement, Termination of a Material Definitive Agreement, Creation of a Direct Financial...

Artificial Intelligence What it is and why it matters | SAS

The term "artificial intelligence" was coined in 1956, but AI has become more popular today thanks to increased data volumes, advanced algorithms, and improvements in computing power and storage.

Early AI research in the 1950s explored topics like problem solving and symbolic methods. In the 1960s, the US Department of Defense took interest in this type of work and began training computers to mimic basic human reasoning. For example, the Defense Advanced Research Projects Agency (DARPA) completed street mapping projects in the 1970s. And DARPA produced intelligent personal assistants in 2003, long before Siri, Alexa or Cortana were household names.

This early work paved the way for the automation and formal reasoning that we see in computers today, including decision support systems and smart search systems that can be designed to complement and augment human abilities.

While Hollywood movies and science fiction novels depict AI as human-like robots that take over the world, the current evolution of AI technologies isn't that scary or quite that smart. Instead, AI has evolved to provide many specific benefits in every industry. Keep reading for modern examples of artificial intelligence in health care, retail and more.

Why is artificial intelligence important?

Original post:

Artificial Intelligence What it is and why it matters | SAS

Pinterest uses AI and your camera to recommend pins – Engadget

But the idea of Lens doesn't stop at shopping. For example, that picture of a table could lead to a bunch of room decor ideas. Or you can take a photo of a pomegranate, for example, and it'll spit out recipes that use pomegranate as a main ingredient. A picture of a sweater could lead to different styles of it and how to wear it. Basically, think of Lens as a way to search for something when you just don't have the words to describe what it is you're looking at.

Of course, the technology is imperfect. Not all of us take crystal clear photos on our phones, and blurry and awkward shots will probably churn out the wrong results. That's why Pinterest says Lens is still in beta, and is considered somewhat experimental technology.

Pinterest launched a couple of other visual discovery features today as well. One is called Shop The Look, which uses object recognition to automatically detect and search for items in a photo. So a picture of a living room might prompt Pinterest to bring up a list of Buyable pins for the couch, the lamp, the table and the rug. The pins won't be for that brand of furniture specifically of course, but just items that look very similar.

Pinterest says that Shop The Look will also give you styling and decor ideas too. So far, the company has partnered with folks like Curalate, Olapic, Project September, Refinery 29 and ShopStyle to curate the looks. Brands and retailers that are on board include CB2, Macy's, Target, Neiman Marcus and Wayfair.

Last but not least, Pinterest also rolled out Instant Ideas, which is represented by a tiny circle at the bottom right of a pin. Tap it and you'll see a list of related ideas. The more you tap the pins you're interested in, the more customized your recommendations will be over time.

All of these features are live on Android and iOS starting today. They're available in the US for now, with more countries to be announced at a later date.

Read more:

Pinterest uses AI and your camera to recommend pins - Engadget

New Study Attempts to Improve Hate Speech Detection Algorithms – Unite.AI

Social media companies, especially Twitter, have long faced criticism for how they flag speech and decide which accounts to ban. The underlying problem almost always has to do with the algorithms that they use to monitor online posts. Artificial intelligence systems are far from perfect when it comes to this task, but there is work constantly being done to improve them.

Included in that work is a new study coming out of the University of Southern California that attempts to reduce certain errors that could result in racial bias.

One of the issues that doesn't receive as much attention has to do with algorithms that are meant to stop the spread of hateful speech but actually amplify racial bias. This happens when the algorithms fail to recognize context and end up flagging or blocking tweets from minority groups.

The biggest problem with the algorithms in regard to context is that they are oversensitive to certain group-identifying terms like "black," "gay," and "transgender." The algorithms treat these terms as hate speech classifiers, but they are often used by members of those groups, and the setting is important.

In an attempt to resolve this issue of context blindness, the researchers created a more context-sensitive hate speech classifier. The new algorithm is less likely to mislabel a post as hate speech.

The researchers developed the new algorithms with two new factors in mind: the context in regard to the group identifiers, and whether there are also other features of hate speech present in the post, like dehumanizing language.
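The study's classifier is a trained neural model; purely as a toy illustration of the two factors described above (a group identifier plus co-occurring hateful features), here is a rule-based sketch. The word lists, function name and flagging rule are invented for illustration and are not the researchers' method:

```python
import re

# Invented sample lexicons, for illustration only.
GROUP_IDENTIFIERS = {"black", "gay", "transgender"}
DEHUMANIZING_TERMS = {"vermin", "subhuman", "infest"}

def flag_hate_speech(text):
    """Flag text only when a group identifier co-occurs with dehumanizing
    language; a group identifier on its own is treated as innocuous."""
    words = set(re.findall(r"[a-z']+", text.lower()))
    has_identifier = bool(words & GROUP_IDENTIFIERS)
    has_dehumanizing = bool(words & DEHUMANIZING_TERMS)
    return has_identifier and has_dehumanizing

print(flag_hate_speech("proud to be a black woman"))       # benign self-reference: False
print(flag_hate_speech("gay people are subhuman vermin"))  # dehumanizing context: True
```

A context-blind version that flagged on GROUP_IDENTIFIERS alone would mislabel the first sentence, which is exactly the kind of bias the researchers set out to reduce.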

Brendan Kennedy is a computer science Ph.D. student and co-lead author of the study, which was published on July 6 at ACL 2020.

"We want to move hate speech detection closer to being ready for real-world application," said Kennedy.

"Hate speech detection models often break, or generate bad predictions, when introduced to real-world data, such as social media or other online text data, because they are biased by the data on which they are trained to associate the appearance of social identifying terms with hate speech."

The reason the algorithms are oftentimes inaccurate is that they are trained on imbalanced datasets with extremely high rates of hate speech. Because of this, the algorithms fail to learn how to handle what social media actually looks like in the real world.

Professor Xiang Ren is an expert in natural language processing.

"It is key for models to not ignore identifiers, but to match them with the right context," said Ren.

"If you teach a model from an imbalanced dataset, the model starts picking up weird patterns and blocking users inappropriately."

To test the algorithm, the researchers used a random sample of text from two social media sites that have a high rate of hate speech. The text was first hand-flagged by humans as prejudiced or dehumanizing. The state-of-the-art model was then measured against the researchers' own model for inappropriately flagging non-hate speech, through the use of 12,500 New York Times articles with no hate speech present. While the state-of-the-art models were able to achieve 77% accuracy in identifying hate vs. non-hate, the researchers' model was higher at 90%.

"This work by itself does not make hate speech detection perfect, that is a huge project that many are working on, but it makes incremental progress," said Kennedy.

"In addition to preventing social media posts by members of protected groups from being inappropriately censored, we hope our work will help ensure that hate speech detection does not do unnecessary harm by reinforcing spurious associations of prejudice and dehumanization with social groups."

See the original post here:

New Study Attempts to Improve Hate Speech Detection Algorithms - Unite.AI

This AI tool helps healthcare workers look after their mental health – The European Sting

Credit: Unsplash

This article is brought to you thanks to the collaboration ofThe European Stingwith theWorld Economic Forum.

Author: Francis Lee, Psychiatrist-in-Chief, New York-Presbyterian Hospital; Conor Liston, Director, Sackler Institute, Weill Cornell Medicine; and Laura L. Forese, Executive Vice President & Chief Operating Officer, New York-Presbyterian Hospital

As the COVID-19 pandemic continues to exert pressure on global healthcare systems, frontline healthcare workers remain vulnerable to developing significant psychiatric symptoms. These effects have the potential to further cripple the healthcare workforce at a time when a second wave of the coronavirus is considered likely in the fall, and workforce shortages already pose a serious challenge.

Studies show that healthcare workers are also less likely to proactively seek mental health services due to concerns about confidentiality, privacy and barriers to accessing care. Thus, there is an obvious and pressing need for scalable tools to act as an early warning system to alert healthcare workers when they are at risk of depression, anxiety or trauma symptoms and then rapidly connect them with the help they need. To address the mental health needs of the 47,000 employees and affiliated physicians in our hospital system, New York-Presbyterian (NYP) has developed an artificial intelligence (AI)-enabled digital tool that screens for symptoms, provides instant feedback, and connects participants with crisis counselling and treatment referrals.

Called START (Symptom Tracker And Resources for Treatment), this screening tool enables healthcare workers to confidentially and anonymously track changes in their mental health status. This tool is unique in that it not only provides immediate feedback to participants on the severity of their symptoms but also connects them to existing mental healthcare resources. Participants are asked every two weeks to complete a short battery of questions that assess symptoms of depression, anxiety, trauma and perceived stress, as well as potential risk factors for poor mental health and ability to function at work.

To maximise engagement, the psychiatric symptom questions in the START platform are drawn from widely validated psychiatric screening tools and adaptively selected using AI algorithms that capture the most relevant clinical symptom data in a time-efficient manner. This is achieved in two ways. First, the START platform automatically selects the most informative questions based on a participant's previous responses in a minimum amount of time (around five to seven minutes). Second, it focuses on questions that are reliably correlated with particular functional connectivity patterns in depression-related brain networks. Much like our national airport network, brain networks are organised into a system of hubs that facilitate efficient information flow, just as hub airports like O'Hare and JFK connect passengers with smaller regional destinations. Disrupted connections between brain hubs may contribute to specific symptoms and behaviours in depression.
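START's item-selection algorithm is not published in detail; as a generic sketch of the adaptive idea (ask a gateway item per symptom domain, then drill in only where it is endorsed), here is a toy selector. The question wording and threshold are invented for illustration and are not the actual START items:

```python
# Toy adaptive screener: one gateway question per domain, with follow-ups
# asked only when the gateway answer crosses a threshold. This is a generic
# illustration of adaptive item selection, not the actual START algorithm.

QUESTION_BANK = {
    "depression": ["Little interest or pleasure in doing things?",
                   "Feeling down, depressed, or hopeless?"],
    "anxiety":    ["Feeling nervous, anxious, or on edge?",
                   "Not being able to stop or control worrying?"],
}

def select_followups(gateway_scores, threshold=2):
    """Given gateway answers on a 0-3 scale, return follow-up questions
    only for domains whose gateway score meets the threshold."""
    selected = []
    for domain, score in gateway_scores.items():
        if score >= threshold:
            selected.extend(QUESTION_BANK[domain][1:])  # skip the gateway item
    return selected

# A participant endorsing anxiety but not depression gets only anxiety follow-ups:
print(select_followups({"depression": 0, "anxiety": 3}))
```

Skipping unendorsed domains is one simple way to keep a battery inside the five-to-seven-minute window the article describes.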

For example, in previous work (see figure below), our group has found that psychiatric symptoms like anhedonia (a loss of interest in pleasurable activities) are reliably correlated with functional magnetic resonance imaging (fMRI) measures of connectivity in reward-related brain regions, whereas symptoms like anxiety and insomnia are correlated with differing connectivity alterations in other brain areas.

At the end of the survey, participants receive feedback on their results and are provided with options for connecting with existing and accessible mental healthcare resources. For those who need psychiatric care for their symptoms, we integrated the START platform with a telemedicine urgent counselling service at NYP that is available seven days a week and which provides faculty and staff across NYP hospitals with quick and free access to confidential and supportive virtual counselling by trained mental health professionals, a special feature of this tool and our COVID-19 response. This is important, because if treatment resources are not made immediately available and easily accessible to our healthcare workers, they may be less likely to seek help when they need it.

Within one week of deploying the symptom tracker, the utilization of our urgent counselling services had more than doubled, resulting in numerous referrals to mental health professionals. Another key element contributing to the increase in utilization was frequent communication from NYP leadership about the Symptom Tracker and the availability of crisis support. In the near future, a mobile cognitive behavioral therapy (CBT) app developed at NYP (by a group led by Francis Lee) will be linked to START to target specific mood, anxiety, and trauma symptom profiles; it is currently being tested in a clinical trial for safety and efficacy.

Ultimately, we hope that such emerging digital tools will transform mental health services not only for our healthcare workers but also for larger populations affected by the pandemic.

Go here to see the original:

This AI tool helps healthcare workers look after their mental health - The European Sting

An AI Can Now Predict How Much Longer You’ll Live For – Futurism

In Brief Researchers at the University of Adelaide have developed an AI that can analyze CT scans to predict if a patient will die within five years with 69 percent accuracy. This system could eventually be used to save lives by providing doctors with a way to detect illnesses sooner.

Predicting the Future

While many researchers are looking for ways to use artificial intelligence (AI) to extend human life, scientists at the University of Adelaide created an AI that could help them better understand death. The system they created predicts if a person will die within five years after analyzing CT scans of their organs, and it was able to do so with 69 percent accuracy, a rate comparable to that of trained medical professionals.

The system makes use of the technique of deep learning, and it was tested using images taken from 48 patients, all over the age of 60. It's the first study to combine medical imaging and artificial intelligence, and the results have been published in Scientific Reports.
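As a rough illustration of the kind of evaluation involved, here is a minimal sketch. Everything is a stand-in: the features are synthetic rather than real CT data, and a simple logistic regression replaces the study's deep learning system.

```python
import numpy as np

# Synthetic stand-in for the study's setup: 48 "patients", each with a
# few numeric features standing in for quantities a network might
# extract from a CT scan. None of this reflects the real dataset.
rng = np.random.default_rng(0)
n = 48
features = rng.normal(size=(n, 3))
true_w = np.array([1.5, -2.0, 0.5])
died_within_5y = (features @ true_w + rng.normal(scale=0.5, size=n)) > 0

def train_logistic(X, y, lr=0.1, steps=2000):
    """Plain gradient descent on the logistic log-loss."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid probabilities
        grad_w = X.T @ (p - y) / len(y)          # log-loss gradient
        grad_b = np.mean(p - y)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

w, b = train_logistic(features, died_within_5y.astype(float))
pred = (features @ w + b) > 0
accuracy = np.mean(pred == died_within_5y)       # fraction of correct predictions
```

On real imaging data a deep network is needed to extract the features in the first place; the accuracy number here is how the study's 69 percent figure would be computed.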

Instead of focusing on diagnosing diseases, the automated systems can predict medical outcomes in a way that doctors are not trained to do, by incorporating large volumes of data and detecting subtle patterns, explained lead author Luke Oakden-Rayner in a university press release. This method of analysis can explore the combination of genetic and environmental risks better than genome testing alone, according to the researchers.

While the findings are only preliminary given the small sample size, the next stage will apply the AI to tens of thousands of cases.

While this study does focus on death, the most obvious and exciting consequence of it is how it could help preserve life. Our research opens new avenues for the application of artificial intelligence technology in medical image analysis, and could offer new hope for the early detection of serious illness, requiring specific medical interventions, said Oakden-Rayner. Because it encourages more precise treatment using firmer foundational data, the system has the potential to save many lives and provide patients with less intrusive healthcare.

An added benefit of this AI is its wide array of potential uses. Because medical imaging of internal organs is a fairly routine part of modern healthcare, the data is already plentiful. The system could be used to predict medical outcomes beyond just death, such as the potential for treatment complications, and it could work with any number of images, such as MRIs or X-rays, not just CT scans. Researchers will just need to adjust the AI to their specifications, and they'll be able to obtain predictions quickly and cheaply.

AI systems are becoming more and more prevalent in the healthcare industry. DeepMind is being used to fight blindness in the United Kingdom, and IBM Watson is already as competent as human doctors at detecting cancer. It is in medicine, perhaps more than any other field, that we see AI's huge potential to help the human race.


Best Competency With Artificial Intelligence is by Having Intelligent Experience – ReadWrite

AI is changing the way customers interact with businesses. It changes how websites and bots work, along with many other tools and integrated systems. Businesses must also protect and manage the company's digital assets and data. For businesses currently using artificial intelligence, there is a day-to-day struggle, made more difficult by the rapid succession of technologies.

Many businesses are intrigued by the idea of turning to artificial intelligence for help in the sales process. AI is certainly capable of finding your best-qualified sales leads. AI can give you efficient issue resolution, and systems that feed actual data back in for future process and product improvements. However, most enterprises do not know where or how to get started with their new company AI.

Systems and data must connect to allow full use of capabilities, as if all information were native to each system. There must also be ways to present information to end users, even as data evolves on a constant basis. The environment requires specialized insight and know-how to ensure a smooth, continuous integration that's both relevant and current.

The intelligent experience is all about leveraging AI to derive predictive insights that can be embedded in the workflow. Companies seeking competitive advantage must find ways to make their business operations more intelligent.

AI functionality is poised to be a game-changer, exploring possibilities and opening up new roles and more business-central activities. However, it's important to first understand how an intelligent experience can help. It starts with a shift in focus.

Artificial intelligence is edging into business processes across organizations. However, when an organization uses AI correctly, there shouldn't be any outward sign that AI is running the experience behind the scenes.

AI has the power to make customers feel they are making their own choices, but it's the machine learning and the algorithms that are handling those decisions.

When it comes to this shift in focus, the most useful sense is vision: keeping track of what is happening and retaining the ability to give suggestions on how to improve.

Artificial intelligence is going beyond the senses and straight to the source: the brain. With a purely reactive tactic, companies are often late, identifying at-risk customers only when it's too late. This is because there is a major difference between predicting a significant change in the economy and spotting a financial sign that becomes apparent only after a large shift has taken place.

Artificial intelligence is set to heavily impact a number of industries worldwide, shaping online customer experience models. AI technology will take hold across many industries over the coming decade, and businesses need to decide now how AI will help them optimize conversions.

By automating most internal processes, the operational effort involved in maintaining and controlling devices is reduced, while focus can simultaneously shift to the marketplace and to more flexible configuration.

Artificial intelligence also brings rising cost-efficiency, so customers can focus on increasing the quality of their processes and operations without a matching increase in resources.

It is crucial to assess the landscape during the acquisition period, as this is often where lasting customer relationships start to form. Customers are going to be comparing their initial experience to the expectations set during the sales process.

Artificial intelligence is making significant progress on problems across many walks of life. It can also automate the interpretation and understanding of unstructured information.

With AI, as per the market, you can spur on processes, get value from data, and provide clients with a better experience. All those benefits can help drive sales and boost revenue.

The application of an AI system may now be defined in considerable detail. As a rule, costing artificial intelligence requires intelligence about the work being done, gathered through proactive development. The development work is usually split into several feasibility studies with set business and project objectives.

However, if an artificial intelligence product claims to be plug-and-play, you need to be highly suspicious. You need to have someone trained to take care of the system. (Source: coseer.com.)

Sufficient algorithm performance is a key cost factor, as a high-quality algorithm often requires several rounds of tuning. To decide between various algorithmic approaches, a business needs to understand how exactly training takes place under the hood and what can be done to obtain competency.

If this is not clear up front, one may end up with a system that underperforms. AI is certainly exciting, but business owners cannot jump into it without first laying the foundation with basic analytics.

With so many possibilities for applying AI across an organization, deployments must be chosen carefully to be effective. AI is often considered solely from a technology perspective, and little wonder, since its capabilities rely on, and continually improve through, technical innovations.

Deploy with well-positioned skills and a variety of tools to create AI algorithms that can be inserted into enterprise applications. Quick wins bring an added bonus: getting the most out of AI is about validating AI's ability to spark value, keeping momentum and funding going, and then pursuing longer-term projects.

AI doesn't thrive in a vacuum. Businesses that generate value from AI treat it as a major business transformation initiative, one that requires disparate parts of the company to come together and work with realistic expectations. AI is the future of business operations.

When contemplating an investment in AI, be sure you have pragmatic expectations and a setup that will allow you to embed insights into the daily workflow of your organization. Through the power of AI, you can start blurring the lines between sales, service, and marketing.

Harnessing the power of artificial intelligence requires a hard look at business processes and at where the majority of resources go. From there, your company can use AI in a way that actually helps your business grow and ultimately boosts your bottom line.

Image Source: Pexels

Adedeji Omotayo is a digital marketer, PR expert, and content writer, and the CEO, founder, and president of EcoWebMedia, a full-service digital marketing company. Adedeji is passionate about technology and marketing, and works with both small and big companies on their internet marketing strategies.


How AI is revolutionizing healthcare – Nurse.com

AI applications in healthcare can literally change patients' lives, improving diagnostics and treatment and helping patients and healthcare providers make informed decisions quickly.

AI in the global healthcare market (the total value of products and services sold) was valued at $2.4 billion in 2019 and is projected to reach $31.02 billion in 2025.

Now in the COVID-19 pandemic, AI is being leveraged to identify virus-related misinformation on social media and remove it. AI is also helping scientists expedite vaccine development, track the virus and understand individual and population risk, among other applications.

Companies such as Microsoft, which recently stated it will dedicate $20 million to advance the use of artificial intelligence in COVID-19 research, recognize the need for and extraordinary potential of AI in healthcare.

The ultimate goal of AI in healthcare is to improve patient outcomes by revolutionizing treatment techniques. By analyzing complex medical data and drawing conclusions without direct human input, AI technology can help researchers make new discoveries.

Various subtypes of AI are used in healthcare. Natural language processing algorithms give machines the ability to understand and interpret human language. Machine learning algorithms teach computers to find patterns and make predictions based on massive amounts of complex data.

AI is already playing a huge role in healthcare, and its potential future applications are game-changing. We've outlined four distinct ways that AI is transforming the healthcare industry.

This transformative technology has the ability to improve diagnostics, advance treatment options, boost patient adherence and engagement, and support administrative and operational efficiency.

AI can help healthcare professionals diagnose patients by analyzing symptoms, suggesting personalized treatments and predicting risk. It can also detect abnormal results.

Analyzing symptoms, suggesting personalized treatments and predicting risk

Many healthcare providers and organizations are already using intelligent symptom checkers. This machine learning technology asks patients a series of questions about their symptoms and, based on their answers, informs them of appropriate next steps for seeking care.
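The branching behavior of such a symptom checker can be sketched as a small decision tree. The questions and recommendations below are invented for illustration and are not any vendor's actual triage logic:

```python
# Minimal rule-based symptom-checker flow. Each node either asks a
# yes/no question or is a leaf string holding a recommendation.
TRIAGE_TREE = {
    "question": "Do you have a fever?",
    "yes": {
        "question": "Do you have difficulty breathing?",
        "yes": "Seek urgent care now.",
        "no": "Self-isolate and book a telehealth visit.",
    },
    "no": "Monitor symptoms; no immediate action needed.",
}

def triage(node, answers):
    """Walk the tree using the patient's yes/no answers in order,
    stopping as soon as a recommendation (leaf string) is reached."""
    for answer in answers:
        if isinstance(node, str):
            break
        node = node[answer]
    return node
```

Production symptom checkers replace the hand-written branches with machine-learned models and far larger question banks, but the ask-then-branch interaction is the same.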

Buoy Health offers a web-based, AI-powered health assistant that healthcare organizations are using to triage patients who have symptoms of COVID-19. It offers personalized information and recommendations based on the latest guidance from the Centers for Disease Control and Prevention.

Additionally, AI can take precision medicine (healthcare tailored to the individual) to the next level by synthesizing information and drawing conclusions, allowing for more informed and personalized treatment. Deep learning models have the ability to analyze massive amounts of data, including information about a patient's genetic content, other molecular and cellular analyses, and lifestyle factors, and find relevant research that can help doctors select treatments.

AI can also be used to develop algorithms that make individual and population health risk predictions in order to help improve outcomes. At the University of Pennsylvania, doctors used a machine learning algorithm that can monitor hundreds of key variables in real time to anticipate sepsis or septic shock in patients 12 hours before onset.
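A heavily simplified version of this kind of early-warning monitoring might look like the sketch below. The variable names, thresholds, and weights are invented for illustration; the Penn system uses a learned model over hundreds of variables, not hand-written rules.

```python
from collections import deque

# Toy early-warning score over a stream of vitals readings.
def risk_score(vitals):
    """Crude rule-based score from a single reading (invented thresholds)."""
    score = 0
    if vitals["heart_rate"] > 100:
        score += 1
    if vitals["temp_c"] > 38.0:
        score += 1
    if vitals["systolic_bp"] < 90:
        score += 2
    return score

def monitor(readings, window=3, alert_at=4):
    """Alert when the summed score over a sliding window of recent
    readings crosses a threshold; returns the triggering index."""
    recent = deque(maxlen=window)
    for t, vitals in enumerate(readings):
        recent.append(risk_score(vitals))
        if sum(recent) >= alert_at:
            return t
    return None
```

The sliding window is what lets trends, rather than single abnormal readings, drive the alert, which is the intuition behind predicting deterioration hours before onset.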

Detecting disease

Imaging tools can advance the diagnostic process for clinicians. The San Francisco-based company Enlitic develops deep learning medical tools to improve radiology diagnoses by analyzing medical data. These tools allow clinicians to better understand and define the aggressiveness of cancers. In some cases, these tools can replace the need for tissue samples with virtual biopsies, which would aid clinicians in identifying the phenotypes and genetic properties of tumors.

These imaging tools have also been shown to make more accurate conclusions than clinicians. A 2017 study published in JAMA found that of 32 deep learning algorithms, seven were able to diagnose lymph node metastases in women with breast cancer more accurately than a panel of 11 pathologists.

Smartphones and other portable devices may also become powerful diagnostic tools that could benefit the areas of dermatology and ophthalmology. The use of AI in dermatology focuses on analyzing and classifying images and the ability to differentiate between benign and malignant skin lesions.

Using smartphones to collect and share images could widen the capabilities of telehealth. In ophthalmology, the medical device company Remidio has been able to detect diabetic retinopathy using a smartphone-based fundus camera, a low-power microscope with an attached camera.

AI is becoming a valuable tool for treating patients. Brain-computer interfaces could help restore the ability to speak and move in patients who have lost these abilities. This technology could also improve the quality of life for patients with ALS, strokes, or spinal cord injuries.

There is potential for machine learning algorithms to advance the use of immunotherapy, to which currently only 20% of patients respond. New technology may be able to determine new options for targeting therapies to an individual's unique genetic makeup. Companies like BioXcel Therapeutics are working to develop new therapies using AI and machine learning.

Additionally, clinical decision support systems can help healthcare professionals make better decisions by analyzing past, current and new patient data. IBM offers clinical support tools to help healthcare providers make more informed and evidence-based decisions.

Finally, AI has the potential to expedite drug development by reducing the time and cost for discovery. AI supports data-driven decision making, helping researchers understand what compounds should be further explored.

Wearables and personalized medical devices, such as smartwatches and activity trackers, can help patients and clinicians monitor health. They can also contribute to research on population health factors by collecting and analyzing data about individuals.

These devices can also be useful in helping patients adhere to treatment recommendations. Patient adherence to treatment plans can be a factor in determining outcome. When patients are noncompliant and fail to adjust their behaviors or take prescribed drugs as recommended, the care plan can fail.

The ability of AI to personalize treatment could help patients stay more involved and engaged in their care. AI tools can be used to send patients alerts or content intended to provoke action. Companies like Livongo are working to give users personalized health nudges through notifications that promote decisions supporting both mental and physical health.

AI can be used to create a patient self-service model: an online portal, accessible from portable devices, that is more convenient and offers more choice. A self-service model helps providers reduce costs and helps consumers access the care they need in an efficient way.

AI can improve administrative and operational workflow in the healthcare system by automating some of the processes. Recording notes and reviewing medical records in electronic health records takes up 34% to 55% of physicians' time, making it one of the leading causes of lost productivity for physicians.

Clinical documentation tools that use natural language processing can help reduce the time clinicians spend on documentation and give them more time to focus on delivering top-quality care.

Health insurance companies can also benefit from AI technology. The current process of evaluating claims is quite time-consuming, since 80% of healthcare claims are flagged by insurers as incorrect or fraudulent. Natural language processing tools can help insurers detect issues in seconds, rather than days or months.


VergeSense’s AI sensing hardware tackles facility management – TechCrunch

Facility management might not sound like the sexiest use of AI technology. But office space can be a huge expense for larger businesses, the biggest after staff costs, which is why Y Combinator-backed startup VergeSense says it's settled on facility management as the initial target for an AI-powered sensing device it's been developing since joining the incubator program in May.

Their sensor-as-a-system platform, as they dub it, consists of sensing devices containing a series of different sensor hardware, including an image sensor, coupled with a cloud platform for pre-training machine learning models that run on the hardware, process data and report occupancy analysis back to VergeSense's cloud.

We're using really inexpensive hardware; we've crammed a bunch of different sensors inside. The core of the product is actually built around computer vision, so we've got a really inexpensive image sensor that's embedded inside, VergeSense co-founder Dan Ryan tells TechCrunch. The whole concept around what we're doing is we're using machine learning in pre-trained AI modules to do all of the processing on the device itself.

We're not streaming a bunch of raw video data back to a cloud service; we've pre-trained our models to run on the devices themselves, he adds.

These AI modules can be trained to meet the particular tracking requirements of a customer before being loaded onto the sensor hardware that is sited in the customer's space. This means processing is done locally, on the device, and only detection results are sent to the cloud, where VergeSense customers are able to log in to view the analytics pertaining to their building.
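The pattern described here, local inference with only detection results leaving the device, can be sketched as follows. The function names and payload fields are illustrative, not VergeSense's actual API:

```python
import json

def detect_people(frame):
    """Stand-in for an on-device vision model; here a frame is just a
    list of detected object labels, and we count the people."""
    return sum(1 for obj in frame if obj == "person")

def build_report(sensor_id, frame):
    """Run detection locally and build the tiny payload that actually
    leaves the device; the raw frame data is never included."""
    payload = {
        "sensor": sensor_id,
        "people": detect_people(frame),
        "ts": 1234567890,  # fixed timestamp keeps the example deterministic
    }
    return json.dumps(payload)
```

Keeping the pixels on the device and shipping only counts is both a bandwidth optimisation (Ryan's "like text messages" remark below) and a privacy measure.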

Overall the sales pitch to customers is a system that can passively track how an office space is being used, providing visibility into dynamic multi-occupant, even multi-tenant environments, and making suggestions on how to reallocate resources to make best use of a space.

Maybe you've got an office that's segmented between a bunch of open office spaces and you've got a bunch of conference rooms, but your conference rooms are actually way over-utilized; they're full all the time. We could inform that building owner that those rooms are being over-utilized and that they need to double down on a room, explains Ryan.

Or, in the opposite use-case, we could say you've got a conference room that's designed for 16 people but at max we only get two people using the room. We can make that data available to them and they could split that space into two spaces.

It sounds like kind of a boring problem, but especially in the Bay Area, with the price of real estate being $60/sq ft a year on average, if you've got a 300 sq ft conference room, that's an $18,000-a-year asset, right. Just in that one room. So there's actually huge savings and efficiencies you can start gleaning by making all that data available to the end-users, he adds.

As well as an image sensor, the hardware contains a PIR (infrared) motion sensor, audio and RF capability (wi-fi and Bluetooth).

Typically an office space would need one sensor per 1,000sq ft, according to Ryan, although he says this can vary depending on factors such as ceiling height.
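The figures quoted above are easy to sanity-check with quick arithmetic, assuming the article's $60 per square foot per year and roughly one sensor per 1,000 sq ft:

```python
import math

def annual_cost(sq_ft, rate_per_sq_ft=60):
    """Yearly real-estate cost at the article's average Bay Area rate."""
    return sq_ft * rate_per_sq_ft

def sensors_needed(sq_ft, coverage_per_sensor=1000):
    """Roughly one sensor per 1,000 sq ft, rounded up; real coverage
    varies with factors like ceiling height."""
    return math.ceil(sq_ft / coverage_per_sensor)

room_cost = annual_cost(300)  # the article's 300 sq ft conference room
```

The 300 sq ft room indeed works out to $18,000 a year at that rate, matching Ryan's figure.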

At this stage the team has a few early phase deployments of their system across some Fortune 500 companies in the Bay Area.

The startup's first focus is commercial office buildings. And the first application it's offering is people counting, to power occupancy analytics, though Ryan says the tech could also be used to track lots of other things, for example specific equipment like photocopiers, or even to home in on something as specific as desk occupancy or usage of specific devices.

He also envisages utility in other verticals in future such as tracking people and equipment in hospitals or retail environments, for example.

The sensors can either be wired in or battery powered. They can also run on different networks, depending on whether the customer wants them on their corporate network (or indeed on a dedicated IoT network).

Ryan says VergeSense also offers a gateway device that can backhaul over a 2G cellular connection. We're not sending a lot of data. It's a little like text messages, that's what you can think of in terms of the data we're sending back: just people counts, he adds.

From a privacy point of view, as well as local processing, he says all the tracking is anonymous, so VergeSense is not tying analytics to individual identities or otherwise harvesting individual identities. We're not getting any personally identifiable information about anybody, he says. It's all anonymous counts, i.e. I saw a person or I detected an object here or there.

Though he also suggests businesses deploying sensing technology within a multi-occupant environment, where such tech at least runs the risk of being viewed suspiciously, are best being upfront and honest with the employees at the facility about what the data's being used for and how it's being leveraged.

The core challenge that people are trying to solve with these technologies is managing the space and actually making the employee experience more fulfilling and less frustrating, he argues. There's not really the big brother aspect to the technologies; it's more about how we use that data to make this workspace more efficient overall.

And while there are other potential solutions for tracking occupancy and equipment, for example motion sensors or RFID or Bluetooth tags on individual items, Ryan says VergeSense's advantage is that the system allows for a purely passive approach to tracking, so there's no need to manually tag anything, and the system adapts to changes as the models are trained to interpret the environment.

We haven't seen many other folks yet with this sort of combination of really inexpensive hardware powered by machine learning, he says, when asked about the competitive landscape. I expect to see a lot more competition popping up over the next six months to a year. But I still think the space is pretty early.

And if you talk to anybody in this space, particularly the real estate services space, utilization people have been looking for solutions to this for years, literally. And nobody's really come to the table yet with a solution that's flexible, simple to deploy, yet really, really powerful.

We think this combination of machine learning AI on inexpensive hardware is going to be really powerful and unlock much opportunity, he adds. With computer vision you're just going to have so many different things that you're going to be able to train the models to detect and report back.


Eminent Astrophysicist Issues a Dire Warning on AI and Alien Life – Futurism

In Brief Astrophysicist Lord Martin Rees believes that AI could surpass humans within a few hundred years, ushering in eons of domination by electronic intelligent life, the same kind of intelligent life he thinks may already exist elsewhere in the universe.

Fixed to This World

Lord Martin Rees, Astronomer Royal and University of Cambridge Emeritus Professor of Cosmology and Astrophysics, believes that machines could surpass humans within a few hundred years, ushering in eons of domination. He also cautions that while we will certainly discover more about the origins of biological life in the coming decades, we should recognize that alien intelligence may be electronic.

Just because there's life elsewhere doesn't mean that there is intelligent life, Lord Rees told The Conversation. My guess is that if we do detect an alien intelligence, it will be nothing like us. It will be some sort of electronic entity.

Rees thinks that there is a serious risk of a major setback of global proportions happening during this century, citing misuse of technology, bioterrorism, population growth, and increasing connectivity as problems that render humans more vulnerable now than we have ever been before. While we may be most at risk because of human activities, the ability of machines to outlast us may be a decisive factor in how life in the universe unfolds.

If we look into the future, then it's quite likely that within a few centuries, machines will have taken over, and they will then have billions of years ahead of them, he explains. In other words, the period of time occupied by organic intelligence is just a thin sliver between early life and the long era of the machines.

In contrast to the delicate, specific needs of human life, electronic intelligent life is well-suited to space travel and equipped to outlast many global threats that could exterminate humans.

[We] are likely to be fixed to this world. We will be able to look deeper and deeper into space, but traveling to worlds beyond our solar system will be a post-human enterprise, predicts Rees. The journey times are just too great for mortal minds and bodies. If you're immortal, however, these distances become far less daunting. That journey will be made by robots, not us.

Rees isn't alone in his ideas. Several notable thinkers, such as Stephen Hawking, agree that artificial intelligence (AI) has the potential to wipe out human civilization. Others, such as Subbarao Kambhampati, the president of the Association for the Advancement of Artificial Intelligence, see malicious hacking of AI as the greatest threat we face. However, there are at least as many who disagree with these ideas, with even Hawking noting the potential benefits of AI.

As we train and educate AIs, shaping them in our own image, we imbue them with the ability to form emotional attachments that could deter them from wanting to hurt us. There is evidence that the Singularity might not be a single moment in time but instead a gradual process that is already happening, meaning that we are already adapting alongside AI.

But what if Rees is correct and humans are on track to self-annihilate? If we wipe ourselves out and AI is advanced enough to survive without us, then his predictions about biological life being a relative blip on the historical landscape and electronic intelligent life going on to master the universe will have been correct, but not because AI has turned on humans.

Ultimately, the idea of electronic life being uniquely well-suited to survive and thrive throughout the universe isn't that far-fetched. The question is, will we survive alongside it?


AI super resolution lets you zoom and enhance in Pixelmator Pro – The Verge

The zoom and enhance trope is a TV cliché, but advances in AI are slowly making it a reality. Researchers have shown that machine learning can enlarge low-resolution images, restoring sharpness that wasn't there before. Now, this technology is making its way to consumers, with image editor Pixelmator among the first to offer such a feature.

The Photoshop competitor today announced what it calls ML Super Resolution for the $60 Pro version of its software: a function that the company says can scale an image up to three times its original resolution without image defects like pixelation or blurriness.

After our tests, we would say this claim needs a few caveats. But overall, the performance of Pixelmators super resolution feature is impressive.

Pixelation is smoothed away in a range of images, from illustration to photography to text. The results are better than those delivered by traditional upscaling algorithms, and although the process is not instantaneous (it took around eight seconds per image on our 2017 MacBook Pro), it's fast enough to be a boon to designers and image editors of all stripes. There are some examples below from Pixelmator, with a zoomed-in low-resolution image on the left, and the processed ML Super Resolution image on the right:

You can see more images over on Pixelmator's blog, including comparisons with traditional upscaling techniques like the Bilinear, Lanczos, and Nearest Neighbor algorithms. While ML Super Resolution isn't a magic wand, it does deliver consistently impressive results.

Research into super resolution has been ongoing for some time now, with tech companies like Google and Nvidia creating their own algorithms in the past few years. In each case, the software is trained on a dataset containing pairs of low-resolution and high-resolution images. The algorithm compares this data and creates rules for how the pixels change from image to image. Then, when it's shown a low-resolution picture it's never seen before, it predicts what extra pixels are needed and inserts them.
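For contrast, the simplest of the traditional baselines mentioned above involves no learning at all: nearest-neighbor upscaling just repeats each pixel, which is why it cannot add detail the way a trained model can. A minimal sketch:

```python
import numpy as np

def nearest_neighbor_upscale(img, factor=2):
    """Repeat each pixel `factor` times along both axes; no new
    information is invented, so the result stays blocky."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

# A tiny 2x2 "image": black and white in a checker pattern.
low_res = np.array([[0, 255],
                    [255, 0]], dtype=np.uint8)
high_res = nearest_neighbor_upscale(low_res, 2)  # 4x4, same blocky pattern
```

An ML super-resolution model replaces this pixel repetition with a learned prediction of what the inserted pixels should be, based on the low/high-resolution training pairs described above.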

Pixelmator's creators told The Verge that their algorithm was made from scratch in order to be lightweight enough to run on users' devices. It's just 5MB in size, compared to research algorithms that are often 50 times larger. It's trained on a range of images in order to anticipate users' different needs, but the training dataset is surprisingly small: just 15,000 samples were needed to create Pixelmator's ML Super Resolution tool.

The company isn't the first to offer this technology commercially. There are a number of single-use super resolution tools online, including BigJPG.com and LetsEnhance.io. In our tests, the output from these sites was of a more mixed quality than Pixelmator's (though it was generally good), and free users can only process a small number of images. Adobe has also released a super resolution feature, but the results are, again, less dramatic.

Overall, Pixelmator seems to be offering the best commercial super resolution tool we've seen (let us know in the comments if you know of a better one), and every day, zoom and enhance becomes less of a joke.

Correction: An earlier version of this story included comparisons between images that had been non-destructively downsized then upscaled using Pixelmator's ML Super Resolution, resulting in unrealistically improved results. These have been removed. We regret the error.

See the original post:

AI super resolution lets you zoom and enhance in Pixelmator Pro - The Verge

How Social Media Is Using AI to Fight Terrorism – Motley Fool

Once upon a time, terrorists used bombs, machetes, and bullets to get their message across. While that's still the case, modern day terror has a new tool at its disposal, one that it has become particularly adept and successful at deploying: social media. This stark reality has come to light in the wake of terror campaigns that ended with participants pledging their support to their chosen causes and posting them on social media platforms.

Other insidious forms of communication and objectionable material have flourished in the internet era as well. Hate speech and violent threats have found homes there. Governments and advertisers worldwide are aware of the phenomenon and are increasingly pressuring social-media companies like Facebook, Inc. (NASDAQ:FB), Alphabet Inc. (NASDAQ:GOOGL) (NASDAQ:GOOG), Twitter, Inc. (NYSE:TWTR), and Microsoft Corporation (NASDAQ:MSFT) to police undesirable content on their sites.

The sheer volume of content, along with the complexity of differing local laws and regulations, has conspired to create a near-insurmountable task for these sites. However, recent advances in artificial intelligence (AI) are being brought to bear, producing surprisingly effective results.

Facebook is deploying AI to fight terror. Image source: Facebook.

Facebook revealed that new AI algorithms based on image recognition have been deployed to assist with the Herculean chore. One tool has been developed to scan the site for images and live videos containing terrorist propaganda, including beheadings, and to remove them without the intercession of a human moderator.

Another system has been trained to identify accounts set up by terrorists and to prevent those users from creating additional accounts. A third algorithm is being trained in the language of propaganda to help identify posts related to terror. Once such content has been identified and removed, the system catalogs the data, then continuously scans the site to flag attempts to repost it.

Twitter has been deploying similar tools based on AI for rooting out terrorist content. The company says that these methods flagged 74% of the nearly 377,000 accounts it removed between July and December of 2016.

This follows an alliance by some of the biggest names in tech circles late last year to create a database of the worst content, to prevent it from being reposted on any of the sites. YouTube, Twitter, and Facebook joined Microsoft in the venture to create unique digital identifiers, or "fingerprints," to use for automatically detecting and removing content that had previously been tagged as terrorist propaganda.
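The shared database of "fingerprints" works on a simple principle that can be sketched with a toy perceptual hash. This is an assumption-laden illustration, not the consortium's actual (non-public) technology: each image is reduced to a 64-bit fingerprint, fingerprints of tagged content go into a shared set, and near-identical re-uploads, even with altered brightness, hash to the same value and are detected automatically.

```python
import numpy as np

rng = np.random.default_rng(1)

def average_hash(img: np.ndarray, size: int = 8) -> int:
    """Toy perceptual hash: block-average down to size x size,
    then threshold each cell at the mean to get one bit per cell."""
    h, w = img.shape
    bh, bw = h // size, w // size
    small = img[: bh * size, : bw * size].reshape(size, bh, size, bw).mean(axis=(1, 3))
    bits = (small > small.mean()).ravel()
    return int("".join("1" if b else "0" for b in bits), 2)

# A shared blocklist of fingerprints of previously tagged content.
banned = rng.random((64, 64))
blocklist = {average_hash(banned)}

# A re-uploaded copy with uniformly shifted brightness still matches,
# because thresholding at the mean ignores uniform brightness changes.
repost = banned + 0.05
print(average_hash(repost) in blocklist)
```

Production systems (Microsoft's PhotoDNA is the best-known example) use far more robust hashes that survive cropping, recompression, and resizing, but the lookup-against-a-shared-set design is the same.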

Microsoft developed and deployed similar technology to battle child pornography on the internet. The system was used to detect, report, and remove the images contained in a database.

Big tech is bringing AI to the fight on terror. Image source: Getty Images.

Google, the Alphabet subsidiary and owner of YouTube, is a pioneer in AI and recently found another way to use the nascent technology. YouTube faced a massive boycott from some of its biggest advertisers after it was revealed that brand advertising had appeared on YouTube videos containing racist, homophobic, anti-Semitic, and terrorist content. The company applied new AI techniques to the task, and within weeks achieved a 500% improvement in identifying objectionable content. YouTube revealed that more than half the content it removed over the previous six months for containing terrorist-related material had been identified using AI.

The world is a complicated place, and new technology brings new challenges. The advent of social media brought the world closer together, for better or for worse. Artificial intelligence is still a nascent technology, and while it isn't a panacea, it is being used in a variety of ways that make the world a better place.

Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool's board of directors. Teresa Kersten is an employee of LinkedIn and is a member of The Motley Fool's board of directors; LinkedIn is owned by Microsoft. Danny Vena owns shares of Alphabet (A shares) and Facebook. Danny Vena has the following options: long January 2018 $640 calls on Alphabet (C shares) and short January 2018 $650 calls on Alphabet (C shares). The Motley Fool owns shares of and recommends Alphabet (A shares), Alphabet (C shares), Facebook, and Twitter. The Motley Fool has a disclosure policy.

IBM ‘woke up the AI world,’ CEO Ginni Rometty says – CNBC

The conversation in the technology community about artificial intelligence was first rekindled by manufacturing giant IBM and its AI platform, Watson, CEO Ginni Rometty said on Tuesday.

"We are the ones that woke up the AI world here again," Rometty told "Mad Money" host Jim Cramer in a wide-ranging interview about Washington, Warren Buffett and her business.

Rometty said that the key to her century-old company remaining an institution in this country is how many times it has been able to reinvent itself and follow the latest trends in tech.

Today, those trends are the cloud and artificial intelligence, which IBM employees refer to as "cognitive" programming.

"There's a reason we call it cognitive," Rometty told Cramer. "It's about augmenting what you and I do so we can do what we're supposed to, our best. And then that's the IBM that takes that technology and the know-how about how the world works and puts that together and actually changes business. We are the champion for business."

Rometty also stressed the distinction between AI that consumers see and use and IBM's AI area of expertise, business-oriented AI programs.

"Consumer AI in your home, it's typically speech detects to a search. That's fine. That's great," she said. "But we deal in the enterprise world, so this is training Watson. Watson is trained in industries: What does underwriting do? What does a tax preparer do? What does a doctor do? What does a customer service agent do? What does a repairperson do? And it helps them be better, and, in fact, helps them do their job."

Between IBM's dealings in cognitive programming and the internet of things, Rometty predicted that 1 billion people would interact with Watson by the end of 2017. The result would be a boon to IBM's latest transformation efforts, which Rometty said revolve around one word: data.

The CEO said that Watson will likely also play a monumental role in reforming the health care system, the first step of which would involve touching the lives of 20,000 cancer patients.

"We will be able to address, diagnose and treat 80 percent of what causes 80 percent of the cancer in the world. If that's not motivating, I don't know what is," Rometty told Cramer.

School district upholds decision; AI’s season over – The News Journal

The News Journal Published 11:23 a.m. ET Feb. 24, 2017 | Updated 7 hours ago

A.I. du Pont High School principal Kevin Palladinetti tries to answer questions from parents and political leaders about an incident after the team's 58-46 loss at Delaware Military Academy last Thursday that led to the team being barred from the upcoming DIAA Boys Basketball Tournament. (Photo: Jennifer Corbett, The News Journal)

Red Clay Consolidated School District has upheld a decision by A.I. du Pont High School Principal Kevin Palladinetti to remove the boys basketball team from consideration for the upcoming DIAA state tournament.

"We understand it was a difficult decision by staff at A.I. High School but we support that decision and stand behind it, said Superintendent Merv Daugherty. The district believes the disciplinary consequence fits the seriousness of the incident.

Jen Field, whose son is a senior on the team, told The News Journal that a group of those opposed to Palladinetti's decision will meet Friday night to discuss what, if any, next steps they will take.

Palladinetti's decision stemmed from an incident following the Tigers' loss at Delaware Military Academy on Feb. 16.

With 40 seconds left, an A.I. player was given a technical foul. At that point, A.I. head coach Tom Tabb said he told the players on the bench to skip the customary postgame handshake line. Instead, the coach told the team he would shake hands with the DMA team, and the players were to remain behind him and follow him off the court as a group.

"When the game was over, a player started to walk and then sprinted, which caused a chain reaction where the other players followed, the coaches followed, parents followed, some DMA parents followed," Tabb said Thursday.

Officials from both schools said the A.I. players ran toward a stairwell leading to the second level of the gymnasium, where DMA students and fans had been watching the game.

DMA officials said they blocked the players from accessing the mezzanine while another teacher directed DMA students out through an emergency door.

Several parents of A.I. du Pont players have alleged that racial slurs were spoken by DMA players, fans and students during the game. But Palladinetti said Tabb, Assistant Principal Damon Saunders (both of whom are black) and the other A.I. assistant coaches did not report hearing any racial slurs.

DMA Commandant Anthony Pullella was at the game and said he did not hear any racial comments. Michael Ryan, the athletic director, said DMA officials conducted their own investigation, questioning parents, players, coaches and fans. He said no evidence was uncovered about any racial comment being used.

In a statement issued Friday, Red Clay officials said the district will also "continue to work with DMA to investigate allegations of inappropriate actions by their players and fans. The district has requested that DMA administration investigate from their school. Red Clay also requested a formal investigation from DIAA about the conduct of the fans during the AIHS/DMA game. We will share all investigative findings concerning fan conduct when we receive them from DMA and DIAA."

"The district is taking the claims of inappropriate behavior from game attendees very seriously," Daugherty said. "We do not condone the behavior in any way and will continue to work closely with DMA to uncover any acts of impropriety."

Apple AI expert, Tom Gruber explains Siri’s ‘humanistic AI’ at TED – 9to5Mac

Apple's AI expert, Tom Gruber, delivered a TED talk back in April extolling the benefits that AI may provide for us in the years to come. The video of the onstage presentation has now been released and gives us a better glimpse into the future Gruber imagines. His presentation focuses on what he calls "humanistic AI," the belief that when machines get smarter, so will we.

Gruber explains that the purpose of AI is to empower humans with machine intelligence and that the two can work together effectively. In his talk, he goes through various examples of how AI can be used to improve upon normal human functions and interactions. Starting with Siri, Gruber explains that the virtual assistant was designed as humanistic AI. The assistant may not be world-changing for some users, but for others it quite literally is.

To augment people with a conversational interface that made it possible for them to use mobile computing regardless of who they were and their abilities.

For my friend Daniel, the impact of the AI in these systems is a life-changer. You see, Daniel is a really social guy, and he's blind and quadriplegic, which makes it hard to use those devices that we all take for granted.

Daniel uses Siri to manage his own social life, his email, text, and phone, without depending on his caregivers. Here's a man whose relationship with AI helps him have relationships with genuine human beings.

Through the rest of his onstage conversation, Gruber continues with more examples of the benefits of human and machine intelligence collaborations, showing improvements in cancer detection, engineering, and even human memory. Gruber believes that AI will benefit all of us in some manner.

He is clear to point out that the use of AI to improve human memory must be kept private and secure. "In my view, a personal memory is a private memory," he shares. "We get to choose what is and is not recalled and retained. It's absolutely essential that this be kept very secure."

Watch Tom Gruber's entire onstage TED presentation below.

New Deals May Double $240M Funding For MIT-IBM AI Lab, Director Says – Xconomy

When MIT and IBM launched a joint research lab in 2017, the New York-headquartered company pledged $240 million over a decade to chip away at the fundamental obstacles keeping artificial intelligence from transforming industries like healthcare and cybersecurity.

Now, the Cambridge, MA-based MIT-IBM Watson AI Lab is growing in personnel, funding, partners, and square feet, says David Cox, a former Harvard professor who leads the program for IBM (NYSE: IBM).

This week, the lab welcomed its first batch of outside partners. They include South Korean conglomerate Samsung, medical device company Boston Scientific (NYSE: BSX), construction tech firm Nexplore, and financial data provider Refinitiv. Cox said there is a healthy pipeline of other companies that may tap into the lab's expertise.

The investment that each partner brings to the lab could lead the joint research venture to end up with double the $240 million it expected from the outset. The funds will help researchers work on making artificial intelligence more autonomous and easier to apply to real-world problems.

"We could imagine as much as doubling that investment over time as the program grows," Cox said, declining to share financial figures for how much the partners are chipping into the endeavor. "We can substantially increase the scale of the investment we make in MIT."

Cox explained, though, that the final list of partners won't be too extensive.

"We don't foresee it being an extremely large program," he said. "There are other membership models where you pay a very small amount of money, and everybody is part of it. We really want to have a relatively small number in deep, deep engagements."

In terms of research staff, the numbers are also higher than initially planned. The lab's charter envisioned the equivalent of 100 full-timers. The reality now is the lab has 70 projects underway, and MIT and IBM both provide at least one staffer per project, Cox said. (Back in March 2019, Cox told Xconomy the lab had 49 research projects up and running.)

To accommodate the bigger team, the lab is moving a few blocks south to the heart of Kendall Square. The new home for the lab will be 314 Main Street, a 440,000-square-foot MIT building under construction that is expected to also be the home of the MIT Museum, the MIT Press Bookstore, and Boeing's Aurora Flight Sciences research unit. Cox said work on the building will be completed next year.

Cox said one of the best mile markers for the lab's progress is publications; so far, researchers have placed 110 papers in journals.

The lab is likely to keep its focus on healthcare and cybersecurity, both focus areas for IBM's Watson business, Cox said. Teams will also continue their research into how artificial intelligence systems function at their core.

An appealing angle on fundamental AI research for the MIT-IBM teams has been something called neural-symbolic AI, which combines popular deep learning technologies with the symbolic reasoning techniques needed to learn abstract concepts and solve problems, Cox said. Combining the two could help develop more flexible AI systems that need less hand-holding from humans to clean data, tweak algorithms, and carefully set up a framework for the AI system to explore.

The vision for neural-symbolic AI is a system that could think more like humans do, need only small sets of data to understand abstract concepts, and be more transparent in its decision-making.

"What's interesting about that is it's a progression where less and less is predefined," Cox says, "and the system is more and more genuinely autonomous."
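The division of labor behind neural-symbolic AI can be illustrated with a deliberately tiny sketch. This is hypothetical, not the lab's actual architecture: a stand-in "neural" perception stage maps raw input to discrete symbols, while a symbolic stage applies a transparent, hand-written rule over those symbols, so only the perception stage would ever need training data.

```python
def neural_perception(pixel_intensity: float) -> str:
    """Stand-in for a trained classifier: raw signal -> discrete symbol.
    In a real system this would be a neural network learned from examples."""
    return "bright" if pixel_intensity > 0.5 else "dark"

def symbolic_reasoning(symbols: list[str]) -> bool:
    """Hand-written abstract rule, needing no training data:
    do all observed objects share the same property?"""
    return len(set(symbols)) <= 1

# Perceive a "scene" of raw sensor readings, then reason over the symbols.
scene = [0.9, 0.8, 0.7]
symbols = [neural_perception(x) for x in scene]
print(symbols, symbolic_reasoning(symbols))
```

Because the reasoning step operates on explicit symbols rather than raw activations, its decisions are inspectable, which is one way such systems could deliver the transparency Cox describes.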

Brian Dowling is a Senior Editor at Xconomy, based in Boston. You can reach him at bdowling [at] xconomy.com.
