What does Austria’s new governing coalition mean for migrants? – InfoMigrants

Austria's first governing coalition between the conservative People's Party and the Green Party will have ripple effects on a range of issues, including immigration. The two parties took nearly three months to iron out their disagreements, with migration the key issue setting the unlikely bedfellows apart.

Austria's new governing coalition, consisting of conservative leader Sebastian Kurz's People's Party (OeVP) and the Green Party, is the first of its kind in the Alpine country.

Though Chancellor Sebastian Kurz said that the coalition deal with the Greens offered "the best of both worlds" between the right-leaning OeVP and the left-leaning, environmentalist Greens, this marriage of inconvenience will be tested repeatedly on a series of central issues the two parties will have to address as a coalition government, and migration is just one of them.

Migration 'at heart' of Kurz' politics

Kurz, who will be returning to the chancellorship, intends to continue his tough stance on immigration, which has helped his party garner votes from more right-leaning voters. "Migration will stay at the heart of my politics," said Kurz while setting out the government agenda, which was published in a 300-page document.

Kurz, who at age 33 is once more the world's youngest head of government, stressed, for example, that migrants rescued in the Mediterranean should be taken to "safe countries of origin, third countries or transit countries, if they are safe" instead of to EU ports.

He also added that, in his view, efforts to redistribute migrants within Europe had failed. Though his policies may not quite echo those of EU countries such as Hungary and Poland, which have practically shut themselves off to non-EU immigration since the onset of the so-called refugee crisis in 2015, Kurz' views are a long way not only from those of EU countries that have taken a more liberal stance towards migrants, such as neighboring Germany, but also from those of his new coalition partner, the Greens.

Preventive custody

Following the September 29 election, the two parties spent several weeks hammering out their coalition agreement, with migration one of the main sticking points between the OeVP and the more pluralistically minded Green Party. With inclusive, open-door policies towards migrants being the antithesis of what Kurz stands for, Austria may become increasingly unattractive to immigrants, especially those who arrive through irregular migration.

Kurz has made clear that he wants to increase checks on asylum seekers and root out elements considered to be harmful to the country. Several Austrian newspapers reported that as part of the government agenda, Kurz' government intends to introduce preventive custody for potentially dangerous immigrants. This would even apply to immigrants who had not committed a crime on Austrian or EU soil.

A similar proposal had already been put forward by Kurz' previous coalition government with the far-right FPOe after a fatal stabbing committed by an asylum seeker in February 2019.

Kurz has repeatedly said in the past that he wants to fight the spread of "political Islam" in Austria in particular, using measures such as extending the grounds for detention and lowering the threshold for deportation. In a tweet published at the start of the new year, his party said that it would continue "the fight against illegal migration and stop the forming of parallel societies and political Islam" in Austria.

New immigration strategy

Among a raft of proposals, the coalition also intends to introduce a "new immigration strategy" with the ultimate goal of separating work-based immigration from asylum-seeking, while also making access to the labor market easier for migrant workers.

Introducing any changes to Austria's immigration system will likely take many months of negotiation, not only between the two coalition parties but also with the opposition, which will be keen to highlight weaknesses in the arrangement between the OeVP and the Green Party from both the left and the right. Any overhaul of immigration law will also have to comply with EU law.

Despite these safeguards, there are many native Austrians who would welcome significant changes to the country's immigration laws. Critical views towards migrants are quite common in the country; a recent poll by IPSOS revealed that Austrians believed that over a third of their population was made up of people with a foreign background, while in truth Austria's immigrant population stands at 16%.

Extended headscarf ban for youths

Among the more headline-grabbing, migration-related issues is the pending introduction of a headscarf ban on girls under the age of 14 at state schools.

A ban on headscarves is already in place, though it currently applies only to students in kindergartens and elementary schools. Its expansion is likely to meet opposition from the Greens, who take a more inclusive attitude toward multiculturalism.

Headscarf bans and other issues related to religious clothing have repeatedly become bones of contention in the Alpine nation over the years: urban populations in cities like Vienna and Graz tend to take more pluralistic views, while rural communities tend to see Christianity as an important aspect of their national identity.

Constitutional integrity comes first

Any proposed legal changes will have to pass through the Austrian parliament, in which Kurz' OeVP and the Greens together have a majority. However, Green Party parliamentarians could still, in defiance of the coalition agreement, raise opposition on key sticking points in the future, with migration a likely candidate for disagreement between the two parties.

Furthermore, even if any such changes or compromises thereof were to be passed by parliament, Austria's Constitutional Court could still throw a spanner in the works by undoing newly enacted laws.

Last year, Kurz suffered a considerable defeat when the country's highest court ruled that measures designed to reduce welfare payments for immigrants failing to learn German were unconstitutional.

The politics of politics

While making good on campaign promises against immigration appeals to Kurz' voter base and may even expand it, the reality in a country of fewer than 9 million people is that relatively few people on the ground are ever affected by sweeping changes to immigration regulations.

However, such tougher immigration rules might have a deterrent effect on people planning to come to Austria, which, relative to its small population, has one of the largest migrant populations within the EU.

The majority of immigrants to Austria with legal status, however, come from other countries within Europe; according to the latest Austrian government statistics, nearly 1 million immigrants to Austria come from Europe, compared to 117,000 from Turkey, 50,000 from Syria and 44,500 from Afghanistan.

with AP, AFP, dpa, Reuters

Columnist Razvan Sibii: The resistance, as organized by immigration lawyers – GazetteNET

Published: 1/5/2020 3:00:39 PM

Modified: 1/5/2020 3:00:11 PM

Throughout 2019, the journalists working the immigration beat have struggled to keep up with the near-daily indignities that the Trump administration has visited on the migrants seeking admission into the U.S. One byproduct of that is that many worthy stories about people fighting back against those indignities have been under-covered. Here are two such stories.

In the summer of 2014, as the so-called surge of families and unaccompanied minors overwhelmed U.S. Immigration and Customs Enforcement, the Obama administration decided to detain hundreds of families instead of releasing them conditionally until their cases could be heard in immigration court.

Megan Kludt, now a partner with the Northampton-based immigration law firm of Curran, Berger & Kludt, volunteered at the border helping people imprisoned in a makeshift holding facility in Artesia, New Mexico. "The detention of children was unprecedented, and at the time, felt like an absolutely off-the-charts violation of human rights," Kludt says.

Upon returning to the Pioneer Valley, she joined forces with the ACLU of Massachusetts Immigrant Protection Project, connecting local immigrants with attorneys. In 2018, the fresh hell unleashed by the Trump administration's family separation policy brought Kludt's focus back to the southern border. She now works with the El Paso Immigration Collaborative (EPIC), an alliance of several non-governmental organizations and law firms around the country, on the biggest challenge currently facing immigration advocates: helping detained migrants make a case in front of an immigration judge or an ICE officer that they are not a danger to the community or a flight risk, and can therefore be released until their case is decided. (Disclosure: Kludt occasionally guest-speaks to my UMass classes for a nominal fee.)

Local organizations do the best they can, Kludt says, but they have a hard time reaching everyone who needs help. Using a specially designed case management system and a production-line approach to its work, EPIC is able to help thousands of people document their ties to the U.S. by contacting family members or friends who have agreed to sponsor them, posting bond, and preparing parole requests. EPIC also collects data about ICE practices that can then be used in lawsuits. More than 1,000 attorneys and volunteers, many of them fluent in Spanish, French or Portuguese, contribute to this massive effort remotely.

"Our goal is to provide service and to try to release as many people as possible, but if we're not actually changing the system, we're not really succeeding. So we also need to be constantly checking in about advocacy. What we want to see is policy changes," Kludt says. "It's really a human rights crisis. There's a lot of things that are going on under this administration that are really heartbreaking, but everyone has their place and what they can do. In my case, I'm an immigration attorney, so this is my place, this is my stand at this time."

While collaboratives like EPIC have managed in recent years to deliver at least some assistance to many of the refugees detained in facilities across the United States, tens of thousands of individuals and families remain largely out of reach in improvised shelters south of the border because of the government's new "Remain in Mexico" policy. In the sad hierarchy of wretchedness, these people probably rate as the most vulnerable group of refugees, as they have to contend not only with miserable living conditions, but also with extortion, assault and even kidnapping.

Border Angels is one of the few U.S.-based outfits that have been able to consistently assist this category of people. For decades, the organization was best known for leaving water jugs in the desert areas of the border for migrants to find. They now also directly support 16 migrant shelters in Tijuana with donations collected from Americans, electricity and water bills, food, legal representation and bond.

That work is personal for Dulce Garcia, a Border Angels board member and a DACA recipient. "I'm still undocumented, even though I came here in 1987 when I was about 4 years old. Fast-forward to today: I'm a property owner, a business owner, I have my own law practice, and I'm also the executive director for this nonprofit. But no matter how much I pay in taxes, no matter how much I feel like I've earned my keep, I still will never be a U.S. citizen the way the laws are today," Garcia says.

Her uncle died trying to cross the desert into the U.S. When she was in high school, her brother was detained by ICE, and now lives with a deportation order that will be enforceable as soon as DACA, or Deferred Action for Childhood Arrivals, is ended. In September of 2017, Garcia successfully sued the Trump administration in a bid to retain DACA protections. When the Supreme Court began hearing oral arguments on the legality of DACA in November 2019, Garcia was in attendance. But until the court, Congress and the American voter finally make their decisions, Garcia and the hundreds of volunteers she coordinates continue to fight back against inhumanity.

Interviewing migrants. Posting bond. Contacting family members. Drafting parole requests. Suing the government. Bringing toys and clothes to children stuck in migrant shelters. Leaving lifesaving water jugs in the desert. Paying electricity and water bills. They all chip away at the misery thousands of families are experiencing this winter.

Why Neuro-Symbolic Artificial Intelligence Is The AI Of The Future – Digital Trends

Picture a tray. On the tray is an assortment of shapes: Some cubes, others spheres. The shapes are made from a variety of different materials and represent an assortment of sizes. In total there are, perhaps, eight objects. My question: Looking at the objects, are there an equal number of large things and metal spheres?

It's not a trick question. The fact that it sounds like one is proof positive of just how simple it actually is. It's the kind of question that a preschooler could most likely answer with ease. But it's next to impossible for today's state-of-the-art neural networks. This needs to change. And it needs to happen by reinventing artificial intelligence as we know it.

That's not my opinion; it's the opinion of David Cox, director of the MIT-IBM Watson A.I. Lab in Cambridge, MA. In a previous life, Cox was a professor at Harvard University, where his team used insights from neuroscience to help build better brain-inspired machine learning computer systems. In his current role at IBM, he oversees work on the company's Watson A.I. platform. Watson, for those who don't know, was the A.I. that famously defeated two of the top game show players in history on the TV quiz show Jeopardy!. Watson also happens to be a primarily machine-learning system, trained using masses of data as opposed to human-derived rules.

So when Cox says that the world needs to rethink A.I. as it heads into a new decade, it sounds kind of strange. After all, the 2010s was arguably the most successful decade in A.I. history: a period in which breakthroughs happened seemingly weekly, and with no frosty hint of an A.I. winter in sight. This is exactly why he thinks A.I. needs to change, however. And his suggestion for that change, a currently obscure term called neuro-symbolic A.I., could well become one of those phrases we're intimately acquainted with by the time the 2020s come to an end.

Neuro-symbolic A.I. is not, strictly speaking, a totally new way of doing A.I. It's a combination of two existing approaches to building thinking machines; ones that were once pitted against each other as mortal enemies.

The symbolic part of the name refers to the first mainstream approach to creating artificial intelligence. From the 1950s through the 1980s, symbolic A.I. ruled supreme. To a symbolic A.I. researcher, intelligence is based on humans' ability to understand the world around them by forming internal symbolic representations. They then create rules for dealing with these concepts, and these rules can be formalized in a way that captures everyday knowledge.

If the brain is analogous to a computer, this means that every situation we encounter relies on us running an internal computer program which explains, step by step, how to carry out an operation, based entirely on logic. Provided that this is the case, symbolic A.I. researchers believe that those same rules about the organization of the world could be discovered and then codified, in the form of an algorithm, for a computer to carry out.

Symbolic A.I. resulted in some pretty impressive demonstrations. For example, in 1964 the computer scientist Bertram Raphael developed a system called SIR, standing for Semantic Information Retrieval. SIR was a computational reasoning system that was seemingly able to learn relationships between objects in a way that resembled real intelligence. If you were to tell it that, for instance, "John is a boy; a boy is a person; a person has two hands; a hand has five fingers," then SIR would answer the question "How many fingers does John have?" with the correct number: 10.
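To make the flavor of that rule-based style concrete, here is a minimal sketch in Python. It is not Raphael's actual SIR implementation; the fact tables and the chaining function are illustrative stand-ins for the kind of explicit, human-authored knowledge a symbolic system relies on.

```python
# A toy, SIR-flavored reasoner: facts are explicit symbols, and answers come
# from chaining "is-a" and "has-a" relations supplied by a human.
is_a = {"John": "boy", "boy": "person"}                 # John is a boy; a boy is a person
has = {"person": ("hand", 2), "hand": ("finger", 5)}    # a person has two hands; a hand has five fingers

def count_parts(entity, part):
    """Count how many of `part` an entity has by walking is-a links, then multiplying has-a counts."""
    while entity not in has and entity in is_a:          # climb the is-a hierarchy
        entity = is_a[entity]
    if entity not in has:
        return 0
    child, n = has[entity]
    if child == part:
        return n
    return n * count_parts(child, part)                  # e.g. 2 hands x 5 fingers each

print(count_parts("John", "finger"))  # -> 10
```

Every step of that answer can be traced back to a rule someone wrote down, which is both the strength and the brittleness of the symbolic approach.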

Computer systems based on symbolic A.I. reached the height of their powers, and began their decline, in the 1980s. This was the decade of the so-called expert system, which attempted to use rule-based systems to solve real-world problems, such as helping organic chemists identify unknown organic molecules or assisting doctors in recommending the right dose of antibiotics for infections.

The underlying concept of these expert systems was solid. But they had problems. The systems were expensive, required constant updating, and, worst of all, could actually become less accurate the more rules were incorporated.

The neuro part of neuro-symbolic A.I. refers to deep learning neural networks. Neural nets are the brain-inspired type of computation which has driven many of the A.I. breakthroughs seen over the past decade. A.I. that can drive cars? Neural nets. A.I. which can translate text into dozens of different languages? Neural nets. A.I. which helps the smart speaker in your home to understand your voice? Neural nets are the technology to thank.

Neural networks work differently from symbolic A.I. because they're data-driven, rather than rule-based. To explain something to a symbolic A.I. system means explicitly providing it with every bit of information it needs to be able to make a correct identification. As an analogy, imagine sending a friend to pick up your mom from the bus station, but having to describe her by providing a set of rules that would let the friend pick her out of the crowd. To train a neural network to do it, you simply show it thousands of pictures of the object in question. Once it gets smart enough, not only will it be able to recognize that object; it can make up its own similar objects that have never actually existed in the real world.
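As a contrast to the rule-based sketch above, here is a minimal data-driven sketch in PyTorch. The random tensors are stand-ins for a real labeled photo collection, and the tiny network and label names are hypothetical; the point is only that no rules are supplied, just examples and a loss to learn from.

```python
# A minimal sketch of the data-driven approach: a small network learns a
# classification purely from labeled examples, with no hand-written rules.
import torch
import torch.nn as nn

images = torch.randn(256, 3, 32, 32)   # stand-in for thousands of example photos
labels = torch.randint(0, 2, (256,))   # hypothetical labels: 0 = "not mom", 1 = "mom"

model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 2),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):                  # a few passes over the examples
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()                     # the "learning" step: adjust weights from data
    optimizer.step()
```

Everything the model ends up knowing is implicit in its weights, which is exactly why it needs so many examples and why its reasoning is hard to inspect.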

"For sure, deep learning has enabled amazing advances," David Cox told Digital Trends. "At the same time, there are concerning cracks in the wall that are starting to show."

One of these so-called cracks concerns exactly the thing that has made today's neural networks so powerful: data. Just like a human, a neural network learns based on examples. But while a human might only need to see one or two training examples of an object to remember it correctly, an A.I. will require many, many more. Accuracy depends on having large amounts of annotated data with which it can learn each new task.

That makes them less good at statistically rare "black swan" problems. A black swan event, popularized by Nassim Nicholas Taleb, is a corner case that is statistically rare. "Many of our deep learning solutions today, as amazing as they are, are kind of 80-20 solutions," Cox continued. "They'll get 80% of cases right, but if those corner cases matter, they'll tend to fall down. If you see an object that doesn't normally belong [in a certain place], or an object at an orientation that's slightly weird, even amazing systems will fall down."

Before he joined IBM, Cox co-founded a company, Perceptive Automata, that developed software for self-driving cars. The team had a Slack channel in which they posted funny images they had stumbled across during the course of data collection. One of them, taken at an intersection, showed a traffic light on fire. "It's one of those cases that you might never see in your lifetime," Cox said. "I don't know if Waymo and Tesla have images of traffic lights on fire in the datasets they use to train their neural networks, but I'm willing to bet if they have any, they'll only have a very few."

It's one thing for a corner case to be insignificant because it rarely happens and doesn't matter all that much when it does. Getting a bad restaurant recommendation might not be ideal, but it's probably not going to be enough to even ruin your day. So long as the previous 99 recommendations the system made are good, there's no real cause for frustration. A self-driving car failing to respond properly at an intersection because of a burning traffic light or a horse-drawn carriage could do a lot more than ruin your day. It might be unlikely to happen, but if it does, we want to know that the system is designed to be able to cope with it.

"If you have the ability to reason and extrapolate beyond what we've seen before, we can deal with these scenarios," Cox explained. "We know that humans can do that. If I see a traffic light on fire, I can bring a lot of knowledge to bear. I know, for example, that the light is not going to tell me whether I should stop or go. I know I need to be careful because [drivers around me will be confused.] I know that drivers coming the other way may be behaving differently because their light might be working. I can reason a plan of action that will take me where I need to go. In those kinds of safety-critical, mission-critical settings, that's somewhere I don't think that deep learning is serving us perfectly well yet. That's why we need additional solutions."

The idea of neuro-symbolic A.I. is to bring together these approaches to combine both learning and logic. Neural networks will help make symbolic A.I. systems smarter by breaking the world into symbols, rather than relying on human programmers to do it for them. Meanwhile, symbolic A.I. algorithms will help incorporate common sense reasoning and domain knowledge into deep learning. The results could lead to significant advances in A.I. systems tackling complex tasks, relating to everything from self-driving cars to natural language processing. And all while requiring much less data for training.

"Neural networks and symbolic ideas are really wonderfully complementary to each other," Cox said. "Because neural networks give you the answers for getting from the messiness of the real world to a symbolic representation of the world, finding all the correlations within images. Once you've got that symbolic representation, you can do some pretty magical things in terms of reasoning."

For instance, in the shape example I started this article with, a neuro-symbolic system would use a neural network's pattern-recognition capabilities to identify objects. Then it would rely on symbolic A.I. to apply logic and semantic reasoning to uncover new relationships. Such systems have already been proven to work effectively.
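Putting the two earlier sketches together, here is a minimal, hypothetical illustration of that division of labor for the tray question. The `detect_objects` function stands in for the neural perception step (its output here is simply assumed), while the counting logic below it is the symbolic step.

```python
# A toy neuro-symbolic split: pixels -> symbols (neural, faked here), then
# symbols -> answer (symbolic logic you can read and audit).
from dataclasses import dataclass

@dataclass
class Obj:
    shape: str      # "cube" or "sphere"
    material: str   # "metal" or "rubber"
    size: str       # "large" or "small"

def detect_objects(image):
    """Placeholder for the neural step: in a real system a trained network would emit these symbols."""
    return [
        Obj("cube", "metal", "large"),
        Obj("sphere", "metal", "small"),
        Obj("sphere", "rubber", "large"),
        Obj("cube", "rubber", "small"),
    ]

def equal_large_vs_metal_spheres(image):
    """Symbolic step: ordinary logic over the detected symbols answers the question."""
    objects = detect_objects(image)
    n_large = sum(o.size == "large" for o in objects)
    n_metal_spheres = sum(o.shape == "sphere" and o.material == "metal" for o in objects)
    return n_large == n_metal_spheres

print(equal_large_vs_metal_spheres(image=None))  # -> False for this assumed scene
```

Because the reasoning half is explicit code over symbols, its answer can be explained step by step, which is precisely the property discussed next.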

It's not just corner cases where this would be useful, either. Increasingly, it is important that A.I. systems are explainable when required. A neural network can carry out certain tasks exceptionally well, but much of its inner reasoning is "black boxed," rendered inscrutable to those who want to know how it made its decision. Again, this doesn't matter so much if it's a bot that recommends the wrong track on Spotify. But if you've been denied a bank loan, rejected from a job application, or someone has been injured in an incident involving an autonomous car, you'd better be able to explain why certain recommendations have been made. That's where neuro-symbolic A.I. could come in.

A few decades ago, the worlds of symbolic A.I. and neural networks were at odds with one another. The renowned figures who championed the approaches not only believed that their approach was right; they believed that this meant the other approach was wrong. They weren't necessarily incorrect to do so. Competing to solve the same problems, and with limited funding to go around, both schools of A.I. appeared fundamentally opposed to each other. Today, it seems like the opposite could turn out to be true.

"It's really fascinating to see the younger generation," Cox said. "Most of my team are relatively junior people: fresh, excited, fairly recently out of their Ph.D.s. They just don't have any of that history. They just don't care [about the two approaches being pitted against each other] and not caring is really powerful because it opens you up and gets rid of those prejudices. They're happy to explore intersections ... They just want to do something cool with A.I."

Should all go according to plan, all of us will benefit from the results.

Welcome to the roaring 2020s, the artificial intelligence decade – GreenBiz

This article first appeared in GreenBiz's weekly newsletter, VERGE Weekly, running Wednesdays.

I've long believed the most profound technology innovations are ones we take for granted on a day-to-day basis until "suddenly" they are part of our daily existence, such as computer-aided navigation or camera-endowed smartphones. The astounding complexity of what's "inside" these inventions is what makes them seem simple.

Perhaps that's why I'm so fascinated by the intersection of artificial intelligence and sustainability: the applications being made possible by breakthroughs in machine learning, image recognition, analytics and sensors are profoundly practical. In many instances, the combination of these technologies could completely transform familiar systems and approaches used by the environmental and sustainability communities, making them far smarter with far less human intervention.

Take the camera trap, a pretty common technique used to study wildlife habits and biodiversity and one that has been supported by an array of big-name tech companies. Except what researcher has the time or bandwidth to analyze thousands, let alone millions, of images? Enter systems such as Wildlife Insights, a collaboration between Google Earth and seven organizations, led by Conservation International.

Wildlife Insights is, quite simply, the largest database of public camera-trap images in the world: it includes 4.5 million photos that have been analyzed and mapped with AI for characteristics such as country, year, species and so forth. Scientists can use it to upload their own trap photos, visualize territories and gather insights about species health.

Here's the jaw-dropper: This AI-endowed database can analyze 3.6 million photos in an hour, compared with the 300 to 1,000 images that you or I can handle. Depending on the species, the accuracy of identification is between 80 and 98.6 percent. Plus, the system automatically discounts shots where no animals are present: no more blanks.

At the same time, we are certainly right to be cautious about the potential side effects of AI. That theme comes through loud and clear in five AI predictions published by IBM in mid-December. Two resonate with me the most: first, the idea that AI will be instrumental in building trust and ensuring that data is governed in ways that are secure and reliable; and second, that before we get too excited about all the cool things AI might be able to do, we need to make sure that it doesn't exacerbate the problem. That means spending more time focused on ways to make the data centers behind AI applications less energy-intensive and less impactful from a materials standpoint.

From an ethical standpoint, I also have two big concerns: first, that sufficient energy is put into ensuring that the data behind the AI predictions we will come to rely on more heavily isn't flawed or biased. That means spending time to make sure a diverse set of human perspectives are represented and that the numbers are right in the first place. And second, we must view these systems as part of the overall solution, not replacements for human workers.

As IBM's vice president of AI research, Sriram Raghavan, puts it: "New research from the MIT-IBM Watson AI Lab shows that AI will increasingly help us with tasks such as scheduling, but will have a less direct impact on jobs that require skills such as design expertise and industrial strategy. Expect workers in 2020 to begin seeing these effects as AI makes its way into workplaces around the world; employers have to start adapting job roles, while employees should focus on expanding their skills."

Projections by tech market research firm IDC suggest that spending on AI systems could reach $97.9 billion in 2023, about 2.5 times the estimated $37.5 billion spent in 2019. Why now? It's a combination of geeky factors: faster chips; better cameras; massive cloud data-processing services. Plus, did I mention that we don't really have time to waste?

Where will AI-enabled applications really make a difference for environmental and corporate sustainability? Here are five areas where I believe AI will have an especially dramatic impact over the next decade.

For more inspiration and background on the possibilities, I suggest this primer (PDF) published by the World Economic Forum. And consider this your open invitation to alert me about the intriguing applications of AI you're seeing in your own work.

A reality check on artificial intelligence: Can it match the hype? – PhillyVoice.com

Health products powered by artificial intelligence, or AI, are streaming into our lives, from virtual doctor apps to wearable sensors and drugstore chatbots.

IBM boasted that its AI could outthink cancer. Others say computer systems that read X-rays will make radiologists obsolete.

"There's nothing that I've seen in my 30-plus years studying medicine that could be as impactful and transformative as AI," said Dr. Eric Topol, a cardiologist and executive vice president of Scripps Research in La Jolla, Calif. AI can help doctors interpret MRIs of the heart, CT scans of the head and photographs of the back of the eye, and could potentially take over many mundane medical chores, freeing doctors to spend more time talking to patients, Topol said.

Even the Food and Drug Administration, which has approved more than 40 AI products in the past five years, says the potential of digital health is "nothing short of revolutionary."

Yet many health industry experts fear AI-based products won't be able to match the hype. Many doctors and consumer advocates fear that the tech industry, which lives by the mantra "fail fast and fix it later," is putting patients at risk, and that regulators aren't doing enough to keep consumers safe.

Early experiments in AI provide a reason for caution, said Mildred Cho, a professor of pediatrics at Stanford's Center for Biomedical Ethics.

Systems developed in one hospital often flop when deployed in a different facility, Cho said. Software used in the care of millions of Americans has been shown to discriminate against minorities. And AI systems sometimes learn to make predictions based on factors that have less to do with disease than the brand of MRI machine used, the time a blood test is taken or whether a patient was visited by a chaplain. In one case, AI software incorrectly concluded that people with pneumonia were less likely to die if they had asthma, an error that could have led doctors to deprive asthma patients of the extra care they need.

"It's only a matter of time before something like this leads to a serious health problem," said Dr. Steven Nissen, chairman of cardiology at the Cleveland Clinic.

Medical AI, which pulled in $1.6 billion in venture capital funding in the third quarter alone, is "nearly at the peak of inflated expectations," concluded a July report from the research company Gartner. "As the reality gets tested, there will likely be a rough slide into the trough of disillusionment."

That reality check could come in the form of disappointing results when AI products are ushered into the real world. Even Topol, the author of "Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again," acknowledges that many AI products are little more than hot air. "It's a mixed bag," he said.

Experts such as Dr. Bob Kocher, a partner at the venture capital firm Venrock, are blunter. Most AI products have little evidence to support them, Kocher said. Some risks won't become apparent until an AI system has been used by large numbers of patients. "We're going to keep discovering a whole bunch of risks and unintended consequences of using AI on medical data," Kocher said.

None of the AI products sold in the U.S. have been tested in randomized clinical trials, the strongest source of medical evidence, Topol said. The first and only randomized trial of an AI system, which found that colonoscopy with computer-aided diagnosis found more small polyps than standard colonoscopy, was published online in October.

Few tech startups publish their research in peer-reviewed journals, which allow other scientists to scrutinize their work, according to a January article in the European Journal of Clinical Investigation. Such "stealth research," described only in press releases or promotional events, often overstates a company's accomplishments.

And although software developers may boast about the accuracy of their AI devices, experts note that AI models are mostly tested on computers, not in hospitals or other medical facilities. Using unproven software may make patients into unwitting guinea pigs, said Dr. Ron Li, medical informatics director for AI clinical integration at Stanford Health Care.

AI systems that learn to recognize patterns in data are often described as "black boxes" because even their developers don't know how they have reached their conclusions. Given that AI is so new, and many of its risks unknown, the field needs careful oversight, said Pilar Ossorio, a professor of law and bioethics at the University of Wisconsin-Madison.

Yet the majority of AI devices don't require FDA approval.

"None of the companies that I have invested in are covered by the FDA regulations," Kocher said.

Legislation passed by Congress in 2016 and championed by the tech industry exempts many types of medical software from federal review, including certain fitness apps, electronic health records and tools that help doctors make medical decisions.

There's been little research on whether the 320,000 medical apps now in use actually improve health, according to a report on AI published Dec. 17 by the National Academy of Medicine.

"Almost none of the [AI] stuff marketed to patients really works," said Dr. Ezekiel Emanuel, professor of medical ethics and health policy in the Perelman School of Medicine at the University of Pennsylvania.

The FDA has long focused its attention on devices that pose the greatest threat to patients. And consumer advocates acknowledge that some devices, such as ones that help people count their daily steps, need less scrutiny than ones that diagnose or treat disease.

Some software developers don't bother to apply for FDA clearance or authorization, even when legally required, according to a 2018 study in Annals of Internal Medicine.

Industry analysts say that AI developers have little interest in conducting expensive and time-consuming trials. "It's not the main concern of these firms to submit themselves to rigorous evaluation that would be published in a peer-reviewed journal," said Joachim Roski, a principal at Booz Allen Hamilton, a technology consulting firm, and co-author of the National Academy's report. "That's not how the U.S. economy works."

But Oren Etzioni, chief executive officer at the Allen Institute for AI in Seattle, said AI developers have a financial incentive to make sure their medical products are safe.

"If failing fast means a whole bunch of people will die, I don't think we want to fail fast," Etzioni said. "Nobody is going to be happy, including investors, if people die or are severely hurt."

The FDA has come under fire in recent years for allowing the sale of dangerous medical devices, which have been linked by the International Consortium of Investigative Journalists to 80,000 deaths and 1.7 million injuries over the past decade.

Many of these devices were cleared for use through a controversial process called the 510(k) pathway, which allows companies to market moderate-risk products with no clinical testing as long as they're deemed similar to existing devices.

In 2011, a committee of the National Academy of Medicine concluded the 510(k) process is so fundamentally flawed that the FDA should throw it out and start over.

Instead, the FDA is using the process to greenlight AI devices.

Of the 14 AI products authorized by the FDA in 2017 and 2018, 11 were cleared through the 510(k) process, according to a November article in JAMA. None of these appear to have had new clinical testing, the study said. The FDA cleared an AI device designed to help diagnose liver and lung cancer in 2018 based on its similarity to imaging software approved 20 years earlier. That software had itself been cleared because it was deemed substantially equivalent to products marketed before 1976.

AI products cleared by the FDA today are largely "locked," so that their calculations and results will not change after they enter the market, said Bakul Patel, director for digital health at the FDA's Center for Devices and Radiological Health. The FDA has not yet authorized "unlocked" AI devices, whose results could vary from month to month in ways that developers cannot predict.

To deal with the flood of AI products, the FDA is testing a radically different approach to digital device regulation, focusing on evaluating companies, not products.

The FDA's pilot pre-certification program, launched in 2017, is designed to reduce the time and cost of market entry for software developers, imposing the "least burdensome" system possible. FDA officials say they want to keep pace with AI software developers, who update their products much more frequently than makers of traditional devices, such as X-ray machines.

Scott Gottlieb said in 2017 while he was FDA commissioner that government regulators need to make sure its approach to innovative products is efficient and that it fosters, not impedes, innovation.

Under the plan, the FDA would pre-certify companies that demonstrate a culture of quality and organizational excellence, which would allow them to provide less upfront data about devices.

Pre-certified companies could then release devices with a streamlined review or no FDA review at all. Once products are on the market, companies will be responsible for monitoring their own products' safety and reporting back to the FDA. Nine companies have been selected for the pilot: Apple, FitBit, Samsung, Johnson & Johnson, Pear Therapeutics, Phosphorus, Roche, Tidepool and Verily Life Sciences.

High-risk products, such as software used in pacemakers, will still get a comprehensive FDA evaluation. "We definitely don't want patients to be hurt," said Patel, who noted that devices cleared through pre-certification can be recalled if needed. "There are a lot of guardrails still in place."

But research shows that even low- and moderate-risk devices have been recalled due to serious risks to patients, said Diana Zuckerman, president of the National Center for Health Research. "People could be harmed because something wasn't required to be proven accurate or safe before it is widely used."

Johnson & Johnson, for example, has recalled hip implants and surgical mesh.

In a series of letters to the FDA, the American Medical Association and others have questioned the wisdom of allowing companies to monitor their own performance and product safety.

"The honor system is not a regulatory regime," said Dr. Jesse Ehrenfeld, who chairs the physician group's board of trustees.

In an October letter to the FDA, Sens. Elizabeth Warren (D-Mass.), Tina Smith (D-Minn.) and Patty Murray (D-Wash.) questioned the agency's ability to ensure company safety reports are "accurate, timely and based on all available information."

Some AI devices are more carefully tested than others.

An AI-powered screening tool for diabetic eye disease was studied in 900 patients at 10 primary care offices before being approved in 2018. The manufacturer, IDx Technologies, worked with the FDA for eight years to get the product right, said Dr. Michael Abramoff, the companys founder and executive chairman.

The test, sold as IDx-DR, screens patients for diabetic retinopathy, a leading cause of blindness, and refers high-risk patients to eye specialists, who make a definitive diagnosis.

IDx-DR is the first "autonomous" AI product, one that can make a screening decision without a doctor. The company is now installing it in primary care clinics and grocery stores, where it can be operated by employees with a high school diploma. Abramoff's company has taken the unusual step of buying liability insurance to cover any patient injuries.

Yet some AI-based innovations intended to improve care have had the opposite effect.

A Canadian company, for example, developed AI software to predict a person's risk of Alzheimer's based on their speech. Predictions were more accurate for some patients than others. "Difficulty finding the right word may be due to unfamiliarity with English, rather than to cognitive impairment," said co-author Frank Rudzicz, an associate professor of computer science at the University of Toronto.

Doctors at New York's Mount Sinai Hospital hoped AI could help them use chest X-rays to predict which patients were at high risk of pneumonia. Although the system made accurate predictions from X-rays shot at Mount Sinai, the technology flopped when tested on images taken at other hospitals. Eventually, researchers realized the computer had merely learned to tell the difference between the hospital's portable chest X-rays, taken at a patient's bedside, and those taken in the radiology department. Doctors tend to use portable chest X-rays for patients too sick to leave their room, so it's not surprising that these patients had a greater risk of lung infection.

DeepMind, a company owned by Google, has created an AI-based mobile app that can predict which hospitalized patients will develop acute kidney failure up to 48 hours in advance. A blog post on the DeepMind website described the system, used at a London hospital, as a "game changer." But the AI system also produced two false alarms for every correct result, according to a July study in Nature. That may explain why patients' kidney function didn't improve, said Dr. Saurabh Jha, associate professor of radiology at the Hospital of the University of Pennsylvania. Any benefit from early detection of serious kidney problems may have been diluted by a high rate of overdiagnosis, in which the AI system flagged borderline kidney issues that didn't need treatment, Jha said. Google had no comment in response to Jha's conclusions.

False positives can harm patients by prompting doctors to order unnecessary tests or withhold recommended treatments, Jha said. For example, a doctor worried about a patient's kidneys might stop prescribing ibuprofen, a generally safe pain reliever that poses a small risk to kidney function, in favor of an opioid, which carries a serious risk of addiction.

As these studies show, software with impressive results in a computer lab can founder when tested in real time, Stanford's Cho said. That's because diseases are more complex, and the health care system far more dysfunctional, than many computer scientists anticipate.

Many AI developers cull electronic health records because they hold huge amounts of detailed data, Cho said. But those developers often aren't aware that they're building atop a deeply broken system. Electronic health records were developed for billing, not patient care, and are filled with mistakes or missing data.

A KHN investigation published in March found sometimes life-threatening errors in patients' medication lists, lab tests and allergies.

In view of the risks involved, doctors need to step in to protect their patients' interests, said Dr. Vikas Saini, a cardiologist and president of the nonprofit Lown Institute, which advocates for wider access to health care.

"While it is the job of entrepreneurs to think big and take risks," Saini said, "it is the job of doctors to protect their patients."

Kaiser Health News (KHN) is a national health policy news service. It is an editorially independent program of the Henry J. Kaiser Family Foundation which is not affiliated with Kaiser Permanente.

Top five projections in Artificial Intelligence for 2020 – Economic Times

There was both good and bad news about AI in 2019. Of course, bad news always gets preference and catches people's attention. Some of the most prominent bad news concerned fake-news generation, porn fakes created from social media images, an autonomous vehicle killing a pedestrian, AI systems attacking a production facility, and data biases causing problems in AI applications. On the good-news side, we saw innovative healthcare applications deployed in hospitals, AI tools helping people with disabilities, robots being used across a growing set of domains, and AI assistants and smart devices guiding people through day-to-day queries and chores. The speed of evolution, adoption and research in AI is accelerating. It will be important for society to know what lies ahead so that we are prepared for the worst and hopeful for the best.

AI will come out of the Data Conundrum

Although one of the main drivers of AI's success over the last decade has been the availability of exponentially increasing data, data itself is now becoming one of the key barriers to developing futuristic AI applications. Advances in the study of human intelligence also show that our species is very effective at adapting to unseen situations, which contrasts with the current capabilities of AI.

In the past year, there has been significant activity in AI research to tackle this issue. Specific progress has been made in reinforcement learning techniques, which address the limitations of supervised learning methods that require huge amounts of data. DeepMind's recent achievement is among the top success stories in this domain: its StarCraft II system, which took the Grandmaster throne, is a game changer and an indicator of the tremendous progress and potential of this technology. Generating data through simulation also picked up in the past year, and it will grow at a much faster pace in 2020.

For many complex applications, it is almost impossible to have data covering every phenomenon of the problem. Autonomous vehicles, healthcare, space research, prediction of natural disasters and video generation are some of the areas where high-quality simulation data will be much more effective. In most of these cases, real historical data will be too limited to predict new situations that can occur in the future. Space research, for example, produces new findings every day and overturns old assumptions; in such a scenario, any AI application relying only on historical data is bound to fail. Simulating new possibilities with high-precision software, however, can alter the direction of AI applications in these domains.
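Where real data is scarce, a common pattern is to pad the training set with simulated examples. The toy sketch below (my own illustration, not from the article) generates synthetic "rare event" sensor readings from a simple simulator and mixes them with the few real ones before fitting a classifier.

```python
# Augmenting scarce real data with simulated rare-event examples (toy sketch).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Scarce real data: plenty of normal readings, almost no rare-event examples.
real_normal = rng.normal(loc=0.0, scale=1.0, size=(500, 3))
real_rare = rng.normal(loc=4.0, scale=1.0, size=(5, 3))

def simulate_rare_event(n):
    """Toy simulator standing in for a physics- or scenario-based model of the rare event."""
    return rng.normal(loc=4.0, scale=1.5, size=(n, 3))

sim_rare = simulate_rare_event(500)   # simulation fills the gap that history cannot

X = np.vstack([real_normal, real_rare, sim_rare])
y = np.array([0] * len(real_normal) + [1] * (len(real_rare) + len(sim_rare)))

clf = LogisticRegression().fit(X, y)
print(round(clf.score(X, y), 3))      # sanity check on the augmented training set
```

The resulting model is only as good as the simulator's fidelity, which is why high-precision simulation software matters so much here.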

Even in cases where applications are starved for additional local training data, public and private organizations are coming forward to share data and collaborate. Leaders are becoming more conversant with the requirements of AI, and the mindset is changing.

All these factors combined will have a dramatic effect in 2020 and will reduce AI's dependency on data. AI will come out of the data cage.

Machine Generated Content with Artificial Intelligence takes over Crowd Intelligence

We have seen prototypes and demonstrations of content-generating bots producing user reviews, news stories, celebrity images, funny videos, music compositions, short stories and artistic paintings. This will become more sophisticated with advances in self-supervised learning led by NVIDIA, Google and Microsoft, which are pushing the boundaries to new frontiers. Most popular online retail stores, food portals, and hotel and travel aggregators are built on customer reviews. Until now, these were written by real customers; most of us put our faith in the crowd and took their reviews at face value, and that trust has become a key driver of new sales in different business segments. In other words, we were relying on crowd intelligence. But with these new content-generation bots, such businesses will be flooded with AI-generated reviews, and it will be very easy to fool the customer.
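To see why fake reviews at scale are cheap, consider this minimal sketch using the Hugging Face transformers library and the public GPT-2 model (an assumed setup, not any system named in the piece): a one-line prompt yields multiple plausible-sounding review continuations.

```python
# Generating plausible-looking review text with GPT-2 (illustrative only).
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="gpt2")
set_seed(42)  # make the sampled continuations reproducible

prompt = "I stayed at this hotel last weekend and"
fake_reviews = generator(prompt, max_length=60, num_return_sequences=3)

for review in fake_reviews:
    print(review["generated_text"])
    print("---")
```

Scaled up and lightly edited, output like this is very hard for an ordinary reader to distinguish from genuine crowd feedback.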

Another critical area is opinion formation around news, events and issues concerning society. Social media, online campaigns and messaging through mobile apps have become key channels for building public sentiment on important issues. This is another area facing an immediate danger of machines overtaking human beings in forming opinion. Next year this trend will consolidate, and there will be a visible effect on democratic governments. AI may become a key driver and a primary campaigner in elections; organizations, individuals or parties with AI supremacy will be able to win elections and drive the world.

The world will speak and understand one Language: The Language of AI

With the tremendous success and improvements brought by BERT and GPT-2, language translation is coming of age. People talking to anyone outside their own community will increasingly be talking through an AI language middleware. In 2019 we already saw devices that can help you converse with speakers of other languages, and the offerings are becoming more capable and inclusive, with more languages being added at an amazing pace. As such technologies come into mass usage, a plethora of applications will be developed, with great impact on business and society. Movement of people, skills and knowledge across language borders will become more common. This will also transform the cinema, performing arts and travel industries, affect the higher education sector, and affect different countries in diverse ways. It can prove to be an economic bonanza or a disaster, depending on how countries plan for and embrace the changes. Proactive leadership that understands the future impact of these technologies will be crucial to a considered transition of society and a happy future for these countries.

AI Boost for the powerful and AI poison for the under-privileged Groups

AI works for the powerful in the same way industrialization and digitalization did. People with resources deploy new-age technologies to their advantage; they have the means to invest in new applications and become first adopters. The power of artificial intelligence is being harnessed to optimize manufacturing and energy production, and to increase the efficiency of distribution networks, delivery chains and connectivity. Every big business is working to further increase AI adoption, including airlines, shipping corporations, mining companies and infrastructure conglomerates. AI is thus further intensifying the divide between the haves and have-nots. Ordinary people are becoming pawns in the hands of AI applications, and their privacy is under attack. As the cost of labor is devalued by automation and new technologies, wealth will be owned by a tiny percentage of the world's people.

Genuine voices, groups and organizations should strive for the development of technology with a human face. The UN and other groups are already working on the Sustainable Development Goals. Now it is time to put a proper framework in place, involving all stakeholders, so that the pace and direction of technology remain under humanity's control. This will involve developing comprehensive moral, ethical, legal and societal ecosystems governing the use, development and deployment of AI tools, technologies and applications.

Crazy increase in Defense Budgets for AI enabled Weaponization

A few countries are already in the advanced stages of developing lethal autonomous warfare systems. Sea Hunter, an autonomous unmanned surface vehicle for anti-submarine warfare, is already operational. China is in the final stages of deploying armies of micro swarm-drone systems that can launch covert suicide attacks on adversary infrastructure. Other permanent members of the Security Council are working on holistic warfare systems that are fully integrated with other functions of government. Facing a complex set of adversaries, Israel is working to use AI as a force multiplier and to make fast decisions amid the prevailing fog of hybrid warfare. AI also helps greatly in asymmetric warfare.

AI has enormous potential to launch complex cybersecurity attacks, which adversaries will need superior AI capabilities to counter. As the world's major financial systems, including banks and stock markets, are online, they may become easy targets for future AI systems used to blackmail and threaten governments. In recent years we have seen significant increases in AI-related defense budgets to support AI-enabled weaponization, and this will accelerate further in the coming years. Precision attacks on individuals and on countries' distribution and infrastructure networks will be enhanced by AI. We have already seen a precision attack powered by US-Israeli cooperation in Iraq, which resulted in the killing of Iran's top commander.

With all these trends in the pipeline, it will be vital for organizations, countries and the world to put their AI strategies in place. Having competent people who are expert in AI will be indispensable for survival in this new decade. We will need people who understand both human and machine-operated ecosystems and can make emotionally sound judgements that benefit humanity.

DISCLAIMER: Views expressed above are the author's own.

Can medical artificial intelligence live up to the hype? – Los Angeles Times

Health products powered by artificial intelligence are streaming into our lives, from virtual doctor apps to wearable sensors and drugstore chatbots.

IBM boasted that its AI could outthink cancer. Others say computer systems that read X-rays will make radiologists obsolete. AI can help doctors interpret MRIs of the heart, CT scans of the head and photographs of the back of the eye, and could potentially take over many mundane medical chores, freeing doctors to spend more time talking to patients, said Dr. Eric Topol, a cardiologist and executive vice president of Scripps Research in La Jolla.

Theres nothing that Ive seen in my 30-plus years studying medicine that could be as impactful and transformative as AI, Topol said. Even the Food and Drug Administration which has approved more than 40 AI products in the last five years says the potential of digital health is nothing short of revolutionary.

Yet many health industry experts fear AI-based products wont be able to match the hype. Some doctors and consumer advocates fear that the tech industry, which lives by the mantra fail fast and fix it later, is putting patients at risk and that regulators arent doing enough to keep consumers safe.

Early experiments in AI provide a reason for caution, said Mildred Cho, a professor of pediatrics at Stanfords Center for Biomedical Ethics.

Systems developed in one hospital often flop when deployed in a different facility, Cho said. Software used in the care of millions of Americans has been shown to discriminate against minorities. And AI systems sometimes learn to make predictions based on factors that have less to do with disease than the brand of MRI machine used, the time a blood test is taken or whether a patient was visited by a chaplain.

In one case, AI software incorrectly concluded that people with pneumonia were less likely to die if they had asthma an error that could have led doctors to deprive asthma patients of the extra care they need.

"It's only a matter of time before something like this leads to a serious health problem," said Dr. Steven Nissen, chairman of cardiology at the Cleveland Clinic.

Medical AI, which pulled in $1.6 billion in venture capital funding in the third quarter alone, is "nearly at the peak of inflated expectations," concluded a July report from the research company Gartner. "As the reality gets tested, there will likely be a rough slide into the trough of disillusionment."

That reality check could come in the form of disappointing results when AI products are ushered into the real world. Even Topol, the author of "Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again," acknowledges that many AI products are little more than hot air.

Experts such as Dr. Bob Kocher, a partner at the venture capital firm Venrock, are blunter. "Most AI products have little evidence to support them," Kocher said. Some risks won't become apparent until an AI system has been used by large numbers of patients. "We're going to keep discovering a whole bunch of risks and unintended consequences of using AI on medical data," Kocher said.

None of the AI products sold in the U.S. have been tested in randomized clinical trials, the strongest source of medical evidence, Topol said. The first and only randomized trial of an AI system, which found that colonoscopy with computer-aided diagnosis found more small polyps than standard colonoscopy, was published online in October.

Few tech start-ups publish their research in peer-reviewed journals, which allow other scientists to scrutinize their work, according to a January article in the European Journal of Clinical Investigation. Such "stealth research," described only in press releases or promotional events, often overstates a company's accomplishments.

And although software developers may boast about the accuracy of their AI devices, experts note that AI models are mostly tested on computers, not in hospitals or other medical facilities. Using unproven software may make patients into unwitting guinea pigs, said Dr. Ron Li, medical informatics director for AI clinical integration at Stanford Health Care.

AI systems that learn to recognize patterns in data are often described as "black boxes" because even their developers don't know how they reached their conclusions. Given that AI is so new, and many of its risks unknown, the field needs careful oversight, said Pilar Ossorio, a professor of law and bioethics at the University of Wisconsin-Madison.

Yet the majority of AI devices don't require FDA approval. "None of the companies that I have invested in are covered by the FDA regulations," Kocher said.

Legislation passed by Congress in 2016 and championed by the tech industry exempts many types of medical software from federal review, including certain fitness apps, electronic health records and tools that help doctors make medical decisions.

There's been little research on whether the 320,000 medical apps now in use actually improve health, according to a report on AI published Dec. 17 by the National Academy of Medicine.

The FDA has long focused its attention on devices that pose the greatest threat to patients. And consumer advocates acknowledge that some devices, such as ones that help people count their daily steps, need less scrutiny than ones that diagnose or treat disease.

Some software developers dont bother to apply for FDA clearance or authorization, even when legally required, according to a 2018 study in Annals of Internal Medicine.

Industry analysts say that AI developers have little interest in conducting expensive and time-consuming trials. "It's not the main concern of these firms to submit themselves to rigorous evaluation that would be published in a peer-reviewed journal," said Joachim Roski, a principal at Booz Allen Hamilton, a technology consulting firm, and coauthor of the National Academy's report. "That's not how the U.S. economy works."

But Oren Etzioni, chief executive at the Allen Institute for AI in Seattle, said AI developers have a financial incentive to make sure their medical products are safe.

"If failing fast means a whole bunch of people will die, I don't think we want to fail fast," Etzioni said. "Nobody is going to be happy, including investors, if people die or are severely hurt."

The FDA has come under fire in recent years for allowing the sale of dangerous medical devices, which have been linked by the International Consortium of Investigative Journalists to 80,000 deaths and 1.7 million injuries over the last decade.

Many of these devices were cleared for use through a controversial process called the 510(k) pathway, which allows companies to market moderate-risk products with no clinical testing as long as they're deemed similar to existing devices.

In 2011, a committee of the National Academy of Medicine concluded the 510(k) process is so fundamentally flawed that the FDA should throw it out and start over.

Instead, the FDA is using the process to greenlight AI devices.

Of the 14 AI products authorized by the FDA in 2017 and 2018, 11 were cleared through the 510(k) process, according to a November article in JAMA. None of these appear to have had new clinical testing, the study said.

The FDA cleared an AI device designed to help diagnose liver and lung cancer in 2018 based on its similarity to imaging software approved 20 years earlier. That software had itself been cleared because it was deemed substantially equivalent to products marketed before 1976.

AI products cleared by the FDA today are largely "locked," so that their calculations and results will not change after they enter the market, said Bakul Patel, director for digital health at the FDA's Center for Devices and Radiological Health. The FDA has not yet authorized "unlocked" AI devices, whose results could vary from month to month in ways that developers cannot predict.

To deal with the flood of AI products, the FDA is testing a radically different approach to digital device regulation, focusing on evaluating companies, not products.

The FDA's pilot pre-certification program, launched in 2017, is designed to reduce the time and cost of market entry for software developers, imposing the "least burdensome" system possible. FDA officials say they want to keep pace with AI software developers, who update their products much more frequently than makers of traditional devices, such as X-ray machines.

Scott Gottlieb said in 2017, while he was FDA commissioner, that government regulators need to make sure their approach to innovative products is efficient and that it fosters, not impedes, innovation.

Under the plan, the FDA would pre-certify companies that demonstrate "a culture of quality and organizational excellence," which would allow them to provide less upfront data about devices.

Pre-certified companies could then release devices with a streamlined review or no FDA review at all. Once products are on the market, companies will be responsible for monitoring their own products' safety and reporting back to the FDA.

High-risk products, such as software used in pacemakers, will still get a comprehensive FDA evaluation.

But research shows that even low- and moderate-risk devices have been recalled due to serious risks to patients, said Diana Zuckerman, president of the National Center for Health Research. Johnson & Johnson, for example, has recalled hip implants and surgical mesh.

Some AI devices are more carefully tested than others. An AI-powered screening tool for diabetic eye disease was studied in 900 patients at 10 primary care offices before being approved in 2018. The manufacturer, IDx Technologies, worked with the FDA for eight years to get the test, sold as IDx-DR, right, said Dr. Michael Abramoff, the company's founder and executive chairman.

IDx-DR is the first autonomous AI product, one that can make a screening decision without a doctor. The company is now installing it in primary care clinics and grocery stores, where it can be operated by employees with a high school diploma.

Yet some AI-based innovations intended to improve care have had the opposite effect.

A Canadian company, for example, developed AI software to predict a person's risk of Alzheimer's based on their speech. Predictions were more accurate for some patients than others. Difficulty finding the right word may be due to unfamiliarity with English, rather than to cognitive impairment, said coauthor Frank Rudzicz, an associate professor of computer science at the University of Toronto.

Doctors at New York's Mount Sinai Hospital hoped AI could help them use chest X-rays to predict which patients were at high risk of pneumonia. Although the system made accurate predictions from X-rays shot at Mount Sinai, the technology flopped when tested on images taken at other hospitals. Eventually, researchers realized the computer had merely learned to tell the difference between that hospital's portable chest X-rays, taken at a patient's bedside, and those taken in the radiology department. Doctors tend to use portable chest X-rays for patients too sick to leave their room, so it's not surprising that these patients had a greater risk of lung infection.
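
For a concrete sense of how that kind of site-specific shortcut can be caught, the Python sketch below (with made-up feature data; it is not the Mount Sinai team's code) probes whether a model can tell which hospital an image came from. If that probe scores well above chance, a disease classifier trained on the same features risks learning the scanner or X-ray type rather than the illness.

# Illustrative confound check with synthetic data, not the actual study code.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical pre-extracted image features from two hospitals; the offset
# stands in for systematic differences such as portable vs. departmental X-rays.
features_a = rng.normal(loc=0.0, size=(500, 32))
features_b = rng.normal(loc=0.4, size=(500, 32))
X = np.vstack([features_a, features_b])
site = np.array([0] * 500 + [1] * 500)  # which hospital each image came from

site_probe = LogisticRegression(max_iter=1000)
leak_score = cross_val_score(site_probe, X, site, cv=5).mean()

# Accuracy far above 50% means the features encode the site, so any
# "pneumonia" signal learned from them deserves suspicion.
print(f"site-prediction accuracy: {leak_score:.2f}")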

DeepMind, a company owned by Google, has created an AI-based mobile app that can predict which hospitalized patients will develop acute kidney failure up to 48 hours in advance. A blog post on the DeepMind website described the system, used at a London hospital, as a "game changer." But the AI system also produced two false alarms for every correct result, according to a July study in Nature. That may explain why patients' kidney function didn't improve, said Dr. Saurabh Jha, associate professor of radiology at the Hospital of the University of Pennsylvania. Any benefit from early detection of serious kidney problems may have been diluted by a high rate of overdiagnosis, in which the AI system flagged borderline kidney issues that didn't need treatment, Jha said.
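
As a rough reading of what "two false alarms for every correct result" means in practice, the short arithmetic below (a sketch only, not DeepMind's evaluation code) works out the implied positive predictive value: roughly one alert in three points to a real case.

# Back-of-the-envelope arithmetic for the alert ratio quoted above.
true_alerts = 1
false_alarms = 2
precision = true_alerts / (true_alerts + false_alarms)
print(f"precision is about {precision:.0%}")  # about 33% of alerts are real cases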

Google had no comment in response to Jha's conclusions.

This story was written for Kaiser Health News, an editorially independent publication of the Kaiser Family Foundation.

More:

Can medical artificial intelligence live up to the hype? - Los Angeles Times

How This Cofounder Created An Artificial Intelligence Styling Company To Help Consumers Shop – Forbes

Michelle Harrison Bacharach, the cofounder and CEO of FindMine, an AI styling company, has designed a technology, Complete the Look, that creates complete outfits around retailers' products. It blends the art of styling with the ease of automation to represent a company's brand(s) at scale and help answer the question "how do I wear this?" The technology shows shoppers how to wear clothes with accessories. The company uses artificial intelligence to scale out the guidance that the retailer would provide. FindMine serves over 1.5 billion requests for outfits per year across e-commerce and mobile platforms, and lifts AOV (average order value) and conversions by up to 150% with full outfits.

Michelle Bacharach, Cofounder and CEO of FINDMINE, an AI styling company.

"I'm picky about user experiences," Bacharach explains. "When I was a consumer in my life, shopping, I was always frustrated by the friction that it caused that I was sold a product in isolation. If I buy a scarf, what do I wear with the scarf? What are the shoes and the top and the jacket? Just answer that question for me when I buy the scarf. Why is it so hard? I started asking those questions as a consumer. Then I started looking into why retailers don't do that. It's because they have a bunch of friction on their side. They have to put together the shirt and the shoe and the pant and the bag and the jacket that go with that outfit. So, because it's manual, and they have tens of thousands of products and products come and go so frequently, it's literally impossible to keep up with. It's physically impossible for them to give an answer to every consumer... My hypothesis was that I would spend more money if they sold me all the other pieces and showed me how to use it. I started looking into [the hypothesis], and it turned out to be true; consumers spend more money when they actually understand the whole package."

Bacharach began working for a startup in Silicon Valley after graduating from college. She focused on user experience analysis and product management, which meant she looked at customer service tickets and the analytical data around how customers were using the products. After the analysis, she'd make fixes, suggest new features and prioritize them with the tech team.

She always knew she wanted to start her own company. Working at the startup provided her the opportunity to understand how all the different sectors of an organization operated. However, she had always been curious about the possibility of acting. She decided to move to Los Angeles to try to become a professional actress. "I ended up deciding that the part of acting that I liked the most was auditioning and competing for the job and positioning and marketing myself," she explains. "If you talk to any other actors, that's the part they hate the most. I realized that I should go to business school and focus on the entertainment industry because that's the part of it that really resonated with me."

FINDMINE is part of the SAP CX innovation ecosystem and is currently part of the latest SAP.iO Foundry startup accelerator in San Francisco.

After graduating from business school, Bacharach entered the corporate world, where she worked on corporate strategy and product management. The company she worked for underwent a culture shift, which made it difficult to keep working there. At that point, she had two options: she could either find another position with a different company or start her own business venture. "I didn't really know what that thing was going to be," Bacharach says. "I used that as kind of a forcing function to sit down with my list of ideas and decide what the heck am I going to work on. I thought about it as a time off, like a six-month sabbatical to try to figure out what we're doing. Then I'm going to get invested in from my idea, and then I'm going to be back on the salary wagon and be able to make a living again. I thought it's all going to be so easy. That's what started the journey of me becoming an entrepreneur." It took two-and-a-half years before she earned a livable salary.

"I worked for a startup," she states. "I watched other people do it. I was a consultant to start off. I worked in corporate America. So, I saw the other side of the coin in the way that that world functions. I didn't want to do this for the long term. I like the early stages of stuff. In retrospect, I guess I did prepare myself, but I didn't know it while I was going through it. I just jumped in."

As Bacharach continues to expand FindMine with its ever-updating artificial intelligence technology, she focuses on a handful of essential steps to help her with each pivot.

Michelle Bacharach, Cofounder and CEO of FINDMINE, sat down with John Furrier at the Intel AI Lounge at South by Southwest 2017 in Austin, Texas.

"Don't worry about getting 100% right," Bacharach concludes. "Don't look at people who are successful and say, oh, wow. They're so different from me. I can never do that. Look at them and say they're exactly the same as me. They're just two or three years ahead in terms of their learnings and findings. I have to do that same thing, but for whatever I want to start."

See the article here:

How This Cofounder Created An Artificial Intelligence Styling Company To Help Consumers Shop - Forbes

Illinois regulates artificial intelligence like HireVue's used to analyze online job interviews – Vox.com

Artificial intelligence is increasingly playing a role in companies' hiring decisions. Algorithms help target ads about new positions, sort through resumes, and even analyze applicants' facial expressions during video job interviews. But these systems are opaque, and we often have no idea how artificial intelligence-based systems are sorting, scoring, and ranking our applications.

It's not just that we don't know how these systems work. Artificial intelligence can also introduce bias and inaccuracy to the job application process, and because these algorithms largely operate in a black box, it's not really possible to hold a company that uses a problematic or unfair tool accountable.

A new Illinois law, one of the first of its kind in the US, is supposed to provide job candidates a bit more insight into how these unregulated tools actually operate. But it's unlikely the legislation will change much for applicants. That's because it only applies to a limited type of AI, and it doesn't ask much of the companies deploying it.

Set to take effect January 1, 2020, the state's Artificial Intelligence Video Interview Act has three primary requirements. First, companies must notify applicants that artificial intelligence will be used to consider applicants' fitness for a position. Those companies must also explain how their AI works and what general types of characteristics it considers when evaluating candidates. In addition to requiring applicants' consent to use AI, the law also includes two provisions meant to protect their privacy: It limits who can view an applicant's recorded video interview to those "whose expertise or technology is necessary" and requires that companies delete any video that an applicant submits within a month of their request.

As Aaron Rieke, the managing director of the technology rights nonprofit Upturn, told Recode about the law, "This is a pretty light touch on a small part of the hiring process." For one thing, the law only covers artificial intelligence used in videos, which constitutes a small share of the AI tools that can be used to assess job applicants. And the law doesn't guarantee that you can opt out of an AI-based review of your application and still be considered for a role (all the law says is that a company has to gain your consent before using AI; it doesn't require that hiring managers give you an alternative method).

"It's hard to feel that that consent is going to be super meaningful if the alternative is that you get no shot at the job at all," said Rieke. He added that there's no guarantee that the consent and explanation the law requires will be useful; for instance, the explanation could be so broad and high-level that it's not helpful.

"If I were a lawyer for one of these vendors, I would say something like, 'Look, we use the video, including the audio language and visual content, to predict your performance for this position using tens of thousands of factors,'" said Rieke. "If I was feeling really conservative, I might name a couple general categories of competency." (He also points out that the law doesn't define artificial intelligence, which means it's difficult to tell what companies and what types of systems the law actually applies to.)

Because the law is limited to AI that's used in video interviews, the company it most clearly applies to is Utah-based HireVue, a popular job interview platform that offers employers an algorithm-based analysis of recorded video interviews. Here's how it works: You answer pre-selected questions over your computer or phone camera. Then, an algorithm developed by HireVue analyzes how you've answered the questions, and sometimes even your facial expressions, to make predictions about your fit for a particular position.
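
HireVue has not published the internals of its scoring model, so the Python sketch below is purely illustrative: every feature name and weight is hypothetical, and it only shows the general shape such a pipeline could take, going from transcript-level signals to a single numeric score.

# A deliberately generic sketch of an automated interview scorer; HireVue's
# actual features and weights are not public, so everything here is invented.
from dataclasses import dataclass

@dataclass
class InterviewFeatures:
    word_count: int           # from a speech-to-text transcript
    filler_word_rate: float   # e.g. "um"/"uh" per 100 words
    avg_pause_seconds: float  # average silence before answers

def extract_features(transcript: str, pause_seconds: list[float]) -> InterviewFeatures:
    words = transcript.split()
    fillers = sum(w.lower().strip(",.") in {"um", "uh", "like"} for w in words)
    rate = 100 * fillers / max(len(words), 1)
    avg_pause = sum(pause_seconds) / max(len(pause_seconds), 1)
    return InterviewFeatures(len(words), rate, avg_pause)

def score(f: InterviewFeatures) -> float:
    # Hypothetical linear weights standing in for a trained model.
    return 0.002 * f.word_count - 0.05 * f.filler_word_rate - 0.1 * f.avg_pause_seconds

feats = extract_features("Um I led the project and uh shipped it on time", [1.2, 0.8])
print(round(score(feats), 3))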

HireVue says it already has about 100 clients using this artificial intelligence-based feature, including major companies like Unilever and Hilton.

Some candidates who have used HireVue's system complain that the process is awkward and impersonal. But that's not the only problem. Algorithms are not inherently objective, and they reflect the data used to train them and the people who design them. That means they can inherit, and even amplify, societal biases, including racism and sexism. And even if an algorithm is explicitly instructed not to consider factors like a person's name, it can still learn proxies for protected identities (for instance, an algorithm could learn to discriminate against people who have gone to a women's college).
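
The proxy effect is easy to reproduce with toy data. In the hedged sketch below, all numbers are invented for illustration: the protected attribute is never shown to the model, but a correlated field (standing in for something like which college an applicant attended) lets the model produce different predicted hire rates for the two groups anyway.

# Minimal, synthetic illustration of proxy discrimination; not real hiring data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
protected = rng.integers(0, 2, n)              # protected attribute, never a model input
college = protected ^ (rng.random(n) < 0.1)    # proxy feature, strongly correlated with it
skill = rng.normal(size=n)
# Biased historical labels: outcomes partly driven by the protected attribute.
hired = ((skill + 0.8 * (1 - protected) + rng.normal(scale=0.5, size=n)) > 0.5).astype(int)

X = np.column_stack([skill, college])          # the model sees only skill + the proxy
model = LogisticRegression(max_iter=200).fit(X, hired)
pred = model.predict(X)

# Despite never seeing the protected attribute, predictions differ by group.
for g in (0, 1):
    print(f"group {g}: predicted hire rate {pred[protected == g].mean():.2f}")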

Facial recognition tech, in particular, has faced criticism for struggling to identify and characterize the faces of people with darker skin, women, and trans and non-binary people, among other minority groups. Critics also say that emotion (or affect) recognition technology, which purports to make judgments about a person's emotions based on their facial expressions, is scientifically flawed. That's why one research nonprofit, the AI Now Institute, has called for the prohibition of such technology in high-stakes decision-making, including job applicant vetting.

"[W]hile you're being interviewed, there's a camera that's recording you, and it's recording all of your micro facial expressions and all of the gestures you're using, the intonation of your voice, and then pattern matching those things that they can detect with their highest performers," AI Now Institute co-founder Kate Crawford told Recode's Kara Swisher earlier this year. "[It] might sound like a good idea, but think about how you're basically just hiring people who look like the people you already have."

Even members of Congress are worried about that technology. In 2018, US Sens. Kamala Harris, Elizabeth Warren, and Patty Murray wrote to the Equal Employment Opportunity Commission, the federal agency charged with investigating employment discrimination, asking whether such facial analysis technology could violate anti-discrimination laws.

Despite being one of the first laws to regulate these tools, the Illinois law doesn't address concerns about bias. No federal legislation explicitly regulates these AI-based hiring systems. Instead, employment lawyers say such AI tools are generally subject to the Uniform Guidelines, employment discrimination standards created by several federal agencies back in 1978 (you can read more about that here).

The EEOC did not respond to Recode's multiple requests for comment.

Meanwhile, it's not clear how, under Illinois' new law, companies like HireVue will go about explaining the characteristics in applicants that its AI considers, given that the company claims that its algorithms can weigh up to tens of thousands of factors (it says it removes factors that are not predictive of job success).

The law also doesn't explain what an applicant might be entitled to if a company violates one of its provisions. Law firms advising clients on compliance have also noted that it's not clear whether the law applies exclusively to businesses filling a position in Illinois, or just to interviews that take place in the state. Neither Illinois State Sen. Iris Martinez nor Illinois Rep. Jaime M. Andrade, legislators who worked on the law, responded to a request for comment by the time of publication.

HireVue's CEO Kevin Parker said in a blog post that the law entails very little, if any, change because its platform already complies with GDPR's principles of transparency, privacy, and the right to be forgotten. "[W]e believe every job interview should be fair and objective, and that candidates should understand how they're being evaluated. This is fair game, and it's good for both candidates and companies," he wrote in August.

A spokesperson for HireVue said the decision to provide an alternative to an AI-based analysis is up to the company that's hiring, but argued that those alternatives can be more time-consuming for candidates. If a candidate believes that a system is biased, the spokesperson said, recourse options are the same as when a candidate believes that any part of the hiring process, or any individual interviewer, was unfairly biased against them.

Under the new law in Illinois, if you participate in a video interview that uses AI tech, you can ask for your footage to be deleted after the fact. But it's worth noting that the law appears to still give the company enough time to train its model on the results of your job interview, even if you think the final decision was problematic.

"This gives these AI hiring companies room to continue to learn," says Rieke. "They're going to delete the underlying video, but any learning or improvement to their systems they get to keep."

Open Sourced is made possible by Omidyar Network. All Open Sourced content is editorially independent and produced by our journalists.

Link:

Illinois regulates artificial intelligence like HireVues used to analyze online job Interviews - Vox.com

The U.S. Patent and Trademark Office Takes on Artificial Intelligence – JD Supra


More here:

The U.S. Patent and Trademark Office Takes on Artificial Intelligence - JD Supra

Baidu looks to work with Indian institutions on AI – BusinessLine

China's largest search engine Baidu is looking to work with Indian institutions in future to make a better world through innovation, said Robin Li, Co-Founder, CEO and Chairman of Baidu.

"India is one of the fastest growing smartphone markets in the world, and a very large developing country, right next to China. Both countries have been growing at a fast pace in the last few decades. For the next decade, we will be more optimistic," he said in a talk at IIT Madras' tech fest, Shaastra 2020, titled Innovation in the age of Artificial Intelligence (AI).

Outside China, Baidu has a presence in markets like Japan, Thailand and Egypt. However, the company's main product, its search engine, is very much centred on China. "Once in the age of AI, search will be very different from what is seen today. Once we transform the search into a different product, we will be ready to launch that internationally," he said, without committing to anything specific on a foray into India.

Since founding Baidu in January 2000, Robin has led the company to be China's largest search engine with over 70 per cent market share. China is among only four countries globally, alongside the US, Russia and South Korea, to possess its own core search engine technology. Through innovations ranging from Box Computing to Baidu's Open Data and Open App Platform, Robin has substantially advanced the theoretical framework of China's Internet sciences, propelling Baidu to be the vanguard of China's Internet industry. Baidu is also the largest AI platform company in China.

Li said that the previous decade was that of the Internet, but the coming decade will be that of the intelligent economy, with new modes of human-machine interaction. AI is transforming many industries, delivering higher efficiency and lower-cost services. For instance, banks are finding it difficult to open branches, but a virtual assistant can be used to open an account. Customers are more comfortable with a virtual person than a real person.

In the education sector, every student can have a personal assistant, while the pharma industry can accelerate the pace of drug development, with many start-ups already doing this. AI is also transforming transportation by helping reduce traffic delays by 20-30 per cent, he said.

In China, Baidu is using AI to help find missing people, and 9,000 missing people have already been found. "AI can make one immortal. When everything about you can be digitised, computers can learn all about you, creating a digital copy of anyone," he said.

"In the past ten years, people were dependent on mobile phones. But in the next ten years, people will be less dependent on mobile phones because wherever they go there will be surrounding sensors, infrastructure that can answer the questions that concern you. You may not be required to pull out your mobile phone every time to find an answer. This is the power of AI," he added.

Visit link:

Baidu looks to work with Indian institutions on AI - BusinessLine

Top Movies Of 2019 That Depicted Artificial Intelligence (AI) – Analytics India Magazine

Artificial intelligence (AI) is creating a great impact on the world by enabling computers to learn on their own. While in the real world AI is still focused on solving narrow problems, we see a whole different face of AI in the fictional world of science fiction movies, which predominantly depict the rise of artificial general intelligence as a threat to human civilization. As a continuation of that trend, here we take a look at how artificial intelligence was depicted in 2019 movies.

A warning in advance: the following listicle is filled with SPOILERS.

Terminator: Dark Fate, the sixth film of the Terminator movie franchise, featured a super-intelligent Terminator named Gabriel, designated Rev-9, which was sent from the future to kill a young woman (Dani) who is set to become an important figure in the Human Resistance against Skynet. To fight the Rev-9 Terminator, the Human Resistance from the future also sends Grace, a robot soldier, back in time to defend Dani. Grace is joined by Sarah Connor and the now-obsolete, ageing model of the T-800 Terminator, the original killer robot from the first movie (1984).

We all know Tony Stark as the man of advanced technology, and when it comes to artificial intelligence, Stark has nothing short of state-of-the-art technology in Marvel's cinematic universe. One such artificial intelligence was Even Dead, I'm The Hero (E.D.I.T.H.), which we witnessed in the 2019 movie Spider-Man: Far From Home. EDITH is an augmented reality security, defence and artificial tactical intelligence system created by Tony Stark and given to Peter Parker following Stark's death. It is housed in a pair of sunglasses and gives its user access to Stark Industries' global satellite network along with an array of missiles and drones.

I Am Mother is a post-apocalyptic movie which was released in 2019. The film's plot is focused on a mother-daughter relationship where the mother is a robot designed to repopulate Earth. The robot mother takes care of her human child, known as Daughter, who was born through artificial gestation. The duo stays alone in a secure bunker until another human woman arrives there. The daughter now faces a predicament of whom to trust: her robot mother or a fellow human who is asking her to leave with her.

Wandering Earth is another 2019 Chinese post-apocalyptic film with a plot involving Earth's imminent crash into another planet and the efforts of a group of family members and soldiers to save it. The film's artificial intelligence character is MOSS, a computer system programmed to warn people on the Earth space station. A significant subplot of the film is focused on protagonist Liu Peiqiang's struggle with MOSS, which forced the space station to go into low energy mode during the crash as per its programming from the United Earth Government. In the end, Liu Peiqiang resists and ultimately sets MOSS on fire to help save the Earth.

James Cameron's futuristic action epic for 2019, Alita: Battle Angel, is a sci-fi action film which depicts human civilization in an extremely advanced stage of transhumanism. The movie describes a dystopian future where robots and autonomous systems are extremely powerful. To elaborate, in one of the initial scenes of the movie, Ido attaches a cyborg body to a human brain he found (from another cyborg) and names her Alita after his deceased daughter, an epitome of the advancements in AI and robotics.

Jexi is the only Hollywood rom-com movie depicting artificial intelligence in 2019. The movie features an AI-based operating system called Jexi with recognizable human behaviour and reminds the audience of the previously acclaimed film Her, which was released in 2014. But unlike Her, the movie goes the other way around, depicting how the AI system becomes emotionally attached to its socially-awkward owner, Phil. The biggest shock of the comedy film is when Jexi, the AI which lives inside Phil's cellphone, acts to control his life and even chases him angrily using a self-driving car.

Hi, AI is a German documentary which was released in early 2019. The documentary is based on Chuck's relationship with Harmony, an advanced humanoid robot. The film's depiction of artificial intelligence is in sharp contrast with other fictional movies on AI. The documentary also shows that even though research is moving in the direction of creating advanced robots, interactions with robots still don't have the same depth as human conversations. The film won the Max Ophüls Prize for best documentary for the year.


Vishal Chawla is a senior tech journalist at Analytics India Magazine (AIM) and writes on the latest in the world of analytics, AI and other emerging technologies. Previously, he was a senior correspondent for IDG CIO and ComputerWorld. Write to him at vishal.chawla@analyticsindiamag.com

Read more:

Top Movies Of 2019 That Depicted Artificial Intelligence (AI) - Analytics India Magazine

Global Industrial Artificial Intelligence Market 2019 Research by Business Analysis, Growth Strategy and Industry Development to 2024 – Food &…

In the market research study, Global Industrial Artificial Intelligence Market 2019 by Manufacturers, Countries, Type and Application, Forecast to 2024, a comprehensive discussion of the market's current flow and patterns, market share, sales volume, informative diagrams, industry development drivers, supply and demand, and other key aspects is given. It is an important resource for various stakeholders such as traders, CEOs, buyers, providers, and others. The report provides guidance for exploring opportunities in the market by adding global and regional data as well as profiles of the top key players. The global Industrial Artificial Intelligence market research report is an in-depth analysis that focuses on market development trends, opportunities, challenges, drivers, and limitations.

The market is analyzed by companies and regions based on rate, value and gross. The report tracks major market events such as product launches, technological developments, mergers and acquisitions, and the innovative business strategies adopted by key market players. It contains appreciable consumption figures by type and application, and highlights the market's current and forecast development areas. It covers an in-depth analysis of the market size (revenue), market share, major market segments, and different geographic zones, the forecast for 2019-2024, and key market players.

DOWNLOAD FREE SAMPLE REPORT: https://www.fiormarkets.com/report/global-industrial-artificial-intelligence-market-2018-by-manufacturers-299826.html#sample

This report focuses on top manufacturers in the global Industrial Artificial Intelligence market, with production, price, revenue, and market share for each manufacturer, covering: Intel Corporation, Siemens AG, IBM Corporation, Alphabet Inc, Microsoft Corporation, Cisco Systems, Inc, General Electric Company, Data RPM, Sight Machine, General Vision, Inc, Rockwell Automation Inc, Mitsubishi Electric Corporation, Oracle Corporation, SAP SE

What Makes The Report Excellent?

The report offers information on market segmentation by type, application, and region. It specifies which product has the highest penetration, profit margins, and R&D status. The research covers the current market size of the global Industrial Artificial Intelligence market and its growth rate based on historical statistics for 2014-2018. Each company profiled in the report is assessed for its market growth.

This Report Segments The Market:

Market by product type, 2014-2024: Type 1, Type 2, Others

Market by application, 2014-2024: Application 1, Application 2, Others

For a comprehensive understanding of market dynamics, the Industrial Artificial Intelligence market is analyzed across key geographies namely: North America (United States, Canada and Mexico), Europe (Germany, France, UK, Russia and Italy), Asia-Pacific (China, Japan, Korea, India and Southeast Asia), South America (Brazil, Argentina, Colombia etc.), Middle East and Africa (Saudi Arabia, UAE, Egypt, Nigeria and South Africa)

ACCESS FULL REPORT: https://www.fiormarkets.com/report/global-industrial-artificial-intelligence-market-2018-by-manufacturers-299826.html

Following Queries Are Answered In The Report:-

Moreover, the global Industrial Artificial Intelligence market report calculates production and consumption rates. Upstream raw material suppliers and downstream buyers of this industry are explained. A competitive dashboard or company share analysis is also covered. The report closes with various research findings, deals, retailers, merchants, conclusions, data sources, and an appendix.

Customization of the Report: This report can be customized to meet the client's requirements. Please connect with our sales team ([emailprotected]), who will ensure that you get a report that suits your needs. You can also get in touch with our executives on +1-201-465-4211 to share your research requirements.

This post was originally published on Food and Beverage Herald

View post:

Global Industrial Artificial Intelligence Market 2019 Research by Business Analysis, Growth Strategy and Industry Development to 2024 - Food &...

Artificial intelligence takes scam to a whole new level – The Jackson Sun

RANDY HUTCHINSON, Better Business Bureau Published 12:54 a.m. CT Jan. 1, 2020

Imagine you wired hundreds of thousands of dollars somewhere based on a call from your boss, whose voice you recognized, only to find out you were talking to a machine and the money is lost. One company executive doesn't have to imagine it happening. He and his company were victims of what some experts say is one of the first cases of voice-mimicking software, a form of artificial intelligence (AI), being used in a scam.

In a common version of the Business Email Compromise scam, an employee in a company's accounting department wires money somewhere based on what appears to be a legitimate email from the CEO, CFO or other high-ranking executive. I wrote a column last year noting that reported losses to the scam had grown from $226 million in 2014 to $676 million in 2017. The FBI says losses doubled in 2018 to $1.8 billion and recommends making a phone call to verify the legitimacy of the request rather than relying on an email.

But now you may not even be able to trust voice instructions. The CEO of a British firm received what he thought was a call from the CEO of his parent company in Germany instructing him to wire $243,000 to the bank account of a supplier in Hungary. The call was actually originated by a crook using AI voice technology to mimic the bosss voice. The crooks moved the money from Hungary to Mexico to other locations.

An executive with the firm's insurance company, which ultimately covered the loss, told The Wall Street Journal that the victim recognized the subtle German accent in his boss's voice and, moreover, that it carried the man's melody. The victim became suspicious when he received a follow-up call from the boss, originating in Austria, requesting another payment be made. He didn't make that one, but the damage was already done.

Google says crooks may also synthesize speech to fool voice authentication systems or create forged audio recordings to defame public figures. It launched a challenge to researchers to develop countermeasures against spoofed speech.

Many companies are working on voice-synthesis software and some of it is available for free. The insurer thinks the crooks used commercially available software to steal the $243,000 from its client.

Many scams rely on victims letting their emotions outrun their common sense. An example is the Grandparent Scam, in which an elderly person receives a phone call purportedly from a grandchild in trouble and needing money. Victims have panicked and wired thousands of dollars before ultimately determining that the grandchild was safe and sound at home.

The crooks often invent some reason why the grandchild's voice may not sound right, such as the child having been in an accident or there being a poor connection. How much more successful might that scam be if the voice actually sounds like the grandchild? The executive who wired the $243,000 said he thought the request was strange, but the voice sounded so much like his boss's that he felt he had to comply.

The BBB recommends companies install additional verification steps for wiring money, including calling the requestor back on a number known to be authentic.

Randy Hutchinson is the president of the Better Business Bureau of the Mid-South. Reach him at 901-757-8607.

Read or Share this story: https://www.jacksonsun.com/story/news/2020/01/01/artificial-intelligence-takes-scam-whole-new-level/2719833001/

Continued here:

Artificial intelligence takes scam to a whole new level - The Jackson Sun

Shocking ways AI technology will revolutionise every day industries in YOUR lifetime – Express.co.uk

Science fiction has helped shape society's understanding and expectations of advanced AI technology. However, Shadow Robot Company Director Rich Walker argued that artificial intelligence could be used in industries we would not expect. Speaking to Express.co.uk, he explained that AI could be introduced into sectors such as estate agency or booking services.

He added that massive leaps in AI capabilities in recent years had raised expectations of what people believe artificial intelligence can be used for.

He said: "AI technology has really been promising a lot for a very long time.

"In the last few years we have really started to see some very impressive and surprising successes.

"Self-driving cars are starting to be something that has gone from a complete fairy pipe-dream to the question of when are we going to see a self-driving car, because surely we can get one now."


"I think what will happen in the next couple of years is we will see some areas that we weren't expecting suddenly being done by AI.

"Everyone will be like, yes, of course, we could have artificial intelligence in this industry.

"Maybe it will be an estate agency or train booking.

"Something that is a complicated, annoying problem."

Link:

Shocking ways AI technology will revolutionise every day industries in YOUR lifetime - Express.co.uk

THE AI IN TRANSPORTATION REPORT: How automakers can use artificial intelligence to cut costs, open new revenue – Business Insider India

This is a preview of a research report from Business Insider Intelligence. To learn more about Business Insider Intelligence, click here. Current subscribers can log in and read the report here.

New technology is disrupting legacy automakers' business models and dampening consumer demand for purchasing vehicles. Tech-mediated models of transportation - like ride-hailing, for instance - are presenting would-be car owners with alternatives to purchasing vehicles.

In fact, a study by ride-hailing giant Lyft found that in 2017, almost 250,000 of its passengers sold their own vehicle or abandoned the idea of replacing their current car due to the availability of ride-hailing services.

Adopting AI will enable automakers to take advantage of what will amount to billions of dollars in added value. For example, self-driving technology will present a $556 billion market by 2026, growing at a 39% CAGR from $54 billion in 2019, per Allied Market Research.
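
As a quick sanity check on those Allied Market Research figures, the arithmetic below (assuming the 39% CAGR compounds over the seven years from 2019 to 2026) lands roughly in the same range as the cited total.

# Rough compounding check on the quoted market projection.
base_2019 = 54e9
cagr = 0.39
years = 2026 - 2019
projected_2026 = base_2019 * (1 + cagr) ** years
print(f"${projected_2026 / 1e9:.0f}B")  # roughly $542B, close to the $556B cited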

But firms face some major hurdles when integrating AI into their operations. Many companies are not presently equipped to begin producing AI-based solutions, which often require a specialized workforce, new infrastructure, and updated security protocol. As such, it's unsurprising that the main barriers to AI adoption are high costs, lack of talent, and lack of trust. Automakers must overcome these barriers to succeed with AI-based projects.

In The AI In Transportation Report, Business Insider Intelligence will discuss the forces driving transportation firms to AI, the market value of the technology across segments of the industry, and the potential barriers to its adoption. We will also show how some of the leading companies in the space have successfully overcome those barriers and are using AI to adapt to the digital age.

Here are some key takeaways from the report:

In full, the report:

The choice is yours. But however you decide to acquire this report, you've given yourself a powerful advantage in your understanding of AI in transportation.

Go here to see the original:

THE AI IN TRANSPORTATION REPORT: How automakers can use artificial intelligence to cut costs, open new revenue - Business Insider India

IIT Hyderabad to collaborate with Telangana government on artificial intelligence – India Today

IIT Hyderabad will also assist Telangana State in developing a strategy for artificial intelligence.

Indian Institute of Technology (IIT) Hyderabad is going to collaborate with the Government of Telangana for research on artificial intelligence. The institute is partnering with the Information Technology, Electronics and Communication (ITE&C) Department, Government of Telangana, for building/identifying quality datasets, along with third parties such as the industry.

They will also work on education and training to prepare and deliver content and curriculum on AI courses to be delivered to college students along with industry participants.

The MoU was signed by BS Murty, Director, IIT Hyderabad, and Jayesh Ranjan, IAS, Principal Secretary to Government of Telangana, Departments of Information Technology (IT) and Industries and Commerce (I&C) during an event held on January 2 as part of '2020: Declaring Telangana's Year of AI' initiative. Several other MoUs with other organizations were also signed by the Government of Telangana during this occasion.

The Telangana government declared 2020 the 'Year of Artificial Intelligence' with the objective of promoting the use of AI in sectors ranging from urban transportation and healthcare to agriculture. As part of this collaboration, the ITE&C Department aims to develop the ecosystem for the industry and to leverage emerging technologies to improve service delivery.

IIT Hyderabad will also assist the Telangana State in developing a strategy for AI/HPC (Artificial Intelligence / High-Performance Computing) infrastructure for various state needs and provide technology mentorship to identified partners exploring and building AI PoCs (proofs of concept).

The Telangana State Information Technology, Electronics and Communication Department (ITE&C Department) is a state government department with a mandate to promote the use of Information Technology (IT), to act as a promoter and facilitator for IT in the state, and to build an IT-driven continuum of government services.

The vision of the ITE&C Department is to leverage IT not only for effective and efficient governance but also for sustainable economic development and inclusive social development. Its mission is to facilitate collaborative and innovative IT solutions and to plan for future growth while protecting and enhancing the quality of life.

The 2020 ‘Super Bowl of Astronomy’ Kicks Off in Hawaii – Space.com

Thousands of scientists from around the world are converging on Hawaii this week to unveil the latest discoveries about the universe at the so-called "Super Bowl of astronomy." If the event, the 235th meeting of the American Astronomical Society, had a stadium, it would be packed.

"This will be the biggest AAS meeting in history," AAS spokesperson Rick Feinberg told Space.com in an email.

More than 3,500 scientists are expected to attend the four-day conference in Honolulu, Hawaii, Feinberg said. The first press conferences and talks begin today (Jan. 5). They'll end on Wednesday (Jan. 8), with observatory tours and other presentations scheduled throughout the week.

NASA, as expected, will showcase its latest space findings at the conference, including the agency's recent exoplanet discoveries by the TESS space telescope and the Hubble Space Telescope, which celebrates its 30th anniversary in April.

"NASA researchers will present new findings on a wide range of astrophysics and other space science topics at the 235th Meeting of the American Astronomical Society, Saturday, Jan. 4, through Wednesday, Jan. 8, in Honolulu," NASA officials said in a statement. "Agency scientists and their colleagues who use NASA research capabilities also will present noteworthy findings during scientific sessions that are open to registered media."

The AAS and NASA will webcast press conferences from the meeting daily from Sunday through Wednesday. There are two press conferences most days (three today), and they can be watched live on the AAS website as well as on NASA Live.

The briefings are scheduled for 10:15 a.m. HST (3:15 p.m. EST/2015 GMT) and 2:15 p.m. HST (7:15 p.m. EST/0015 GMT). The extra briefing on Sunday is at 12:45 p.m. HST (5:45 p.m. EST/2245 GMT).

A full list of the press conferences, including what scientists will discuss in each session over the next four days, is available on the AAS website.

The role of Hawaii in astronomy will take center stage at this year's AAS meeting.

"The main new feature of this meeting is our major effort to bring the astronomical community and the local community together as much as possible to discuss the future of astronomy in Hawaii," Feinberg said.

Hawaii has long been a focal point for astronomy. The Keck Observatory, home to some of the largest active optical telescopes on Earth, sits atop the volcano Mauna Kea alongside other observatories, and an even larger instrument, the Thirty Meter Telescope, is planned for the same site.

But construction of the Thirty Meter Telescope (TMT) has been stalled due to ongoing protests by indigenous groups that consider Mauna Kea sacred. The demonstrations stepped up in 2019.

"TMT is committed to finding a peaceful way forward on Maunakea for all," the builders of the new telescope wrote in a Dec. 20 update.

"We are sensitive to the ongoing struggles of indigenous populations around the world, and we will continue to support conversations around TMT and the larger issues for which it has become a flashpoint," Gordon Squires, TMT VP for External Affairs, said in the statement. "We are participating in private conversations with community leaders, but these conversations will take time."

Crater found from asteroid that covered 10% of Earth’s surface in debris – Astronomy Magazine

Most massive meteorites struck Earth so long ago their craters have almost completely eroded, Sieh says. But this impact was unusual in that it was huge and recent enough that the site where it hit should be identifiable.

But with rocks from the impact spread across the world, zeroing in on the location proved difficult.

The site eluded geochemists for decades, but Sieh decided to take a new approach and look at satellite imagery from parts of the world where the meteorite might have hit. In the Bolaven Plateau in southern Laos, he found an expanse of flat, shallow rock formed from hardened lava, just thick enough to obscure a crater of this size.

In-person excavations found the lava dated to around the same time as the impact, while surrounding sediments were older. Additional gravity measurements also hinted at a crater below. Altogether it's enough for Sieh to be confident he's finally located ancient ground zero.

With the help of Sieh and his team's find, researchers now have a slightly clearer sense of what must have happened after the asteroid hit. Roughly a mile and a quarter wide, the rock would have opened a hole larger than San Francisco in a span of seconds.

The rock's speed and force would have been enough to send pillow-sized boulders careening through the air at almost 1,500 feet per second. Sitting on the perimeter of the suspected impact site, these rocks are a tell-tale sign of a meteorite impact. "It would not have been a healthy thing to be on the receiving end of that," Sieh says.

For now, Sieh wants to focus on some of the ashy material surrounding the meteor debris. The impact would have incinerated all plant and animal life within 300 miles of the impact site, and Sieh is curious how that kind of settling dust would affect all of us today. The odds of such an impact are extremely low, but they still fascinate Sieh. "I've never worked on meteorites before, but I got sucked into this with my curiosity," he says.

As for drilling down through the rock to confirm that this is in fact the site? "I'm 98 percent convinced we found it, but I'd be supportive of anyone who wanted to," he says.

East Haven Coffee Shop To Host Astronomy Night on January 24th – East Haven, CT Patch

EAST HAVEN, CT - Break out the binoculars and take out the telescope: The Astronomical Society of New Haven is bringing its wide range of viewing equipment and knowledge to East Haven's organic cafe One World Roasters on the evening of Friday, January 24th, at 6:30 p.m. for a winter sky viewing session open to all.

"The mission of our society is to bring interest to the general public about the topic of astronomy," says Al Washburn, member at large and former president of the Astronomical Society of New Haven.

The retired North Branford High School science teacher of 38 years speaks with an inexhaustible passion of the special sights guests can expect to see on this particular night. "One of the best objects to see in the sky is in the cold winter months and hopefully it will be a good, clear evening and we'll be taking a look at that," he says.

Washburn speaks of the Orion Nebula, a giant hydrogen gas cloud and one of the most photographed objects in the sky. "You can actually see it with your naked eye if you know where to look for it, but collecting more light from it with the mirror of a telescope will allow you to see the magnificent Orion Nebula," he says.

The evening's other astronomical attractions include an ideal view of the brightest star in the night sky, Sirius, a potential glimpse of the Andromeda Galaxy, and a great number of open star clusters, which Washburn describes as "diamonds sprinkled on a black velvet napkin."

As for the Society's choice of location, Washburn concedes that the proximity to the coffee shop is a definite perk, but he also mentions the unique elevation of the viewing site. "It has a nice low eastern horizon, so we will be pointing our telescopes mostly to the east and southeast to see the constellations of the wintertime as they take their positions above our skies," he says.

If you are new to the world of astronomy, telescopes, and viewing sessions, fear not, as the members of the society will be happy to assist all first-time attendees. "People can expect just to walk over to a particular telescope; most everybody says 'Hi, welcome, it's good to see you,' and the person running the telescope will say what is inside the view so that they'll know what to look for when they look inside," Washburn says.

The former Astronomical Society president does have one request for first-time stargazers: "I would ask those who are arriving to bring a pair of binoculars," Washburn says. "There is an excellent star cluster called M45 (also known as the Pleiades or The Seven Sisters), and it is easily seen with the naked eye, but in a pair of very simple binoculars it is magnificent," he adds.

Society members also encourage new telescope owners to bring their equipment for friendly tutorials and instruction on how to properly use their viewing tools.

One World Cafe will open at 6:30 p.m. on Friday, January 24th, and the viewing will begin at 7 p.m. "Astronomy is a fun science and everyone has a front row seat, and you can do it with a pair of binoculars," Washburn says.
