

Why Neuro-Symbolic Artificial Intelligence Is The AI Of The Future – Digital Trends

Picture a tray. On the tray is an assortment of shapes: some cubes, others spheres. The shapes are made from a variety of materials and come in an assortment of sizes. In total there are, perhaps, eight objects. My question: looking at the objects, are there an equal number of large things and metal spheres?

It's not a trick question. The fact that it sounds like one is proof positive of just how simple it actually is. It's the kind of question that a preschooler could most likely answer with ease. But it's next to impossible for today's state-of-the-art neural networks. This needs to change. And it needs to happen by reinventing artificial intelligence as we know it.

That's not my opinion; it's the opinion of David Cox, director of the MIT-IBM Watson A.I. Lab in Cambridge, MA. In a previous life, Cox was a professor at Harvard University, where his team used insights from neuroscience to help build better brain-inspired machine learning computer systems. In his current role at IBM, he oversees work on the company's Watson A.I. platform. Watson, for those who don't know, was the A.I. that famously defeated two of the top game show players in history on the TV quiz show Jeopardy!. Watson also happens to be a primarily machine-learning system, trained using masses of data as opposed to human-derived rules.

So when Cox says that the world needs to rethink A.I. as it heads into a new decade, it sounds kind of strange. After all, the 2010s were arguably the most successful decade in A.I. history: a period in which breakthroughs seemed to arrive weekly, and with no frosty hint of an A.I. winter in sight. This is exactly why he thinks A.I. needs to change, however. And his suggestion for that change, a currently obscure term called neuro-symbolic A.I., could well become one of those phrases we're intimately acquainted with by the time the 2020s come to an end.

Neuro-symbolic A.I. is not, strictly speaking, a totally new way of doing A.I. It's a combination of two existing approaches to building thinking machines; ones which were once pitted against each other as mortal enemies.

The symbolic part of the name refers to the first mainstream approach to creating artificial intelligence. From the 1950s through the 1980s, symbolic A.I. ruled supreme. To a symbolic A.I. researcher, intelligence is based on humans' ability to understand the world around them by forming internal symbolic representations. Researchers then create rules for dealing with these concepts, and these rules can be formalized in a way that captures everyday knowledge.

If the brain is analogous to a computer, this means that every situation we encounter relies on us running an internal computer program which explains, step by step, how to carry out an operation, based entirely on logic. Provided that this is the case, symbolic A.I. researchers believe that those same rules about the organization of the world could be discovered and then codified, in the form of an algorithm, for a computer to carry out.

Symbolic A.I. resulted in some pretty impressive demonstrations. For example, in 1964 the computer scientist Bertram Raphael developed a system called SIR, standing for Semantic Information Retrieval. SIR was a computational reasoning system that was seemingly able to learn relationships between objects in a way that resembled real intelligence. If you were to tell it, for instance, that "John is a boy; a boy is a person; a person has two hands; a hand has five fingers," then SIR would answer the question "How many fingers does John have?" with the correct number: 10.
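To make the mechanics concrete, here is a minimal sketch of that style of rule chaining in Python. It illustrates the general technique rather than Raphael's actual implementation: facts are stored as simple relation triples, and a "how many" query is answered by walking is-a links and multiplying has-part counts.

```python
# Illustrative SIR-style rule chaining (not Bertram Raphael's original code).
# Facts: each entity either IS A more general entity or HAS n parts of a kind.
FACTS = {
    ("john", "is_a"): "boy",
    ("boy", "is_a"): "person",
    ("person", "has"): ("hand", 2),
    ("hand", "has"): ("finger", 5),
}

def count_parts(entity: str, part: str) -> int:
    """Count how many `part`s `entity` has by chaining is-a and has links."""
    total, current = 1, entity
    while current != part:
        if (current, "has") in FACTS:
            child, n = FACTS[(current, "has")]
            total *= n          # a person has 2 hands, each hand 5 fingers...
            current = child
        elif (current, "is_a") in FACTS:
            current = FACTS[(current, "is_a")]  # inherit facts from the class
        else:
            raise ValueError(f"no known path from {entity!r} to {part!r}")
    return total

print(count_parts("john", "finger"))  # -> 10
```

The answer falls out of explicit, human-readable rules, which is exactly what made symbolic systems easy to inspect and, as the article goes on to note, laborious to maintain.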


Computer systems based on symbolic A.I. hit the height of their powers (and the start of their decline) in the 1980s. This was the decade of the so-called expert system, which attempted to use rule-based systems to solve real-world problems, such as helping organic chemists identify unknown organic molecules or assisting doctors in recommending the right dose of antibiotics for infections.

The underlying concept of these expert systems was solid. But they had problems. The systems were expensive, required constant updating, and, worst of all, could actually become less accurate the more rules were incorporated.

The neuro part of neuro-symbolic A.I. refers to deep learning neural networks. Neural nets are the brain-inspired type of computation which has driven many of the A.I. breakthroughs seen over the past decade. A.I. that can drive cars? Neural nets. A.I. which can translate text into dozens of different languages? Neural nets. A.I. which helps the smart speaker in your home to understand your voice? Neural nets are the technology to thank.

Neural networks work differently from symbolic A.I. because they're data-driven, rather than rule-based. To explain something to a symbolic A.I. system means explicitly providing it with every bit of information it needs to be able to make a correct identification. As an analogy, imagine sending a friend to pick up your mom from the bus station, but having to describe her by providing a set of rules that would let that friend pick her out of the crowd. To train a neural network to do the same job, you simply show it thousands of pictures of the object in question. Once it gets smart enough, not only will it be able to recognize that object; it can make up its own similar objects that have never actually existed in the real world.
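To put that contrast in code, below is a toy sketch of the two paradigms, using scikit-learn for brevity. Everything in it (the features, thresholds, and tiny dataset) is invented for illustration; real image recognition uses deep networks trained on vastly more data.

```python
# Toy contrast between rule-based and data-driven recognition.
from sklearn.neural_network import MLPClassifier

# Symbolic style: a human writes the decision criteria explicitly.
def matches_by_rules(height_cm: float, hair_len_cm: float) -> bool:
    return 160 <= height_cm <= 170 and hair_len_cm > 20

# Data-driven style: a small neural net infers criteria from labeled examples.
X = [[165, 30], [168, 25], [180, 5], [150, 40], [166, 28], [175, 10]]
y = [1, 1, 0, 0, 1, 0]  # 1 = the person we want, 0 = someone else
net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
net.fit(X, y)

print(matches_by_rules(167, 27))  # the rule's author decided this in advance
print(net.predict([[167, 27]]))   # the network learned its own boundary
```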

"For sure, deep learning has enabled amazing advances," David Cox told Digital Trends. "At the same time, there are concerning cracks in the wall that are starting to show."

One of these so-called cracks concerns exactly the thing that has made today's neural networks so powerful: data. Just like a human, a neural network learns from examples. But while a human might need to see only one or two training examples of an object to remember it correctly, an A.I. will require many, many more. Its accuracy depends on having large amounts of annotated data with which it can learn each new task.

That makes neural networks less good at statistically rare "black swan" problems. A black swan event, a term popularized by Nassim Nicholas Taleb, is a corner case that occurs only rarely. "Many of our deep learning solutions today, as amazing as they are, are kind of 80-20 solutions," Cox continued. "They'll get 80% of cases right, but if those corner cases matter, they'll tend to fall down. If you see an object that doesn't normally belong [in a certain place], or an object at an orientation that's slightly weird, even amazing systems will fall down."

Before he joined IBM, Cox co-founded a company, Perceptive Automata, that developed software for self-driving cars. The team had a Slack channel in which they posted funny images they had stumbled across during the course of data collection. One of them, taken at an intersection, showed a traffic light on fire. "It's one of those cases that you might never see in your lifetime," Cox said. "I don't know if Waymo and Tesla have images of traffic lights on fire in the datasets they use to train their neural networks, but I'm willing to bet if they have any, they'll only have a very few."

It's one thing for a corner case to be insignificant because it rarely happens and doesn't matter all that much when it does. Getting a bad restaurant recommendation might not be ideal, but it's probably not going to be enough to ruin your day. So long as the previous 99 recommendations the system made were good, there's no real cause for frustration. A self-driving car failing to respond properly at an intersection because of a burning traffic light or a horse-drawn carriage, on the other hand, could do a lot more than ruin your day. It might be unlikely to happen, but if it does, we want to know that the system is designed to be able to cope with it.

"If you have the ability to reason and extrapolate beyond what we've seen before, we can deal with these scenarios," Cox explained. "We know that humans can do that. If I see a traffic light on fire, I can bring a lot of knowledge to bear. I know, for example, that the light is not going to tell me whether I should stop or go. I know I need to be careful because [drivers around me will be confused]. I know that drivers coming the other way may be behaving differently because their light might be working. I can reason a plan of action that will take me where I need to go. In those kinds of safety-critical, mission-critical settings, that's somewhere I don't think that deep learning is serving us perfectly well yet. That's why we need additional solutions."

The idea of neuro-symbolic A.I. is to bring together these approaches to combine both learning and logic. Neural networks will help make symbolic A.I. systems smarter by breaking the world into symbols, rather than relying on human programmers to do it for them. Meanwhile, symbolic A.I. algorithms will help incorporate common sense reasoning and domain knowledge into deep learning. The results could lead to significant advances in A.I. systems tackling complex tasks, relating to everything from self-driving cars to natural language processing. And all while requiring much less data for training.

"Neural networks and symbolic ideas are really wonderfully complementary to each other," Cox said. "Because neural networks give you the answers for getting from the messiness of the real world to a symbolic representation of the world, finding all the correlations within images. Once you've got that symbolic representation, you can do some pretty magical things in terms of reasoning."

For instance, in the shape example that opened this article, a neuro-symbolic system would use a neural network's pattern-recognition capabilities to identify the objects on the tray. Then it would rely on symbolic A.I. to apply logic and semantic reasoning to uncover new relationships, as sketched below. Such systems have already been proven to work effectively.
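Here is a minimal sketch of that division of labor in Python. The perception step is stubbed out with hard-coded output, standing in for a trained detector, so the symbolic step can be shown on its own; the scene and question mirror the tray example, which is in the style of the CLEVR benchmark.

```python
# Neuro-symbolic split, illustrated: perception produces symbols,
# then ordinary logic answers the question over those symbols.
def neural_perception(image) -> list[dict]:
    # Stand-in for a trained object detector / attribute classifier.
    return [
        {"shape": "cube",   "material": "metal",  "size": "large"},
        {"shape": "sphere", "material": "metal",  "size": "small"},
        {"shape": "sphere", "material": "rubber", "size": "large"},
        {"shape": "cube",   "material": "rubber", "size": "small"},
    ]

def equal_counts(scene: list[dict]) -> bool:
    # Symbolic step: "are there equally many large things and metal spheres?"
    large_things = [o for o in scene if o["size"] == "large"]
    metal_spheres = [o for o in scene
                     if o["material"] == "metal" and o["shape"] == "sphere"]
    return len(large_things) == len(metal_spheres)

scene = neural_perception(image=None)   # no real image in this sketch
print(equal_counts(scene))  # -> False: 2 large things vs. 1 metal sphere
```

The appeal of the split is that the reasoning half needs no training data at all; swap in a different question, and the same perceived scene can be queried again.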

It's not just corner cases where this would be useful, either. Increasingly, it is important that A.I. systems are explainable when required. A neural network can carry out certain tasks exceptionally well, but much of its inner reasoning is black boxed, rendered inscrutable to those who want to know how it made its decision. Again, this doesn't matter so much if it's a bot that recommends the wrong track on Spotify. But if you've been denied a bank loan, rejected from a job application, or someone has been injured in an incident involving an autonomous car, you'd better be able to explain why certain recommendations have been made. That's where neuro-symbolic A.I. could come in.

A few decades ago, the worlds of symbolic A.I. and neural networks were at odds with one another. The renowned figures who championed the approaches not only believed that their approach was right; they believed that this meant the other approach was wrong. They weren't necessarily incorrect to do so. Competing to solve the same problems, and with limited funding to go around, both schools of A.I. appeared fundamentally opposed to each other. Today, it seems like the opposite could turn out to be true.

"It's really fascinating to see the younger generation," Cox said. "Most of my team are relatively junior people: fresh, excited, fairly recently out of their Ph.D.s. They just don't have any of that history. They just don't care [about the two approaches being pitted against each other], and not caring is really powerful because it opens you up and gets rid of those prejudices. They're happy to explore intersections... They just want to do something cool with A.I."

Should all go according to plan, all of us will benefit from the results.


Welcome to the roaring 2020s, the artificial intelligence decade – GreenBiz

This article first appeared in GreenBiz's weekly newsletter, VERGE Weekly, running Wednesdays.

I've long believed the most profound technology innovations are the ones we take for granted on a day-to-day basis until "suddenly" they are part of our daily existence, such as computer-aided navigation or camera-endowed smartphones. The astounding complexity of what's "inside" these inventions is what makes them seem simple.

Perhaps that's why I'm so fascinated by the intersection of artificial intelligence and sustainability: the applications being made possible by breakthroughs in machine learning, image recognition, analytics and sensors are profoundly practical. In many instances, the combination of these technologies could completely transform familiar systems and approaches used by the environmental and sustainability communities, making them far smarter with far less human intervention.

Take the camera trap, a pretty common technique used to study wildlife habits and biodiversity and one that has been supported by an array of big-name tech companies. Except what researcher has the time or bandwidth to analyze thousands, let alone millions, of images? Enter systems such as Wildlife Insights, a collaboration between Google Earth and seven organizations, led by Conservation International.

Wildlife Insights is, quite simply, the largest database of public camera-trap images in the world: it includes 4.5 million photos that have been analyzed and mapped with AI for characteristics such as country, year, species and so forth. Scientists can use it to upload their own trap photos, visualize territories and gather insights about species health.

Here's the jaw-dropper: this AI-endowed database can analyze 3.6 million photos in an hour, compared with the 300 to 1,000 images an hour that you or I could handle. Depending on the species, the accuracy of identification is between 80 and 98.6 percent. Plus, the system automatically discounts shots where no animals are present: no more blanks.
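Those throughput figures are worth a quick back-of-the-envelope check; the short Python snippet below simply restates the article's numbers as a speedup range.

```python
# Rough speedup of the Wildlife Insights pipeline over a human reviewer,
# using the per-hour figures quoted above.
ai_per_hour = 3_600_000
human_low, human_high = 300, 1_000   # images an hour, slow vs. fast reviewer

print(ai_per_hour / human_high)  # 3,600x faster than a fast human
print(ai_per_hour / human_low)   # 12,000x faster than a slow one
```

In other words, an hour of machine time does the work of roughly two to six work-years of full-time human labeling (at 2,000 labeling hours a year).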


At the same time, we are certainly right to be cautious about the potential side effects of AI. That theme comes through loud and clear in five AI predictions published by IBM in mid-December. Two resonate with me the most: first, the idea that AI will be instrumental in building trust and ensuring that data is governed in ways that are secure and reliable; and second, that before we get too excited about all the cool things AI might be able to do, we need to make sure that it doesn't exacerbate the problem. That means spending more time focused on ways to make the data centers behind AI applications less energy-intensive and less impactful from a materials standpoint.

From an ethical standpoint, I also have two big concerns: first, that sufficient energy is put into ensuring that the data behind the AI predictions we will come to rely on more heavily isn't flawed or biased. That means spending time to make sure a diverse set of human perspectives is represented and that the numbers are right in the first place. And second, we must view these systems as part of the overall solution, not as replacements for human workers.

As IBM's vice president of AI research, Sriram Raghavan, puts it: "New research from the MIT-IBM Watson AI Lab shows that AI will increasingly help us with tasks such as scheduling, but will have a less direct impact on jobs that require skills such as design expertise and industrial strategy. Expect workers in 2020 to begin seeing these effects as AI makes its way into workplaces around the world; employers have to start adapting job roles, while employees should focus on expanding their skills."

Projections by tech market research firm IDC suggest that spending on AI systems could reach $97.9 billion in 2023, roughly 2.6 times the estimated $37.5 billion spent in 2019. Why now? It's a combination of geeky factors: faster chips, better cameras, massive cloud data-processing services. Plus, did I mention that we don't really have time to waste?

Where will AI-enabled applications really make a difference for environmental and corporate sustainability? Here are five areas where I believe AI will have an especially dramatic impact over the next decade.

For more inspiration and background on the possibilities, I suggest this primer (PDF) published by the World Economic Forum. And consider this your open invitation to alert me about the intriguing applications of AI you're seeing in your own work.


Top five projections in Artificial Intelligence for 2020 – Economic Times

There has been both good and bad news about AI in 2019. Of course, bad news always gets preference and catches people's attention. Some of the prominent bad news in AI concerned fake-news generation, the creation of porn fakes from social media images, an autonomous vehicle killing a pedestrian, AI systems attacking a production facility, and data biases causing problems in AI applications. On the good-news side, we have seen innovative healthcare applications deployed in hospitals, AI tools helping specially abled people, robots being used in a growing range of domains, and AI assistants and smart devices guiding people through day-to-day queries and chores. The speed of evolution, adoption and research in AI is accelerating. It will be important and essential for society to know what lies ahead on the road, so that we are prepared for the worst and hopeful for the best.

AI will come out of the Data Conundrum

Although one of the main drivers of AI's success story in the last decade has been the availability of exponentially increasing data, the data itself is now becoming one of the key barriers to developing futuristic applications with AI. Advances in the study of human intelligence also show that our species is very effective at adapting to unseen situations, which contrasts with the current capabilities of AI.

In the past year, there has been significant activity in AI research to tackle this issue. Specific progress has been made in reinforcement learning techniques, which sidestep supervised learning's requirement for huge amounts of labeled data. DeepMind's recent achievement sits on top of the success stories in this domain: the StarCraft II system it developed, which took the throne of Grandmaster, is a game changer and an indicator of the tremendous progress and potential of this technology.
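To make the distinction concrete, here is a minimal tabular Q-learning sketch, a toy illustration of my own rather than anything resembling DeepMind's system: the agent never sees a labeled dataset; it improves purely from the rewards its own actions produce.

```python
# Tabular Q-learning on a 1-D corridor: walk right from cell 0 to the
# goal at cell 4. No labeled data; only trial, error and reward.
import random

N_STATES, ACTIONS = 5, (-1, +1)           # move left / move right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.3     # learning rate, discount, exploration

for _ in range(500):                      # episodes of trial and error
    s = 0
    while s != N_STATES - 1:
        a = (random.choice(ACTIONS) if random.random() < epsilon
             else max(ACTIONS, key=lambda act: q[(s, act)]))
        s2 = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s2 == N_STATES - 1 else 0.0
        best_next = max(q[(s2, b)] for b in ACTIONS)
        q[(s, a)] += alpha * (reward + gamma * best_next - q[(s, a)])
        s = s2

policy = [max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N_STATES - 1)]
print(policy)  # -> [1, 1, 1, 1]: the agent learned to walk right to the goal
```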

Generating data through simulation also gathered pace in the past year, and it will grow at a much faster rate in 2020. For many complex applications, it is almost impossible to have data covering every phenomenon of the problem. Autonomous vehicles, healthcare, space research, prediction of natural disasters and video generation are some of the areas where high-quality simulation data will be much more effective. In most of these cases, real historical data will be too limited to predict new situations that can occur in the future. Space research, for example, is producing new discoveries every day and nullifying old assumptions; in such a scenario, any AI application using only historical data is bound to fail. Simulations of new possibilities with high-precision software, however, can alter the direction of AI applications in these domains.
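A simulator does not need to be elaborate to illustrate the idea. The sketch below, a toy of my own invention, generates labeled braking-distance examples from a simple physics formula, including hazardous combinations that would be rare in real driving logs; ground-truth labels come for free.

```python
# Simulation-generated training data: a toy braking-distance model.
import random

def simulate_braking_event(rng: random.Random) -> dict:
    speed = rng.uniform(5, 40)                    # m/s
    friction = rng.uniform(0.2, 0.9)              # wet ice ... dry asphalt
    distance = speed**2 / (2 * 9.81 * friction)   # stopping distance in meters
    return {
        "speed": speed,
        "friction": friction,
        "label_safe": distance < 50.0,            # ground truth comes free
    }

rng = random.Random(42)
dataset = [simulate_braking_event(rng) for _ in range(10_000)]
unsafe = sum(not ex["label_safe"] for ex in dataset)
print(f"{unsafe} hazardous examples out of {len(dataset)}")
```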

Even in cases where applications are starving for additional training on local data, public and private organizations are coming forward to share and collaborate on data requirements. Leaders are becoming more conversant with the requirements of AI, and the mindset is changing.

All these factors combined will have a dramatic effect in 2020 and will lower AI's dependency on data. AI will come out of the Data Cage.

Machine Generated Content with Artificial Intelligence takes over Crowd Intelligence

We have seen prototypes and demonstrations of content-generating robots producing user reviews, news stories, celebrity images, funny videos, music compositions, short stories and artistic paintings. This is going to become more sophisticated with the advances in self-supervised learning led by NVIDIA, Google and Microsoft, which are pushing the boundaries to new frontiers.

Most popular online retail stores, food portals, hotel and travel aggregators and the like are built on customer reviews. Till now, these were written by real customers, real humans. Most of us put our faith in the crowd and took their reviews at face value; this has become a key component in driving new sales in different business segments. So we were relying on crowd intelligence. But with these new content-generation robots, all such businesses will be flooded with AI-generated reviews, and it will be very easy to fool the customer.

Another critical area is opinion formation around the news, events and issues concerning society. Social media, online campaigns and messaging through different mobile apps have become a key resource for building public sentiment on important issues. This is another area facing an immediate danger of artificial machines taking over from human beings in forming opinion. Next year this trend will consolidate, and there will be a visible effect on democratic governments. AI may become a key driver and a primary campaigner in elections. Those organizations, individuals or parties with AI supremacy will be able to win elections and drive the world.

The world will speak and understand one Language: The Language of AI

With the tremendous success of and improvements brought by BERT and GPT-2, language translation is coming of age. People talking to anyone outside their own community will be talking through a "Language of AI" middleware. In 2019, we already saw devices that can help you converse with people speaking other languages, and the offerings are going to become more capable and inclusive, with more languages being added at an amazing pace. As such technologies come into mass usage, a plethora of applications will be developed, with great impact on business and society. Movement of people, skills and knowledge across borders between speakers of different languages will become more common. This will also transform the cinema, performing arts and travel industries, and it will affect the higher education sector in diverse ways across countries. It can prove to be an economic bonanza or a disaster, depending on how countries plan for and embrace the changes. Proactive leadership that understands the future impact of these technologies will be crucial to a considered transition for society and a happy future for these countries.
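Translation "middleware" of this sort is already a few lines of code away for developers. Below is a minimal sketch, assuming the open-source Hugging Face transformers library (the article names no specific toolkit, and BERT and GPT-2 themselves are research models rather than translation products).

```python
# Minimal translation middleware using a pretrained open-source model.
from transformers import pipeline

translator = pipeline("translation_en_to_de")  # downloads a default model
result = translator("Where is the conference hall?")
print(result[0]["translation_text"])           # German output
```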

AI Boost for the powerful and AI poison for the under-privileged Groups

AI is working for the powerful in the same way industrialization and digitalization did. People with resources are deploying and utilizing new-age technologies to their advantage; they have the means to invest in new applications and become first adopters of technology. The power of artificial intelligence is being harnessed to optimize manufacturing and energy production. It is also being used to increase the efficiency of distribution networks, delivery chains and connectivity. Every big business is moving to further increase its AI adoption, including airlines, shipping corporations, mining companies and infrastructure conglomerates. Eventually, AI will further intensify the divide between the haves and the have-nots. Common people are becoming pawns in the hands of AI applications, and their privacy is under attack. As the cost of labor is devalued by automation and new technologies, wealth will be owned by a tiny percentage of the people in the world.

Genuine voices, groups and organizations should strive for the development of technology with a human face. Already, the UN and other groups are working toward the Sustainable Development Goals. Now it is time that a proper framework is put in place, involving all stakeholders, so that the pace and direction of technology remain under the control of humanity. That will involve developing comprehensive moral, ethical, legal and societal ecosystems governing the use, development and deployment of AI tools, technologies and applications.

Crazy increase in Defense Budgets for AI-enabled Weaponization

A few countries in the world are already in the advanced stages of developing lethal autonomous warfare systems. Sea Hunter, an autonomous unmanned surface vehicle for anti-submarine warfare, is already operational. China is in the final stages of deploying an army of micro swarm drone systems that can launch suicidal incognito attacks on adversary infrastructure. Other permanent members of the Security Council are working on holistic warfare systems that are fully integrated with other functions of government. With a complex set of adversaries in place, Israel is working to use AI as a force multiplier and to take fast decisions amid the prevailing nebulosity of hybrid warfare. AI also helps greatly in asymmetric warfare.

AI has unlimited potential to launch cybersecurity attacks of a complexity that will require adversaries to have superior AI capabilities to counter. As the major financial systems of the world, including banks and stock markets, are online, they may become easy targets of future AI systems for blackmailing and threatening governments. In recent years we have seen a significant increase in AI-related defense budgets to support AI-enabled weaponization, and this is going to accelerate further in the coming year(s). Precision attacks on individuals and on countries' distribution and infrastructure networks will be enhanced by AI. We have already seen a precision attack powered by US and Israeli cooperation in Iraq, which resulted in the killing of Iran's top commander.

With all these trends in the pipeline, it will be vital for organizations, countries and the world to set their AI strategies in place. Having competent people who are experts in AI will be indispensable and essential for survival in this new decade. We will need people who understand both the human and the machine-operated ecosystems and can make emotionally sound judgments that work to the benefit of humanity.

DISCLAIMER: Views expressed above are the author's own.


A reality check on artificial intelligence: Can it match the hype? – PhillyVoice.com

Health products powered by artificial intelligence, or AI, are streaming into our lives, from virtual doctor apps to wearable sensors and drugstore chatbots.

IBM boasted that its AI could outthink cancer. Others say computer systems that read X-rays will make radiologists obsolete.

"There's nothing that I've seen in my 30-plus years studying medicine that could be as impactful and transformative as AI," said Dr. Eric Topol, a cardiologist and executive vice president of Scripps Research in La Jolla, Calif. AI can help doctors interpret MRIs of the heart, CT scans of the head and photographs of the back of the eye, and could potentially take over many mundane medical chores, freeing doctors to spend more time talking to patients, Topol said.

Even the Food and Drug Administration, which has approved more than 40 AI products in the past five years, says "the potential of digital health is nothing short of revolutionary."

Yet many health industry experts fear AI-based products won't be able to match the hype. Many doctors and consumer advocates fear that the tech industry, which lives by the mantra "fail fast and fix it later," is putting patients at risk, and that regulators aren't doing enough to keep consumers safe.

Early experiments in AI provide a reason for caution, said Mildred Cho, a professor of pediatrics at Stanford's Center for Biomedical Ethics.

Systems developed in one hospital often flop when deployed in a different facility, Cho said. Software used in the care of millions of Americans has been shown to discriminate against minorities. And AI systems sometimes learn to make predictions based on factors that have less to do with disease than with the brand of MRI machine used, the time a blood test is taken or whether a patient was visited by a chaplain. In one case, AI software incorrectly concluded that people with pneumonia were less likely to die if they had asthma, an error that could have led doctors to deprive asthma patients of the extra care they need.

"It's only a matter of time before something like this leads to a serious health problem," said Dr. Steven Nissen, chairman of cardiology at the Cleveland Clinic.

Medical AI, which pulled in $1.6 billion in venture capital funding in the third quarter alone, is "nearly at the peak of inflated expectations," concluded a July report from the research company Gartner. "As the reality gets tested, there will likely be a rough slide into the trough of disillusionment."

That reality check could come in the form of disappointing results when AI products are ushered into the real world. Even Topol, the author of "Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again," acknowledges that many AI products are little more than hot air. "It's a mixed bag," he said.

Experts such as Dr. Bob Kocher, a partner at the venture capital firm Venrock, are blunter. "Most AI products have little evidence to support them," Kocher said. Some risks won't become apparent until an AI system has been used by large numbers of patients. "We're going to keep discovering a whole bunch of risks and unintended consequences of using AI on medical data," Kocher said.

None of the AI products sold in the U.S. have been tested in randomized clinical trials, the strongest source of medical evidence, Topol said. The first and only randomized trial of an AI system, which found that colonoscopy with computer-aided diagnosis found more small polyps than standard colonoscopy, was published online in October.

Few tech startups publish their research in peer-reviewed journals, which allow other scientists to scrutinize their work, according to a January article in the European Journal of Clinical Investigation. Such "stealth research," described only in press releases or promotional events, often overstates a company's accomplishments.

And although software developers may boast about the accuracy of their AI devices, experts note that AI models are mostly tested on computers, not in hospitals or other medical facilities. Using unproven software may make patients into unwitting guinea pigs, said Dr. Ron Li, medical informatics director for AI clinical integration at Stanford Health Care.

AI systems that learn to recognize patterns in data are often described as "black boxes" because even their developers don't know how they have reached their conclusions. Given that AI is so new, and many of its risks unknown, the field needs careful oversight, said Pilar Ossorio, a professor of law and bioethics at the University of Wisconsin-Madison.

Yet the majority of AI devices don't require FDA approval.

"None of the companies that I have invested in are covered by the FDA regulations," Kocher said.

Legislation passed by Congress in 2016 and championed by the tech industry exempts many types of medical software from federal review, including certain fitness apps, electronic health records and tools that help doctors make medical decisions.

There's been little research on whether the 320,000 medical apps now in use actually improve health, according to a report on AI published Dec. 17 by the National Academy of Medicine.

"Almost none of the [AI] stuff marketed to patients really works," said Dr. Ezekiel Emanuel, professor of medical ethics and health policy in the Perelman School of Medicine at the University of Pennsylvania.

The FDA has long focused its attention on devices that pose the greatest threat to patients. And consumer advocates acknowledge that some devices, such as ones that help people count their daily steps, need less scrutiny than ones that diagnose or treat disease.

Some software developers don't bother to apply for FDA clearance or authorization, even when legally required, according to a 2018 study in Annals of Internal Medicine.

Industry analysts say that AI developers have little interest in conducting expensive and time-consuming trials. "It's not the main concern of these firms to submit themselves to rigorous evaluation that would be published in a peer-reviewed journal," said Joachim Roski, a principal at Booz Allen Hamilton, a technology consulting firm, and co-author of the National Academy's report. "That's not how the U.S. economy works."

But Oren Etzioni, chief executive officer at the Allen Institute for AI in Seattle, said AI developers have a financial incentive to make sure their medical products are safe.

"If failing fast means a whole bunch of people will die, I don't think we want to fail fast," Etzioni said. "Nobody is going to be happy, including investors, if people die or are severely hurt."

The FDA has come under fire in recent years for allowing the sale of dangerous medical devices, which have been linked by the International Consortium of Investigative Journalists to 80,000 deaths and 1.7 million injuries over the past decade.

Many of these devices were cleared for use through a controversial process called the 510(k) pathway, which allows companies to market moderate-risk products with no clinical testing as long as they're deemed similar to existing devices.

In 2011, a committee of the National Academy of Medicine concluded the 510(k) process is so fundamentally flawed that the FDA should throw it out and start over.

Instead, the FDA is using the process to greenlight AI devices.

Of the 14 AI products authorized by the FDA in 2017 and 2018, 11 were cleared through the 510(k) process, according to a November article in JAMA. None of these appear to have had new clinical testing, the study said. The FDA cleared an AI device designed to help diagnose liver and lung cancer in 2018 based on its similarity to imaging software approved 20 years earlier. That software had itself been cleared because it was deemed substantially equivalent to products marketed before 1976.

AI products cleared by the FDA today are largely "locked," so that their calculations and results will not change after they enter the market, said Bakul Patel, director for digital health at the FDA's Center for Devices and Radiological Health. The FDA has not yet authorized "unlocked" AI devices, whose results could vary from month to month in ways that developers cannot predict.

To deal with the flood of AI products, the FDA is testing a radically different approach to digital device regulation, focusing on evaluating companies, not products.

The FDA's pilot pre-certification program, launched in 2017, is designed to reduce the time and cost of market entry for software developers, imposing the "least burdensome" system possible. FDA officials say they want to keep pace with AI software developers, who update their products much more frequently than makers of traditional devices, such as X-ray machines.

Scott Gottlieb said in 2017, while he was FDA commissioner, that government regulators need to make sure the agency's approach to innovative products "is efficient and that it fosters, not impedes, innovation."

Under the plan, the FDA would pre-certify companies that demonstrate "a culture of quality and organizational excellence," which would allow them to provide less upfront data about devices.

Pre-certified companies could then release devices with a streamlined review or no FDA review at all. Once products are on the market, companies will be responsible for monitoring their own products' safety and reporting back to the FDA. Nine companies have been selected for the pilot: Apple, FitBit, Samsung, Johnson & Johnson, Pear Therapeutics, Phosphorus, Roche, Tidepool and Verily Life Sciences.

High-risk products, such as software used in pacemakers, will still get a comprehensive FDA evaluation. "We definitely don't want patients to be hurt," said Patel, who noted that devices cleared through pre-certification can be recalled if needed. "There are a lot of guardrails still in place."

But research shows that even low- and moderate-risk devices have been recalled due to serious risks to patients, said Diana Zuckerman, president of the National Center for Health Research. "People could be harmed because something wasn't required to be proven accurate or safe before it is widely used."

Johnson & Johnson, for example, has recalled hip implants and surgical mesh.

In a series of letters to the FDA, the American Medical Association and others have questioned the wisdom of allowing companies to monitor their own performance and product safety.

"The honor system is not a regulatory regime," said Dr. Jesse Ehrenfeld, who chairs the physician group's board of trustees.

In an October letter to the FDA, Sens. Elizabeth Warren (D-Mass.), Tina Smith (D-Minn.) and Patty Murray (D-Wash.) questioned the agency's ability to ensure company safety reports are "accurate, timely and based on all available information."

Some AI devices are more carefully tested than others.

An AI-powered screening tool for diabetic eye disease was studied in 900 patients at 10 primary care offices before being approved in 2018. The manufacturer, IDx Technologies, worked with the FDA for eight years to get the product right, said Dr. Michael Abramoff, the company's founder and executive chairman.

The test, sold as IDx-DR, screens patients for diabetic retinopathy, a leading cause of blindness, and refers high-risk patients to eye specialists, who make a definitive diagnosis.

IDx-DR is the first "autonomous" AI product, one that can make a screening decision without a doctor. The company is now installing it in primary care clinics and grocery stores, where it can be operated by employees with a high school diploma. Abramoff's company has taken the unusual step of buying liability insurance to cover any patient injuries.

Yet some AI-based innovations intended to improve care have had the opposite effect.

A Canadian company, for example, developed AI software to predict a person's risk of Alzheimer's based on their speech. Predictions were more accurate for some patients than others. Difficulty finding the right word may be due to unfamiliarity with English, rather than to cognitive impairment, said co-author Frank Rudzicz, an associate professor of computer science at the University of Toronto.

Doctors at New York's Mount Sinai Hospital hoped AI could help them use chest X-rays to predict which patients were at high risk of pneumonia. Although the system made accurate predictions from X-rays shot at Mount Sinai, the technology flopped when tested on images taken at other hospitals. Eventually, researchers realized the computer had merely learned to tell the difference between that hospital's portable chest X-rays, taken at a patient's bedside, and those taken in the radiology department. Doctors tend to use portable chest X-rays for patients too sick to leave their room, so it's not surprising that these patients had a greater risk of lung infection.

DeepMind, a company owned by Google, has created an AI-based mobile app that can predict which hospitalized patients will develop acute kidney failure up to 48 hours in advance. A blog post on the DeepMind website described the system, used at a London hospital, as a "game changer." But the AI system also produced two false alarms for every correct result, according to a July study in Nature; put differently, only about one alert in three was a true positive. That may explain why patients' kidney function didn't improve, said Dr. Saurabh Jha, associate professor of radiology at the Hospital of the University of Pennsylvania. Any benefit from early detection of serious kidney problems may have been diluted by a high rate of overdiagnosis, in which the AI system flagged borderline kidney issues that didn't need treatment, Jha said. Google had no comment in response to Jha's conclusions.

False positives can harm patients by prompting doctors to order unnecessary tests or withhold recommended treatments, Jha said. For example, a doctor worried about a patient's kidneys might stop prescribing ibuprofen, a generally safe pain reliever that poses a small risk to kidney function, in favor of an opioid, which carries a serious risk of addiction.

As these studies show, software with impressive results in a computer lab can founder when tested in real time, Stanford's Cho said. That's because diseases are more complex, and the health care system far more dysfunctional, than many computer scientists anticipate.

Many AI developers cull electronic health records because they hold huge amounts of detailed data, Cho said. But those developers often aren't aware that they're building atop a deeply broken system. Electronic health records were developed for billing, not patient care, and are filled with mistakes or missing data.

A KHN investigation published in March found sometimes life-threatening errors in patients' medication lists, lab tests and allergies.

In view of the risks involved, doctors need to step in to protect their patients' interests, said Dr. Vikas Saini, a cardiologist and president of the nonprofit Lown Institute, which advocates for wider access to health care.

"While it is the job of entrepreneurs to think big and take risks," Saini said, "it is the job of doctors to protect their patients."

Kaiser Health News (KHN) is a national health policy news service. It is an editorially independent program of the Henry J. Kaiser Family Foundation, which is not affiliated with Kaiser Permanente.



Illinois regulates artificial intelligence like HireVue's, used to analyze online job interviews – Vox.com

Artificial intelligence is increasingly playing a role in companies' hiring decisions. Algorithms help target ads about new positions, sort through resumes, and even analyze applicants' facial expressions during video job interviews. But these systems are opaque, and we often have no idea how artificial intelligence-based systems are sorting, scoring, and ranking our applications.

It's not just that we don't know how these systems work. Artificial intelligence can also introduce bias and inaccuracy into the job application process, and because these algorithms largely operate in a black box, it's not really possible to hold a company that uses a problematic or unfair tool accountable.

A new Illinois law, one of the first of its kind in the US, is supposed to provide job candidates a bit more insight into how these unregulated tools actually operate. But it's unlikely the legislation will change much for applicants. That's because it only applies to a limited type of AI, and it doesn't ask much of the companies deploying it.

Set to take effect January 1, 2020, the state's Artificial Intelligence Video Interview Act has three primary requirements. First, companies must notify applicants that artificial intelligence will be used to consider their fitness for a position. Those companies must also explain how their AI works and what general types of characteristics it considers when evaluating candidates. In addition to requiring applicants' consent to use AI, the law includes two provisions meant to protect their privacy: it limits who can view an applicant's recorded video interview to those whose expertise or technology is necessary, and it requires that companies delete any video an applicant submits within a month of the applicant's request.

As Aaron Rieke, the managing director of the technology rights nonprofit Upturn, told Recode about the law, "This is a pretty light touch on a small part of the hiring process." For one thing, the law only covers artificial intelligence used in videos, which constitutes a small share of the AI tools that can be used to assess job applicants. And the law doesn't guarantee that you can opt out of an AI-based review of your application and still be considered for a role. (All the law says is that a company has to gain your consent before using AI; it doesn't require that hiring managers give you an alternative method.)

"It's hard to feel that that consent is going to be super meaningful if the alternative is that you get no shot at the job at all," said Rieke. He added that there's no guarantee that the consent and explanation the law requires will be useful; for instance, the explanation could be so broad and high-level that it's not helpful.

"If I were a lawyer for one of these vendors, I would say something like, 'Look, we use the video, including the audio language and visual content, to predict your performance for this position using tens of thousands of factors,'" said Rieke. "If I was feeling really conservative, I might name a couple general categories of competency." (He also points out that the law doesn't define artificial intelligence, which means it's difficult to tell which companies and what types of systems the law actually applies to.)

Because the law is limited to AI that's used in video interviews, the company it most clearly applies to is Utah-based HireVue, a popular job interview platform that offers employers an algorithm-based analysis of recorded video interviews. Here's how it works: You answer pre-selected questions over your computer or phone camera. Then, an algorithm developed by HireVue analyzes how you've answered the questions, and sometimes even your facial expressions, to make predictions about your fit for a particular position.
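
HireVue does not publish the internals of its models, so any concrete rendering is guesswork. Purely as an illustration of the general shape of an algorithmic interview-scoring pipeline, here is a toy sketch in Python; every feature name, weight, and signal below is invented, and a real system would use far more factors and an opaque learned model rather than a hand-written linear score.

```python
# Toy sketch of an interview-scoring pipeline. This is NOT HireVue's
# system; every feature and weight is invented purely for illustration.

def extract_features(transcript: str, pitch_variation: float, smile_ratio: float) -> dict:
    """Turn raw interview signals into numeric features (all hypothetical)."""
    words = [w.strip(",.!?").lower() for w in transcript.split()]
    return {
        "word_count": len(words),
        "filler_rate": sum(w in {"um", "uh", "like"} for w in words) / max(len(words), 1),
        "pitch_variation": pitch_variation,  # would come from an audio-analysis step
        "smile_ratio": smile_ratio,          # would come from a facial-analysis step
    }

# Hypothetical weights a vendor might fit against past "top performer" data.
WEIGHTS = {"word_count": 0.001, "filler_rate": -2.0,
           "pitch_variation": 0.5, "smile_ratio": 1.0}

def score(features: dict) -> float:
    """Linear score for readability; real systems are far less transparent."""
    return sum(WEIGHTS[name] * value for name, value in features.items())

candidate = extract_features("I led the project and, um, shipped it on time", 0.4, 0.6)
print(round(score(candidate), 3))
```

Even this toy version makes the transparency problem concrete: a candidate has no way of knowing that, say, a high filler-word rate is quietly costing them points.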

HireVue says it already has about 100 clients using this artificial intelligence-based feature, including major companies like Unilever and Hilton.

Some candidates who have used HireVue's system complain that the process is awkward and impersonal. But that's not the only problem. Algorithms are not inherently objective; they reflect the data used to train them and the people who design them. That means they can inherit, and even amplify, societal biases, including racism and sexism. And even if an algorithm is explicitly instructed not to consider factors like a person's name, it can still learn proxies for protected identities (for instance, an algorithm could learn to discriminate against people who have gone to a women's college).
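
To see how a proxy can smuggle a protected attribute back into a model, consider a small synthetic experiment, sketched here with scikit-learn; the data, the "women's college" feature, and the bias in the historical labels are all fabricated for illustration.

```python
# Synthetic illustration of proxy discrimination: "gender" is withheld
# from the model, but a correlated feature lets it reconstruct the bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
gender = rng.integers(0, 2, n)                    # 1 = woman (never shown to the model)
college = (gender == 1) & (rng.random(n) < 0.3)   # proxy: only women attend in this toy data
skill = rng.normal(0, 1, n)
# Biased historical labels: past hiring favored men regardless of skill.
hired = (skill + 0.8 * (gender == 0) + rng.normal(0, 0.5, n)) > 0.5

X = np.column_stack([skill, college])             # gender itself is excluded
model = LogisticRegression().fit(X, hired)
print("weight on proxy feature:", model.coef_[0][1])  # negative: the proxy stands in for gender
```

Because the college feature identifies (in this toy data) a subset of women, and the historical labels penalized women, the model assigns it a negative weight even though gender never appears in its inputs.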

Facial recognition tech, in particular, has faced criticism for struggling to identify and characterize the faces of people with darker skin, women, and trans and non-binary people, among other minority groups. Critics also say that emotion (or affect) recognition technology, which purports to make judgments about a person's emotions based on their facial expressions, is scientifically flawed. That's why one research nonprofit, the AI Now Institute, called for the prohibition of such technology in high-stakes decision-making, including job applicant vetting.

"[W]hile you're being interviewed, there's a camera that's recording you, and it's recording all of your micro facial expressions and all of the gestures you're using, the intonation of your voice, and then pattern matching those things that they can detect with their highest performers," AI Now Institute co-founder Kate Crawford told Recode's Kara Swisher earlier this year. "[It] might sound like a good idea, but think about how you're basically just hiring people who look like the people you already have."

Even members of Congress are worried about that technology. In 2018, US Sens. Kamala Harris, Elizabeth Warren, and Patty Murray wrote to the Equal Employment Opportunity Commission, the federal agency charged with investigating employment discrimination, asking whether such facial analysis technology could violate anti-discrimination laws.

Despite being one of the first laws to regulate these tools, the Illinois law doesn't address concerns about bias. No federal legislation explicitly regulates these AI-based hiring systems. Instead, employment lawyers say such AI tools are generally subject to the Uniform Guidelines, employment discrimination standards created by several federal agencies back in 1978 (you can read more about that here).

The EEOC did not respond to Recode's multiple requests for comment.

Meanwhile, it's not clear how, under Illinois' new law, companies like HireVue will go about explaining the characteristics in applicants that its AI considers, given that the company claims that its algorithms can weigh up to tens of thousands of factors (it says it removes factors that are not predictive of job success).

The law also doesn't explain what an applicant might be entitled to if a company violates one of its provisions. Law firms advising clients on compliance have also noted that it's not clear whether the law applies exclusively to businesses filling a position in Illinois, or just to interviews that take place in the state. Neither Illinois State Sen. Iris Martinez nor Illinois Rep. Jaime M. Andrade, legislators who worked on the law, responded to a request for comment by the time of publication.

HireVue's CEO Kevin Parker said in a blog post that the law entails very little, if any, change because its platform already complies with GDPR's principles of transparency, privacy, and the right to be forgotten. "[W]e believe every job interview should be fair and objective, and that candidates should understand how they're being evaluated. This is fair game, and it's good for both candidates and companies," he wrote in August.

A spokesperson for HireVue said the decision to provide an alternative to an AI-based analysis is up to the company that's hiring, but argued that those alternatives can be more time-consuming for candidates. If a candidate believes that a system is biased, the spokesperson said, recourse options are the same as when a candidate believes that any part of the hiring process, or any individual interviewer, was unfairly biased against them.

Under the new law in Illinois, if you participate in a video interview that uses AI tech, you can ask for your footage to be deleted after the fact. But it's worth noting that the law appears to still give the company enough time to train its model on the results of your job interview, even if you think the final decision was problematic.

"This gives these AI hiring companies room to continue to learn," says Rieke. "They're going to delete the underlying video, but any learning or improvement to their systems they get to keep."

Open Sourced is made possible by Omidyar Network. All Open Sourced content is editorially independent and produced by our journalists.

Link:

Illinois regulates artificial intelligence like HireVue's used to analyze online job interviews - Vox.com

How This Cofounder Created An Artificial Intelligence Styling Company To Help Consumers Shop – Forbes

Michelle Harrison Bacharach, the cofounder and CEO of FindMine, an AI styling company, has designed a technology, Complete the Look, that creates complete outfits around retailers' products. It blends the art of styling with the ease of automation to represent a company's brand(s) at scale and help answer the question "how do I wear this?" The technology shows shoppers how to wear clothes with accessories. The company uses artificial intelligence to scale out the guidance that the retailer would provide. FindMine serves over 1.5 billion requests for outfits per year across e-commerce and mobile platforms, and increases AOV (average order value) and conversions by up to 150% with full outfits.

Michelle Bacharach, Cofounder and CEO of FINDMINE, an AI styling company.

"I'm picky about user experiences," Bacharach explains. "When I was a consumer in my life, shopping, I was always frustrated by the friction that it caused that I was sold a product in isolation. If I buy a scarf, what do I wear with the scarf? What are the shoes and the top and the jacket? Just answer that question for me when I buy the scarf. Why is it so hard? I started asking those questions as a consumer. Then I started looking into why retailers don't do that. It's because they have a bunch of friction on their side. They have to put together the shirt and the shoe and the pant and the bag and the jacket that go with that outfit. So, because it's manual, and they have tens of thousands of products and products come and go so frequently, it's literally impossible to keep up with. It's physically impossible for them to give an answer to every consumer… My hypothesis was that I would spend more money if they sold me all the other pieces and showed me how to use it. I started looking into [the hypothesis], and it turned out to be true; consumers spend more money when they actually understand the whole package."

Bacharach began working for a startup in Silicon Valley after graduating from college. She focused on user experience analysis and product management, which meant she looked at customer service tickets and the analytical data around how customers were using the products. After the analysis, she'd then make fixes, suggest new features, and prioritize those with the tech team.

She always knew she wanted to start her own company. Working at the startup provided her the opportunity to understand how all the different sectors of an organization operated. However, she had always been curious about the possibility of acting. She decided to move to Los Angeles to try to become a professional actress. "I ended up deciding that the part of acting that I liked the most was auditioning and competing for the job and positioning and marketing myself," she explains. "If you talk to any other actors, that's the part they hate the most. I realized that I should go to business school and focus on the entertainment industry because that's the part of it that really resonated with me."

FINDMINE is part of the SAP CX innovation ecosystem and is currently part of the latest SAP.iO Foundry startup accelerator in San Francisco.

After graduating from business school, Bacharach entered the corporate world, where she worked on corporate strategy and product management. The company she worked for underwent a culture shift, which made it difficult to keep working there. At that point, she had two options: She could either find another position with a different company or start her own business venture. "I didn't really know what that thing was going to be," Bacharach expresses. "I used that as kind of a forcing function to sit down with my list of ideas and decide, what the heck am I going to work on. I thought about it as time off, like a six-month sabbatical, to try to figure out what we're doing. Then I'm going to get invested in from my idea, and then I'm going to be back on the salary wagon and be able to make a living again. I thought it's all going to be so easy. That's what started the journey of me becoming an entrepreneur." It took two-and-a-half years before she earned a livable salary.

"I worked for a startup," she states. "I watched other people do it. I was a consultant to start off. I worked in corporate America. So, I saw the other side of the coin in the way that that world functions. I didn't want to do this for the long term. I like the early stages of stuff. In retrospect, I guess I did prepare myself, but I didn't know it while I was going through it. I just jumped in."

As Bacharach continues to expand FindMine with its ever-updating artificial intelligence technology, she focuses on a few essential steps to help her with each pivot.

Michelle Bacharach, Cofounder and CEO of FINDMINE, sat down with John Furrier at the Intel AI Lounge at South by Southwest 2017 in Austin, Texas.

"Don't worry about getting 100% right," Bacharach concludes. "Don't look at people who are successful and say, oh, wow. They're so different from me. I can never do that. Look at them and say they're exactly the same as me. They're just two or three years ahead in terms of their learnings and findings. I have to do that same thing, but for whatever I want to start."

See the article here:

How This Cofounder Created An Artificial Intelligence Styling Company To Help Consumers Shop - Forbes

The U.S. Patent and Trademark Office Takes on Artificial Intelligence – JD Supra


More here:

The U.S. Patent and Trademark Office Takes on Artificial Intelligence - JD Supra

Baidu looks to work with Indian institutions on AI – BusinessLine

China's largest search engine Baidu is looking to work with Indian institutions in future to make a better world through innovation, said Robin Li, Co-Founder, CEO and Chairman of Baidu.

"India is one of the fastest growing smartphone markets in the world, and a very large developing country, right next to China. Both the countries have been growing at a fast pace in the last few decades. For the next decade, we will be more optimistic," he said in a talk at the IIT Madras tech fest, Shaastra 2020, titled "Innovation in the age of Artificial Intelligence (AI)".

Outside China, Baidu has a presence in markets like Japan, Thailand and Egypt. However, the company's main product, its search engine, is very much in China. "Once in the age of AI, search will be very different from what is seen today. Once we transform the search into a different product, we will be ready to launch that internationally," he said, without committing anything specific on a foray into India.

Since founding Baidu in January 2000, Robin has led the company to be China's largest search engine with over 70 per cent market share. China is among the four countries globally, alongside the US, Russia and South Korea, to possess its own core search engine technology. Through innovations ranging from Box Computing to Baidu's Open Data and Open App Platform, Robin has substantially advanced the theoretical framework of China's Internet sciences, propelling Baidu to be the vanguard of China's Internet industry. Baidu is also the largest AI platform company in China.

Li said that the previous decade was that of the Internet, but the coming decade is that of the intelligent economy, with new modes of human-machine interaction. AI is transforming a lot of industries, delivering higher efficiency and lower service costs. For instance, banks are finding it difficult to open branches, but a virtual assistant can be used to open an account. Customers are more comfortable with a virtual person than with a real person.

In the education sector, every student can have a personal assistant, while the pharma industry can accelerate the pace of drug development, with many start-ups already doing this. AI is also transforming transportation, helping reduce traffic delays by 20-30 per cent, he said.

In China, Baidu is using AI to help find missing people, and already 9,000 missing people have been found. "AI can make one immortal. When everything about you can be digitised, computers can learn all about you, creating a digital copy of anyone," he said.

"In the past ten years, people were dependent on mobile phones. But in the next ten years, people will be less dependent on mobile phones because wherever they go there will be surrounding sensors, infrastructure that can answer the questions that concern you. You may not be required to pull out your mobile phone every time to find an answer. This is the power of AI," he added.

Visit link:

Baidu looks to work with Indian institutions on AI - BusinessLine

Top Movies Of 2019 That Depicted Artificial Intelligence (AI) – Analytics India Magazine

Artificial intelligence (AI) is creating a great impact on the world by enabling computers to learn on their own. While in the real world AI is still focused on solving narrow problems, we see a whole different face of AI in the fictional world of science fiction movies, which predominantly depict the rise of artificial general intelligence as a threat to human civilization. Continuing that trend, here we take a look at how artificial intelligence was depicted in 2019 movies.

A warning in advance: the following listicle is filled with SPOILERS.

Terminator: Dark Fate, the sixth film of the Terminator movie franchise, featured a super-intelligent Terminator designated Rev-9 (played by Gabriel Luna), sent from the future to kill a young woman (Dani) who is set to become an important figure in the Human Resistance against Skynet. To fight the Rev-9 Terminator, the Human Resistance of the future also sends Grace, an augmented soldier, back in time to defend Dani. Grace is joined by Sarah Connor and the now-obsolete, ageing model of T-800 Terminator, the original killer robot from the first movie (1984).

We all know Tony Stark as the man of advanced technology, and when it comes to artificial intelligence, Stark has nothing short of state-of-the-art technology in Marvel's cinematic universe. One such artificial intelligence was Even Dead, I'm The Hero (E.D.I.T.H.), which we witnessed in the 2019 movie Spider-Man: Far From Home. EDITH is an augmented reality security defence and artificial tactical intelligence system created by Tony Stark and given to Peter Parker following Stark's death. It is encompassed in a pair of sunglasses and gives its users access to Stark Industries' global satellite network, along with an array of missiles and drones.

I Am Mother is a post-apocalyptic movie which was released in 2019. The film's plot is focused on a mother-daughter relationship where the mother is a robot designed to repopulate Earth. The robot mother takes care of her human child, known as Daughter, who was born via artificial gestation. The duo stays in a secure bunker alone until another human woman arrives there. The daughter now faces a predicament of whom to trust: her robot mother or a fellow human who is asking the daughter to come with her.

The Wandering Earth is another 2019 Chinese post-apocalyptic film, with a plot involving Earth's imminent crash into another planet and the efforts of a group of family members and soldiers to save it. The film's artificial intelligence character is MOSS, a computer system programmed to warn people in the Earth space station. A significant subplot of the film is focused on protagonist Liu Peiqiang's struggle with MOSS, which forced the space station to go into low energy mode during the crash, as per its programming from the United Earth Government. In the end, Liu Peiqiang resists and ultimately sets MOSS on fire to help save the Earth.

James Cameron's futuristic action epic for 2019, Alita: Battle Angel, is a sci-fi action film which depicts human civilization in an extremely advanced stage of transhumanism. The movie describes a dystopian future where robots and autonomous systems are extremely powerful. To elaborate, in one of the initial scenes of the movie, Ido attaches a cyborg body to a human brain he found (from another cyborg) and names her Alita after his deceased daughter, an epitome of the advancements in AI and robotics.

Jexi is the only Hollywood rom-com movie depicting artificial intelligence in 2019. The movie features an AI-based operating system called Jexi with recognizable human behaviour, and reminds the audience of the previously acclaimed film Her, which was released in 2013. But unlike Her, the movie goes the other way around, depicting how the AI system becomes emotionally attached to its socially awkward owner, Phil. The biggest shock of the comedy film is when Jexi, the AI that lives inside Phil's cellphone, acts to control his life and even chases him angrily using a self-driving car.

Hi, AI is a German documentary which was released in early 2019. The documentary is based on Chuck's relationship with Harmony, an advanced humanoid robot. The film's depiction of artificial intelligence is in sharp contrast with other fictional movies on AI. The documentary also shows that even though research is moving in the direction of creating advanced robots, interactions with robots still don't have the same depth as human conversations. The film won the Max Ophüls Prize for best documentary for the year.


Vishal Chawla is a senior tech journalist at Analytics India Magazine (AIM) and writes on the latest in the world of analytics, AI and other emerging technologies. Previously, he was a senior correspondent for IDG CIO and ComputerWorld. Write to him at vishal.chawla@analyticsindiamag.com

Read more:

Top Movies Of 2019 That Depicted Artificial Intelligence (AI) - Analytics India Magazine

Shocking ways AI technology will revolutionise every day industries in YOUR lifetime – Express.co.uk

Science fiction has helped shape society's understanding and expectations of advanced AI technology in the future. However, Shadow Robot Company director Rich Walker argued artificial intelligence technology could be used in industries we would not expect. While speaking to Express.co.uk, he explained that new AI tech could be introduced in sectors such as estate agency or booking services.

He added that massive leaps in AI capabilities in recent years had raised expectations of what people believe artificial intelligence can be used for.

He said: "AI technology has really been promising a lot for a very long time.

"In the last few years we have really started to see some very impressive and surprising successes.

"Self-driving cars are starting to be something that has gone from a complete fairy pipe-dream to the question of when are we going to see a self-driving car, because surely we can get one now."

"I think what will happen in the next couple of years is we will see some areas that we weren't expecting suddenly being done by AI.

"Everyone will be like, yes, of course, we could have artificial intelligence in this industry.

"Maybe it will be an estate agency or train booking.

"Something that is a complicated, annoying problem."

Link:

Shocking ways AI technology will revolutionise every day industries in YOUR lifetime - Express.co.uk

Artificial intelligence takes scam to a whole new level – The Jackson Sun

RANDY HUTCHINSON, Better Business Bureau | Published 12:54 a.m. CT Jan. 1, 2020

Imagine you wired hundreds of thousands of dollars somewhere based on a call from your boss, whose voice you recognized, only to find out you were talking to a machine and the money is lost. One company executive doesn't have to imagine it happening. He and his company were victims of what some experts say is one of the first cases of voice-mimicking software, a form of artificial intelligence (AI), being used in a scam.

In a common version of the Business Email Compromise scam, an employee in a company's accounting department wires money somewhere based on what appears to be a legitimate email from the CEO, CFO or other high-ranking executive. I wrote a column last year noting that reported losses to the scam had grown from $226 million in 2014 to $676 million in 2017. The FBI says losses doubled in 2018 to $1.8 billion, and recommends making a phone call to verify the legitimacy of the request rather than relying on an email.

But now you may not even be able to trust voice instructions. The CEO of a British firm received what he thought was a call from the CEO of his parent company in Germany instructing him to wire $243,000 to the bank account of a supplier in Hungary. The call actually originated from a crook using AI voice technology to mimic the boss's voice. The crooks moved the money from Hungary to Mexico and on to other locations.

An executive with the firm's insurance company, which ultimately covered the loss, told The Wall Street Journal that the victim recognized the subtle German accent in his boss's voice, and moreover that it carried the man's melody. The victim became suspicious when he received a follow-up call from the boss, originating in Austria, requesting another payment. He didn't make that one, but the damage was already done.

Google says crooks may also synthesize speech to fool voice authentication systems or create forged audio recordings to defame public figures. It launched a challenge to researchers to develop countermeasures against spoofed speech.

Many companies are working on voice-synthesis software and some of it is available for free. The insurer thinks the crooks used commercially available software to steal the $243,000 from its client.

Many scams rely on victims letting their emotions outrun their common sense. An example is the Grandparent Scam, in which an elderly person receives a phone call purportedly from a grandchild in trouble and needing money. Victims have panicked and wired thousands of dollars before ultimately determining that the grandchild was safe and sound at home.

The crooks often invent some reason why the grandchild's voice may not sound right, such as the child having been in an accident or it being a poor connection. How much more successful might that scam be if the voice actually sounds like the grandchild? The executive who wired the $243,000 said he thought the request was strange, but the voice sounded so much like his boss that he felt he had to comply.

The BBB recommends companies install additional verification steps for wiring money, including calling the requestor back on a number known to be authentic.

Randy Hutchinson is the president of the Better Business Bureau of the Mid-South. Reach him at 901-757-8607.

Continued here:

Artificial intelligence takes scam to a whole new level - The Jackson Sun

Global Industrial Artificial Intelligence Market 2019 Research by Business Analysis, Growth Strategy and Industry Development to 2024 – Food &…

In the market research study, namely Global Industrial Artificial Intelligence Market 2019 by Manufacturers, Countries, Type and Application, Forecast to 2024, a comprehensive discussion of the market's current flows and patterns, market share, sales volume, informative diagrams, industry development drivers, supply and demand, and other key aspects is given. It is an important resource for various stakeholders such as traders, CEOs, buyers, and providers. The report provides guidance for exploring opportunities in the market, adding global and regional data as well as profiles of top key players. The global Industrial Artificial Intelligence market research report is an in-depth analysis that focuses on market development trends, opportunities, challenges, drivers, and limitations.

The market is analyzed by company and region, based on rate, value and gross. The report tracks major market events such as product launches, technological developments, mergers and acquisitions, and the innovative business strategies adopted by key market players. It contains appreciable consumption figures by type and application, highlights the market's current and forecast development areas, and covers an in-depth analysis of market size (revenue), market share, major market segments, different geographic zones, the forecast for 2019-2024, and key market players.

DOWNLOAD FREE SAMPLE REPORT: https://www.fiormarkets.com/report/global-industrial-artificial-intelligence-market-2018-by-manufacturers-299826.html#sample

This report focuses on the top manufacturers in the global Industrial Artificial Intelligence market, with production, price, revenue, and market share for each manufacturer, covering: Intel Corporation, Siemens AG, IBM Corporation, Alphabet Inc, Microsoft Corporation, Cisco Systems, Inc, General Electric Company, Data RPM, Sight Machine, General Vision, Inc, Rockwell Automation Inc, Mitsubishi Electric Corporation, Oracle Corporation, SAP SE

What Makes The Report Excellent?

The report offers information on market segmentation by type, application, and region. It specifies which product has the highest penetration, profit margins, and R&D status. The research covers the current size of the global Industrial Artificial Intelligence market and its growth ratio based on historical statistics for 2014-2018. Each company profiled in the report is assessed for its market growth.

This Report Segments The Market:

Market by product type, 2014-2024: Type 1, Type 2, Others

Market by application, 2014-2024: Application 1, Application 2, Others

For a comprehensive understanding of market dynamics, the Industrial Artificial Intelligence market is analyzed across key geographies namely: North America (United States, Canada and Mexico), Europe (Germany, France, UK, Russia and Italy), Asia-Pacific (China, Japan, Korea, India and Southeast Asia), South America (Brazil, Argentina, Colombia etc.), Middle East and Africa (Saudi Arabia, UAE, Egypt, Nigeria and South Africa)

ACCESS FULL REPORT: https://www.fiormarkets.com/report/global-industrial-artificial-intelligence-market-2018-by-manufacturers-299826.html

Following Queries Are Answered In The Report:-

Moreover, the global Industrial Artificial Intelligence market report calculates production and consumption rates. Upstream raw material suppliers and downstream buyers of the industry are described. A competitive dashboard and company share analysis are also covered. The report closes with research findings, channel analysis covering dealers, retailers and merchants, conclusions, data sources, and an appendix.

Customization of the Report: This report can be customized to meet the client's requirements. Please connect with our sales team ([emailprotected]), who will ensure that you get a report that suits your needs. You can also get in touch with our executives on +1-201-465-4211 to share your research requirements.

This post was originally published on Food and Beverage Herald

View post:

Global Industrial Artificial Intelligence Market 2019 Research by Business Analysis, Growth Strategy and Industry Development to 2024 - Food &...

IIT Hyderabad to collaborate with Telangana government on artificial intelligence – India Today

IIT Hyderabad will also assist Telangana State in developing a strategy for artificial intelligence.

Indian Institute of Technology (IIT) Hyderabad is going to collaborate with the Government of Telangana on research in artificial intelligence. The institute is partnering with the Information Technology, Electronics and Communication (ITE&C) Department, Government of Telangana, to build and identify quality datasets, along with third parties such as industry.

They will also work on education and training, preparing content and curriculum for AI courses to be delivered to college students along with industry participants.

The MoU was signed by BS Murty, Director, IIT Hyderabad, and Jayesh Ranjan, IAS, Principal Secretary to Government of Telangana, Departments of Information Technology (IT) and Industries and Commerce (I&C) during an event held on January 2 as part of '2020: Declaring Telangana's Year of AI' initiative. Several other MoUs with other organizations were also signed by the Government of Telangana during this occasion.

The Telangana government declared 2020 the 'Year of Artificial Intelligence' with the objective of promoting AI's use in sectors ranging from urban transportation and healthcare to agriculture. As part of this collaboration, the ITE&C Department aims to develop the ecosystem for the industry and to leverage emerging technologies to improve service delivery.

IIT Hyderabad will also assist the Telangana State in developing a strategy for AI/HPC (Artificial Intelligence / High-Performance Computing) infrastructure for various state needs and provide technology mentorship to identified partners for exploring and building AI PoCs (Proofs of Concept).

The Telangana State Information Technology, Electronics and Communication Department (ITE&C Department) is a Telangana Government department with a mandate to promote the use of Information Technology (IT), act as a promoter and facilitator in the field of Information Technology in the state, and build an IT-driven continuum of Government services.

The vision of the ITE&C department is to leverage IT not only for effective and efficient governance, but also for sustainable economic development and inclusive social development. Its mission is to facilitate collaborative and innovative IT solutions, and to plan for future growth while protecting and enhancing the quality of life.

Read: IIT Hyderabad researcher finds people from rural Bihar migrate to urban areas but do not settle

Also read: IIT Hyderabad researchers unravel working of protein that repairs damaged DNA

Read the original post:

IIT Hyderabad to collaborate with Telangana government on artificial intelligence - India Today

THE AI IN TRANSPORTATION REPORT: How automakers can use artificial intelligence to cut costs, open new revenue – Business Insider India

This is a preview of a research report from Business Insider Intelligence. To learn more about Business Insider Intelligence, click here. Current subscribers can log in and read the report here.

New technology is disrupting legacy automakers' business models and dampening consumer demand for purchasing vehicles. Tech-mediated models of transportation, like ride-hailing, for instance, are presenting would-be car owners with alternatives to purchasing vehicles.

In fact, a study by ride-hailing giant Lyft found that in 2017, almost 250,000 of its passengers sold their own vehicle or abandoned the idea of replacing their current car due to the availability of ride-hailing services.

Turning to AI will enable automakers to take advantage of what will amount to billions of dollars in added value. For example, self-driving technology will present a $556 billion market by 2026, growing at a 39% CAGR from $54 billion in 2019, per Allied Market Research.

But firms face some major hurdles when integrating AI into their operations. Many companies are not presently equipped to begin producing AI-based solutions, which often require a specialized workforce, new infrastructure, and updated security protocol. As such, it's unsurprising that the main barriers to AI adoption are high costs, lack of talent, and lack of trust. Automakers must overcome these barriers to succeed with AI-based projects.

In The AI In Transportation Report, Business Insider Intelligence will discuss the forces driving transportation firms to AI, the market value of the technology across segments of the industry, and the potential barriers to its adoption. We will also show how some of the leading companies in the space have successfully overcome those barriers and are using AI to adapt to the digital age.

The report draws out several key takeaways along the way.

In full, the report covers each of these areas in depth.

The choice is yours. But however you decide to acquire this report, you've given yourself a powerful advantage in your understanding of AI in transportation.

Go here to see the original:

THE AI IN TRANSPORTATION REPORT: How automakers can use artificial intelligence to cut costs, open new revenue - Business Insider India

Revisiting the rise of A.I.: How far has artificial intelligence come since 2010? – Digital Trends

2010 doesn't seem all that long ago. Facebook was already a giant, time-consuming leviathan; smartphones and the iPad were a daily part of people's lives; The Walking Dead was a big hit on televisions across America; and the most talked-about popular musical artists were the likes of Taylor Swift and Justin Bieber. So pretty much like life as we enter 2020, then? Perhaps in some ways.

One place where things most definitely have moved on in leaps and bounds, however, is the artificial intelligence front. Over the past decade, A.I. has made some huge advances, both technically and in the public consciousness, that mark this out as one of the most important ten-year stretches in the field's history. What have been the biggest advances? Funny you should ask; I've just written a list on exactly that topic.

To most people, few things say "A.I. is here" quite like seeing an artificial intelligence defeat two champion Jeopardy! players on prime-time television. That's exactly what happened in 2011, when IBM's Watson computer trounced Brad Rutter and Ken Jennings, the two highest-earning American game show contestants of all time, at the popular quiz show.

It's easy to dismiss attention-grabbing public displays of machine intelligence as being more about hype-driven spectacles than serious, objective demonstrations. What IBM had developed was seriously impressive, though. Unlike a game such as chess, which features rigid rules and a limited board, Jeopardy! is less easily predictable. Questions can be about anything and often involve complex wordplay, such as puns.

"I had been in A.I. classes and knew that the kind of technology that could beat a human at Jeopardy! was still decades away," Jennings told me when I was writing my book Thinking Machines. "Or at least I thought that it was." At the end of the game, Jennings scribbled a sentence on his answer board and held it up for the cameras. It read: "I for one welcome our new computer overlords."

October 2011 is most widely remembered by Apple fans as the month in which company co-founder and CEO Steve Jobs passed away at the age of 56. However, it was also the month in which Apple unveiled its A.I. assistant Siri with the iPhone 4S.

The concept of an A.I. you could communicate with via spoken words had been dreamed about for decades. Former Apple CEO John Sculley had, remarkably, predicted a Siri-style assistant back in the 1980s, getting the date of Siri right almost down to the month. But Siri was still a remarkable achievement. True, its initial implementation had some glaring weaknesses, and Apple arguably has never managed to offer a flawless smart assistant. Nonetheless, it introduced a new type of technology that was quickly pounced on for everything from Google Assistant to Microsoft's Cortana to Samsung's Bixby.

Of all the tech giants, Amazon has arguably done the most to advance the A.I. assistant in the years since. Its Alexa-powered Echo speakers have not only shown the potential of these A.I. assistants; they've demonstrated that they're compelling enough to exist as standalone pieces of hardware. Today, voice-based assistants are so commonplace they barely even register. Ten years ago, most people had never used one.

Deep learning neural networks are not wholly an invention of the 2010s. The basis for today's artificial neural networks traces back to a 1943 paper by researchers Warren McCulloch and Walter Pitts. A lot of the theoretical work underpinning neural nets, such as the breakthrough backpropagation algorithm, was pioneered in the 1980s. Some of the advances that led directly to modern deep learning were carried out in the first years of the 2000s, with work like Geoff Hinton's advances in unsupervised learning.

But the 2010s are the decade the technology went mainstream. In 2010, researchers George Dahl and Abdel-rahman Mohamed demonstrated that deep learning speech recognition tools could beat what were then the state-of-the-art industry approaches. After that, the floodgates were opened. From image recognition (for example, Jeff Dean and Andrew Ng's famous paper on identifying cats) to machine translation, barely a week went by when the world wasn't reminded just how powerful deep learning could be.

It wasn't just a good PR campaign either, the way an unknown artist might finally stumble across fame and fortune after working the same way in obscurity for decades. The 2010s are the decade in which the quantity of data exploded, making it possible to leverage deep learning in a way that simply wouldn't have been possible at any previous point in history.

Of all the companies doing amazing A.I. work, DeepMind deserves its own entry on this list. Founded in September 2010, DeepMind was a deep learning company most people hadn't heard of until it was bought by Google for what seemed like a bonkers $500 million in January 2014. DeepMind has made up for it in the years since, though.

Much of DeepMind's most public-facing work has involved the development of game-playing A.I.s, capable of mastering computer games ranging from classic Atari titles like Breakout and Space Invaders (with the help of some handy reinforcement learning algorithms) to, more recently, attempts at StarCraft II and Quake III Arena.
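
DeepMind's Atari work combined Q-learning with deep neural networks (the DQN algorithm). Stripped of the network, the underlying "improve by playing" idea fits in a few lines: keep a table of action values and nudge each entry toward the observed reward plus the best value available afterwards. The sketch below uses a toy environment invented for illustration; DQN replaces the table with a deep network and adds tricks like experience replay.

```python
# Minimal tabular Q-learning: the core learning rule behind DeepMind's
# game agents, run on a toy environment invented for illustration.
import random

n_states, n_actions = 5, 2
Q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma, eps = 0.1, 0.99, 0.1   # learning rate, discount, exploration rate

def step(state, action):
    """Toy environment: action 1 moves right; reaching the last state pays 1."""
    nxt = min(state + action, n_states - 1)
    reward = 1.0 if nxt == n_states - 1 else 0.0
    return nxt, reward, nxt == n_states - 1

for episode in range(500):
    s, done = 0, False
    while not done:
        if random.random() < eps:                          # explore occasionally
            a = random.randrange(n_actions)
        else:                                              # otherwise act greedily
            a = max(range(n_actions), key=lambda act: Q[s][act])
        nxt, r, done = step(s, a)
        # Nudge Q[s][a] toward reward plus discounted best future value.
        Q[s][a] += alpha * (r + gamma * max(Q[nxt]) - Q[s][a])
        s = nxt

print([[round(q, 2) for q in row] for row in Q])   # "move right" comes to dominate
```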

Demonstrating the core tenet of machine learning, these game-playing A.I.s got better the more they played. In the process, they were able to form new strategies that, in some cases, even their human creators weren't familiar with. All of this work helped set the stage for DeepMind's biggest success of all…

As this list has already shown, there is no shortage of examples when it comes to A.I. beating human players at a variety of games. But Go, a Chinese board game in which the aim is to surround more territory than your opponent, was different. Unlike other games, in which players could be beaten simply by number-crunching faster than humans are capable of, in Go the total number of allowable board positions is mind-bogglingly staggering: far more than the total number of atoms in the universe. That makes brute-force attempts to calculate answers virtually impossible, even using a supercomputer.
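
The arithmetic behind that claim is easy to check. Each of the 361 points on a 19x19 board is empty, black, or white, giving an upper bound of 3^361 arrangements; the exact number of legal positions, computed by John Tromp in 2016, is about 2.08 x 10^170, against a commonly cited estimate of roughly 10^80 atoms in the observable universe.

```python
# Back-of-the-envelope check on the Go numbers.
import math

print(361 * math.log10(3))   # ~172.2, so 3**361 is roughly 10**172
legal_positions = 2.08e170   # Tromp's 2016 count of legal 19x19 positions
atoms_estimate = 1e80        # common estimate for the observable universe
print(legal_positions / atoms_estimate)   # ~2e90: positions dwarf atoms
```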

Nonetheless, DeepMind managed it. In October 2015, AlphaGo became the first computer Go program to beat a human professional Go player without handicap on a full-sized 19x19 board. The next year, 60 million people tuned in live to see the world's greatest Go player, Lee Sedol, lose to AlphaGo. By the end of the series, AlphaGo had beaten Sedol four games to one.

In November 2019, Sedol announced his intention to retire as a professional Go player. He cited A.I. as the reason. "Even if I become the number one, there is an entity that cannot be defeated," he said. Imagine if LeBron James announced he was quitting basketball because a robot was better at shooting hoops than he was. That's the equivalent!

In the first years of the twenty-first century, the idea of an autonomous car seemed like it would never move beyond science fiction. In MIT and Harvard economists Frank Levy and Richard Murnane's 2004 book The New Division of Labor, driving a vehicle was described as a task too complex for machines to carry out. "Executing a left turn against oncoming traffic involves so many factors that it is hard to imagine discovering the set of rules that can replicate a driver's behavior," they wrote.

In 2010, Google officially unveiled its autonomous car program, now called Waymo. Over the decade that followed, dozens of other companies (including tech heavy hitters like Apple) have started to develop their own self-driving vehicles. Collectively these cars have driven thousands of miles on public roads, apparently proving less accident-prone than humans in the process.

Foolproof full autonomy is still a work-in-progress, but this was nonetheless one of the most visible demonstrations of A.I. in action during the 2010s.

The dirty secret of much of today's A.I. is that its core algorithms, the technologies that make it tick, were actually developed several decades ago. What's changed is the processing power available to run these algorithms and the massive amounts of data they have to train on. Hearing about a wholly original approach to building A.I. tools is therefore surprisingly rare.

Generative adversarial networks certainly qualify. Often abbreviated to GANs, this class of machine learning system was invented by Ian Goodfellow and colleagues in 2014. No less an authority than A.I. expert Yann LeCun has described it as "the coolest idea in machine learning in the last twenty years."

At least conceptually, the theory behind GANs is pretty straightforward: take two cutting-edge artificial neural networks and pit them against one another. One network creates something, such as a generated image. The other network then attempts to work out which images are computer-generated and which are not. Over time, the generative adversarial process allows the generator network to become sufficiently good at creating images that it can successfully fool the discriminator network every time.
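
As a deliberately tiny illustration, here is a minimal GAN sketch, assuming PyTorch. Instead of images, the generator learns to mimic samples from a one-dimensional Gaussian, which keeps the adversarial loop visible in a few lines; the architectures and hyperparameters are arbitrary choices for the demo, not anyone's production recipe.

```python
# Minimal GAN: a generator learns to mimic N(4, 1.5) while a
# discriminator learns to tell real samples from generated ones.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))                 # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())   # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 1.5 + 4.0        # samples from the target distribution
    fake = G(torch.randn(64, 8))                 # generator output from random noise

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: adjust G so that D mistakes fakes for real.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

samples = G(torch.randn(1000, 8)).detach()
print(samples.mean().item(), samples.std().item())   # drifts toward 4.0 and 1.5
```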

The power of generative adversarial networks was seen most widely when a collective of artists used them to create original paintings developed by A.I. One of the results sold for a shockingly large amount of money at a Christie's auction in 2018.

Original post:

Revisiting the rise of A.I.: How far has artificial intelligence come since 2010? - Digital Trends

Artificial Intelligence Identifies Previously Unknown Features Associated with Cancer Recurrence – Imaging Technology News

December 27, 2019 - Artificial intelligence (AI) technology developed by the RIKEN Center for Advanced Intelligence Project (AIP) in Japan has successfully found features in pathology images from human cancer patients, without annotation, that could be understood by human doctors. Further, the AI identified features relevant to cancer prognosis that were not previously noted by pathologists, leading to a higher accuracy of prostate cancer recurrence prediction compared to pathologist-based diagnosis. Combining the predictions made by the AI with predictions by human pathologists led to an even greater accuracy.

According to Yoichiro Yamamoto, M.D., Ph.D., the first author of the study published in Nature Communications, "This technology could contribute to personalized medicine by making highly accurate prediction of cancer recurrence possible by acquiring new knowledge from images. It could also contribute to understanding how AI can be used safely in medicine by helping to resolve the issue of AI being seen as a 'black box.'"

The research group led by Yamamoto and Go Kimura, in collaboration with a number of university hospitals in Japan, adopted an approach called "unsupervised learning." As long as humans teach the AI, it is not possible to acquire knowledge beyond what is currently known. Rather than being "taught" medical knowledge, the AI was asked to learn using unsupervised deep neural networks, known as autoencoders, without being given any medical knowledge. The researchers developed a method for translating the features found by the AI, initially just numbers, into high-resolution images that can be understood by humans.
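
The study's networks and training setup are far larger, but the core mechanism of an autoencoder is brief to sketch: compress the input to a small code, reconstruct the input from that code, and train on reconstruction error alone, so no diagnostic labels are ever involved. A minimal sketch, assuming PyTorch and using random arrays as stand-ins for pathology patches:

```python
# Minimal autoencoder: learn 32-dimensional codes for 32x32 patches by
# reconstruction alone; the codes play the role of AI-found "features".
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 128), nn.ReLU(), nn.Linear(128, 32))
decoder = nn.Sequential(nn.Linear(32, 128), nn.ReLU(), nn.Linear(128, 32 * 32))
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
loss_fn = nn.MSELoss()

patches = torch.rand(512, 1, 32, 32)   # stand-in for unannotated pathology patches

for epoch in range(20):
    codes = encoder(patches)                     # no labels anywhere in training
    recon = decoder(codes).view(-1, 1, 32, 32)
    loss = loss_fn(recon, patches)               # learn by reconstructing the input
    opt.zero_grad(); loss.backward(); opt.step()

print(loss.item())   # reconstruction error falls as training proceeds
```

Downstream, codes produced by the encoder can then be correlated with outcomes such as recurrence, which is how features found this way get evaluated.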

To perform this feat, the group acquired 13,188 whole-mount pathology slide images of the prostate from Nippon Medical School Hospital (NMSH). The amount of data was enormous, equivalent to approximately 86 billion image patches (sub-images divided for deep neural networks), and the computation was performed on AIP's powerful RAIDEN supercomputer.

The AI learned from 11 million image patches of pathology images without diagnostic annotation. The features it found included cancer diagnostic criteria that have been used worldwide, based on the Gleason score, but also features involving the stroma (the connective tissue supporting an organ) in non-cancer areas that experts were not aware of. In order to evaluate these AI-found features, the research group verified the performance of recurrence prediction using the remaining cases from NMSH (internal validation). The group found that the features discovered by the AI were more accurate (AUC = 0.820) than predictions based on the human-established cancer criteria developed by pathologists, the Gleason score (AUC = 0.744). Furthermore, combining the AI-found features with the human-established criteria predicted recurrence more accurately than either method alone (AUC = 0.842). The group confirmed the results using another dataset of 2,276 whole-mount pathology images (10 billion image patches) from St. Marianna University Hospital and Aichi Medical University Hospital (external validation).
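
As a rough illustration of how such an AUC comparison is typically run, here is a hedged sketch using scikit-learn on synthetic scores. The data and variable names are invented for illustration and are not the study's.

```python
# Hedged sketch: comparing two risk scores, and their combination, by AUC.
# Synthetic stand-ins only; in the study these would be AI-found features
# versus the pathologist-established Gleason score.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1000
recurred = rng.integers(0, 2, n)               # 1 = cancer recurred
ai_score = recurred + rng.normal(0, 1.0, n)    # stand-in AI-found feature
gleason = recurred + rng.normal(0, 1.4, n)     # stand-in human criterion (noisier)

print("AI features alone:", roc_auc_score(recurred, ai_score))
print("Gleason alone:    ", roc_auc_score(recurred, gleason))

# Combine both predictors with a simple logistic model, as one way to test
# whether the two sources of information are complementary.
X = np.column_stack([ai_score, gleason])
combined = LogisticRegression().fit(X, recurred).predict_proba(X)[:, 1]
print("Combined:         ", roc_auc_score(recurred, combined))
```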

"I was very happy," said Yamamoto, "to discover that the AI was able to identify cancer on its own from unannotated pathology images. I was extremely surprised to see that AI found features that can be used to predict recurrence that pathologists had not identified."

He continued, "We have shown that AI can automatically acquire human-understandable knowledge from diagnostic annotation-free histopathology images. This 'newborn' knowledge could be useful for patients by allowing highly-accurate predictions of cancer recurrence. What is very nice is that we found that combining the AI's predictions with those of a pathologist increased the accuracy even further, showing that AI can be used hand-in-hand with doctors to improve medical care. In addition, the AI can be used as a tool to discover characteristics of diseases that have not been noted so far, and since it does not require human knowledge, it could be used in other fields outside medicine."

For more information: www.riken.jp/en/research/labs/aip/

Artificial intelligence is helping us talk to animals (yes, really) – Wired.co.uk

Each time any of us uses a tool such as Gmail, where there's a powerful agent to help correct our spelling and suggest sentence endings, there's an AI machine in the background, steadily getting better and better at understanding language. Sentence structures are parsed, word choices understood, idioms recognised.

That exact capability could, in 2020, grant us the ability to speak with other large animals. Really. Maybe even sooner than brain-computer interfaces take the stage.

Our AI-enhanced abilities to decode languages have reached a point where they could start to parse languages not spoken by anyone alive. Recently, researchers from MIT and Google applied these abilities to the ancient scripts Linear B and Ugaritic (a precursor of Hebrew) with reasonable success (no luck so far with the older, as-yet undeciphered Linear A).

First, word-to-word relations for a specific language are mapped using vast databases of text. The system searches texts to see how often each word appears next to every other word. This pattern of appearances is a unique signature that defines the word in a multidimensional parameter space. Researchers estimate that languages, all languages, can best be described as having about 600 independent dimensions of relationships, where each word-word relationship can be seen as a vector in this space. This vector acts as a powerful constraint on how the word can appear in any translation the machine comes up with.

These vectors obey some simple rules. For example: king - man + woman = queen. Any sentence can be described as a set of vectors that in turn form a trajectory through the word space.
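
As a toy illustration of that arithmetic, the sketch below uses made-up four-dimensional embeddings (real systems learn hundreds of dimensions from text) and finds that the vector nearest to king - man + woman, by cosine similarity, is queen.

```python
# Toy word-vector arithmetic with invented 4-D embeddings.
import numpy as np

emb = {
    "king":  np.array([0.9, 0.8, 0.1, 0.4]),
    "man":   np.array([0.1, 0.8, 0.1, 0.3]),
    "woman": np.array([0.1, 0.1, 0.9, 0.3]),
    "queen": np.array([0.9, 0.1, 0.9, 0.4]),
}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# king - man + woman should land nearest to queen.
target = emb["king"] - emb["man"] + emb["woman"]
best = max(emb, key=lambda w: cosine(emb[w], target))
print(best)  # -> queen
```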

These relationships persist even when a language has multiple words for related concepts: the famed near-100 words Inuits have for snow will all sit in similar regions of the space, because each time someone talks about snow, it will be in a similar linguistic context.

Take a leap. Imagine that whale songs communicate in a word-like structure. Then, what if the relationships whales have between their ideas have dimensional relationships similar to those we see in human languages?

That means we should be able to map key elements of whale songs to dimensional spaces, and thus to comprehend what whales are talking about, and perhaps to talk to, and hear back from, them. Remember: some whales have brain volumes three times larger than those of adult humans, larger cortical areas, and lower but comparable neuron counts. African elephants have three times as many neurons as humans, though in very different distributions than are seen in our own brains. It seems reasonable to assume that the other large mammals on Earth, at the very least, have thinking, communicating and learning attributes we can connect with.

What are the key elements of whale songs and of elephant sounds? Phonemes? Blocks of repeated sounds? Tones? Nobody knows yet, but at least the journey has begun. Projects such as the Earth Species Project aim to put the tools of our time, particularly artificial intelligence and all that we have learned in using computers to understand our own languages, to the awesome task of hearing what animals have to say to each other, and to us.

There is something deeply comforting in the thought that AI language tools could do something so beautiful: going beyond completing our emails and putting ads in front of us, to knitting together all thinking species. That, we can perhaps all agree, is a better, and perhaps nearer-term, ideal to reach than brain-computer communications. The beauty of communicating with other species will then be joined to the market ideal of talking to our pet dogs. (Cats may remain beyond reach.)

Mary Lou Jepsen is the founder and CEO of Openwater. John Ryan, her husband, is a former partner at Monitor Group.

AI IN BANKING: Artificial intelligence could be a near $450 billion opportunity for banks – here are the strat – Business Insider India

Discussions, articles, and reports about the AI opportunity across the financial services industry continue to proliferate amid considerable hype around the technology, and for good reason: The aggregate potential cost savings for banks from AI applications is estimated at $447 billion by 2023, with the front and middle office accounting for $416 billion of that total, per Autonomous Next research seen by Business Insider Intelligence.

Most banks (80%) are highly aware of the potential benefits presented by AI, per an OpenText survey of financial services professionals. In fact, many banks are planning to deploy solutions enabled by AI: 75% of respondents at banks with over $100 billion in assets say they're currently implementing AI strategies, compared with 46% at banks with less than $100 billion in assets, per a UBS Evidence Lab report seen by Business Insider Intelligence. Certain AI use cases have already gained prominence across banks' operations, with chatbots in the front office and anti-payments fraud in the middle office the most mature.

The companies mentioned in this report are: Capital One, Citi, HSBC, JPMorgan Chase, Personetics, Quantexa, and U.S. Bank

Artificial Intelligence Is Rushing Into Patient Care – And Could Raise Risks – Scientific American

Health products powered by artificial intelligence, or AI, are streaming into our lives, from virtual doctor apps to wearable sensors and drugstore chatbots.

IBM boasted that its AI could "outthink cancer." Others say computer systems that read X-rays will make radiologists obsolete.

"There's nothing that I've seen in my 30-plus years studying medicine that could be as impactful and transformative as AI," said Eric Topol, a cardiologist and executive vice president of Scripps Research in La Jolla, Calif. AI can help doctors interpret MRIs of the heart, CT scans of the head and photographs of the back of the eye, and could potentially take over many mundane medical chores, freeing doctors to spend more time talking to patients, Topol said.

Even the U.S. Food and Drug Administration, which has approved more than 40 AI products in the past five years, says "the potential of digital health is nothing short of revolutionary."

Yet many health industry experts fear AI-based products won't be able to match the hype. Many doctors and consumer advocates fear that the tech industry, which lives by the mantra "fail fast and fix it later," is putting patients at risk, and that regulators aren't doing enough to keep consumers safe.

Early experiments in AI provide reason for caution, said Mildred Cho, a professor of pediatrics at Stanford's Center for Biomedical Ethics.

Systems developed in one hospital often flop when deployed in a different facility, Cho said. Software used in the care of millions of Americans has been shown to discriminate against minorities. And AI systems sometimes learn to make predictions based on factors that have less to do with disease than with the brand of MRI machine used, the time a blood test is taken, or whether a patient was visited by a chaplain. In one case, AI software incorrectly concluded that people with pneumonia were less likely to die if they had asthma, an error that could have led doctors to deprive asthma patients of the extra care they need.

"It's only a matter of time before something like this leads to a serious health problem," said Steven Nissen, chairman of cardiology at the Cleveland Clinic.

Medical AI, which pulled in $1.6 billion in venture capital funding in the third quarter alone, is "nearly at the peak of inflated expectations," concluded a July report from the research company Gartner. "As the reality gets tested, there will likely be a rough slide into the trough of disillusionment."

That reality check could come in the form of disappointing results when AI products are ushered into the real world. Even Topol, the author of Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again, acknowledges that many AI products are little more than hot air. "It's a mixed bag," he said.

Experts such as Bob Kocher, a partner at the venture capital firm Venrock, are more blunt. Most AI products have little evidence to support them, Kocher said. Some risks won't become apparent until an AI system has been used by large numbers of patients. "We're going to keep discovering a whole bunch of risks and unintended consequences of using AI on medical data," Kocher said.

None of the AI products sold in the U.S. have been tested in randomized clinical trials, the strongest source of medical evidence, Topol said. The first and only randomized trial of an AI system, which found that colonoscopy with computer-aided diagnosis found more small polyps than standard colonoscopy, was published online in October.

Few tech startups publish their research in peer-reviewed journals, which allow other scientists to scrutinize their work, according to a January article in the European Journal of Clinical Investigation. Such "stealth research," described only in press releases or promotional events, often overstates a company's accomplishments.

And although software developers may boast about the accuracy of their AI devices, experts note that AI models are mostly tested on computers, not in hospitals or other medical facilities. Using unproven software may make patients into unwitting guinea pigs, said Ron Li, medical informatics director for AI clinical integration at Stanford Health Care.

AI systems that learn to recognize patterns in data are often described as "black boxes" because even their developers don't know how they have reached their conclusions. Given that AI is so new, and many of its risks unknown, the field needs careful oversight, said Pilar Ossorio, a professor of law and bioethics at the University of Wisconsin-Madison.

Yet the majority of AI devices don't require FDA approval.

"None of the companies that I have invested in are covered by the FDA regulations," Kocher said.

Legislation passed by Congress in 2016 and championed by the tech industry exempts many types of medical software from federal review, including certain fitness apps, electronic health records and tools that help doctors make medical decisions.

There's been little research on whether the 320,000 medical apps now in use actually improve health, according to a report on AI published Dec. 17 by the National Academy of Medicine.

"Almost none of the [AI] stuff marketed to patients really works," said Ezekiel Emanuel, professor of medical ethics and health policy in the Perelman School of Medicine at the University of Pennsylvania.

The FDA has long focused its attention on devices that pose the greatest threat to patients. And consumer advocates acknowledge that some devices, such as ones that help people count their daily steps, need less scrutiny than ones that diagnose or treat disease.

Some software developers dont bother to apply for FDA clearance or authorization, even when legally required, according to a 2018 study in Annals of Internal Medicine.

Industry analysts say that AI developers have little interest in conducting expensive and time-consuming trials. "It's not the main concern of these firms to submit themselves to rigorous evaluation that would be published in a peer-reviewed journal," said Joachim Roski, a principal at Booz Allen Hamilton, a technology consulting firm, and co-author of the National Academy's report. "That's not how the U.S. economy works."

But Oren Etzioni, chief executive officer at the Allen Institute for AI in Seattle, said AI developers have a financial incentive to make sure their medical products are safe.

"If failing fast means a whole bunch of people will die, I don't think we want to fail fast," Etzioni said. "Nobody is going to be happy, including investors, if people die or are severely hurt."

Relaxed AI Standards At The FDA

The FDA has come under fire in recent years for allowing the sale of dangerous medical devices, which have been linked by the International Consortium of Investigative Journalists to 80,000 deaths and 1.7 million injuries over the past decade.

Many of these devices were cleared for use through a controversial process called the 510(k) pathway, which allows companies to market moderate-risk products with no clinical testing as long as they're deemed similar to existing devices. In 2011, a committee of the National Academy of Medicine concluded the 510(k) process was so fundamentally flawed that the FDA should throw it out and start over.

Instead, the FDA is using the process to greenlight AI devices.

Of the 14 AI products authorized by the FDA in 2017 and 2018, 11 were cleared through the 510(k) process, according to a November article in JAMA. None of these appear to have had new clinical testing, the study said. The FDA cleared an AI device designed to help diagnose liver and lung cancer in 2018 based on its similarity to imaging software approved 20 years earlier. That software had itself been cleared because it was deemed substantially equivalent to products marketed before 1976.

AI products cleared by the FDA today are largely "locked," so that their calculations and results will not change after they enter the market, said Bakul Patel, director for digital health at the FDA's Center for Devices and Radiological Health. The FDA has not yet authorized "unlocked" AI devices, whose results could vary from month to month in ways that developers cannot predict.

To deal with the flood of AI products, the FDA is testing a radically different approach to digital device regulation, focusing on evaluating companies, not products.

The FDA's pilot pre-certification program, launched in 2017, is designed to reduce the time and cost of market entry for software developers, imposing the "least burdensome" system possible. FDA officials say they want to keep pace with AI software developers, who update their products much more frequently than makers of traditional devices, such as X-ray machines.

Scott Gottlieb said in 2017, while he was FDA commissioner, that government regulators need to make sure their approach to innovative products "is efficient and that it fosters, not impedes, innovation."

Under the plan, the FDA would pre-certify companies that demonstrate "a culture of quality and organizational excellence," which would allow them to provide less upfront data about devices.

Pre-certified companies could then release devices with a streamlined review or no FDA review at all. Once products are on the market, companies will be responsible for monitoring their own products safety and reporting back to the FDA. Nine companies have been selected for the pilot: Apple, FitBit, Samsung, Johnson & Johnson, Pear Therapeutics, Phosphorus, Roche, Tidepool and Verily Life Sciences.

High-risk products, such as software used in pacemakers, will still get a comprehensive FDA evaluation. "We definitely don't want patients to be hurt," said Patel, who noted that devices cleared through pre-certification can be recalled if needed. "There are a lot of guardrails still in place."

But research shows that even low- and moderate-risk devices have been recalled due to serious risks to patients, said Diana Zuckerman, president of the National Center for Health Research. "People could be harmed because something wasn't required to be proven accurate or safe before it is widely used."

Johnson & Johnson, for example, has recalled hip implants and surgical mesh.

In a series of letters to the FDA, the American Medical Association and others have questioned the wisdom of allowing companies to monitor their own performance and product safety.

"The honor system is not a regulatory regime," said Jesse Ehrenfeld, who chairs the physician group's board of trustees. In an October letter to the FDA, Sens. Elizabeth Warren (D-Mass.), Tina Smith (D-Minn.) and Patty Murray (D-Wash.) questioned the agency's ability to ensure company safety reports are "accurate, timely and based on all available information."

When Good Algorithms Go Bad

Some AI devices are more carefully tested than others.

An AI-powered screening tool for diabetic eye disease was studied in 900 patients at 10 primary care offices before being approved in 2018. The manufacturer, IDx Technologies, worked with the FDA for eight years to get the product right, said Michael Abramoff, the company's founder and executive chairman.

The test, sold as IDx-DR, screens patients for diabetic retinopathy, a leading cause of blindness, and refers high-risk patients to eye specialists, who make a definitive diagnosis.

IDx-DR is the first "autonomous" AI product, one that can make a screening decision without a doctor. The company is now installing it in primary care clinics and grocery stores, where it can be operated by employees with a high school diploma. Abramoff's company has taken the unusual step of buying liability insurance to cover any patient injuries.

Yet some AI-based innovations intended to improve care have had the opposite effect.

A Canadian company, for example, developed AI software to predict a person's risk of Alzheimer's based on their speech. Predictions were more accurate for some patients than others: difficulty finding the right word may be due to unfamiliarity with English rather than to cognitive impairment, said co-author Frank Rudzicz, an associate professor of computer science at the University of Toronto.

Doctors at New York's Mount Sinai Hospital hoped AI could help them use chest X-rays to predict which patients were at high risk of pneumonia. Although the system made accurate predictions from X-rays shot at Mount Sinai, the technology flopped when tested on images taken at other hospitals. Eventually, researchers realized the computer had merely learned to tell the difference between that hospital's portable chest X-rays, taken at a patient's bedside, and those taken in the radiology department. Doctors tend to use portable chest X-rays for patients too sick to leave their rooms, so it's not surprising that these patients had a greater risk of lung infection.

DeepMind, a company owned by Google, has created an AI-based mobile app that can predict which hospitalized patients will develop acute kidney failure up to 48 hours in advance. A blog post on the DeepMind website described the system, used at a London hospital, as a "game changer." But the AI system also produced two false alarms for every correct result, according to a July study in Nature. That may explain why patients' kidney function didn't improve, said Saurabh Jha, associate professor of radiology at the Hospital of the University of Pennsylvania. Any benefit from early detection of serious kidney problems may have been diluted by a high rate of overdiagnosis, in which the AI system flagged borderline kidney issues that didn't need treatment, Jha said. Google had no comment in response to Jha's conclusions.

False positives can harm patients by prompting doctors to order unnecessary tests or withhold recommended treatments, Jha said. For example, a doctor worried about a patient's kidneys might stop prescribing ibuprofen, a generally safe pain reliever that poses a small risk to kidney function, in favor of an opioid, which carries a serious risk of addiction.

As these studies show, software with impressive results in a computer lab can founder when tested in real time, Stanford's Cho said. That's because diseases are more complex, and the health care system far more dysfunctional, than many computer scientists anticipate.

Many AI developers cull electronic health records because they hold huge amounts of detailed data, Cho said. But those developers often aren't aware that they're building atop a deeply broken system. Electronic health records were developed for billing, not patient care, and are filled with mistakes or missing data.

A KHN investigation published in March found sometimes life-threatening errors in patients' medication lists, lab tests and allergies.

In view of the risks involved, doctors need to step in to protect their patients' interests, said Vikas Saini, a cardiologist and president of the nonprofit Lown Institute, which advocates for wider access to health care.

"While it is the job of entrepreneurs to think big and take risks," Saini said, "it is the job of doctors to protect their patients."

Kaiser Health News (KHN) is a nonprofit news service covering health issues. It is an editorially independent program of the Kaiser Family Foundation that is not affiliated with Kaiser Permanente.
