
Category Archives: Ai

To Bridge the AI Ethics Gap, We Must First Acknowledge It’s There – Datanami

Posted: April 9, 2021 at 2:41 am

(bookzv/Shutterstock)

Companies are adopting AI solutions at unprecedented rates, but ethical worries continue to dog the rollouts. While there are no established standards for AI ethics, a common set of guidelines is beginning to emerge to help bridge the gap between ethical principles and AI implementations. Unfortunately, a general hesitancy to even discuss the problem could slow efforts to find a solution.

As the AI Ethics Chief for Boston Consulting Group, Steve Mills talks with a lot of companies about their ethical concerns and their ethics programs. While they're not slowing down their AI rollouts because of ethics concerns at this time, Mills says, they are grappling with the issue and searching for the best way to develop AI systems without violating ethical principles.

"What we continue seeing here is this gap, what we started calling the responsible AI gap, that gap from principle to action," Mills says. "They want to do the right thing, but no one really knows how. There is no clear roadmap or framework for how you build an AI ethics program, or a responsible AI program. Folks just don't know."

As a management consulting firm, Boston Consulting Group is well positioned to help companies with this problem. Mills and his BCG colleagues have helped companies develop AI programs. Out of that experience, they recently came up with a general AI ethics program that others can use as a framework to get started.

It has six parts, including:

The most important thing a company can do to get started is to appoint somebody to be responsible for the AI ethics program, Mills says. That person can come from inside the company or outside of it, he says. Regardless, he or she will need to be able to drive the vision and strategy of ethics, but also understand the technology. Finding such a person will not be easy (indeed, just finding AI ethicists, let alone executives who can take this role, is no easy task).

"Ultimately, you're going to need a team. You're not going to be successful with just one person," Mills says. "You need a wide diversity of skill sets. You need bundled into that group the strategists, the technologists, the ethicists, marketing, all of it bundled together. Ultimately, this is really about driving a culture change."

There are a handful of companies that have taken a leadership role in paving the way forward in AI ethics. According to Mills, the software companies Microsoft, Salesforce, and Autodesk, as well as Spanish telecom Telefónica, have developed solid programs to define what AI ethics means to them and developed systems to enforce it within their companies.

"And BCG of course," he says, "but I'm biased."

As the Principal Architect of the Ethical AI Practice at Salesforce, Kathy Baxter is one of the foremost authorities on AI ethics. Her decisions impact how Salesforce customers approach the AI ethical quandary, which in turn can impact millions of end users around the world.

So you might expect Baxter to say that Salesforce's algorithms are bias-free, that they always make fair decisions, and never take into account factors based on controversial data.

You would be mistaken.

"You can never say that a model is 100% bias-free. It's just statistically not possible," Baxter says. "If it does say that there is zero bias, you're probably overfitting your model. Instead, what we can say is that this is the type of bias that I looked for."

To prevent bias, model developers must be conscious of the specific types of bias they're trying to prevent, Baxter says. That means, if you're looking to avoid identity bias in a sentiment analysis model, for example, then you should be on the lookout for how different terms, such as "Muslim," "feminist," or "Christian," affect the results.
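
The kind of check Baxter describes can be sketched as a counterfactual "term swap" test: hold a sentence template fixed, vary only the identity term, and compare the model's outputs. This is an illustrative sketch, not Salesforce's implementation; the toy `score` function stands in for whatever sentiment model is under audit, and its word lists are invented.

```python
def score(text):
    """Toy stand-in for a real sentiment model: +1 per positive word,
    -1 per negative word. A real audit would call the model under test."""
    positive = {"great", "love", "wonderful"}
    negative = {"terrible", "hate", "awful"}
    words = [w.strip(".,!?") for w in text.lower().split()]
    return sum(w in positive for w in words) - sum(w in negative for w in words)

def identity_bias_gap(template, terms):
    """Largest score difference when only the identity term changes.
    A gap near zero across many templates suggests the terms are treated alike."""
    scores = [score(template.format(term=t)) for t in terms]
    return max(scores) - min(scores)

gap = identity_bias_gap(
    "As a {term}, I thought the film was great.",
    ["Muslim", "feminist", "Christian"],
)
```

A real audit would run many such templates and report the distribution of gaps against an agreed acceptance threshold, rather than a single sentence.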

(Vitalii Vodolazskyi/Shutterstock)

Other biases to be on the lookout for include gender bias, racial bias, and accent or dialect bias, Baxter says. Emerging best practices for AI ethics demand that practitioners devise ways to detect the specific types of bias that could impact their particular AI system, and take steps to counter those biases.

"What type of bias did you look for? How did you measure it?" Baxter tells Datanami. "And then what was the score? What is the actual safe or acceptable threshold of bias for you to say this is good enough to be released in the world?"

Baxter's is a more nuanced, and practical, view of AI ethics than one might get from textbooks (if there are any on the topic yet). She seems to recognize that you should accept from the outset that bias is everywhere in human society, and that it can never be fully eradicated. But we can hopefully eliminate the worst types of bias and still enable companies and their customers to reap the rewards that AI promises in the first place.

"You often hear people say, 'Oh, we should follow the Hippocratic Oath that says do no harm,'" Baxter says. "Well, that's not actually the true application in the medical or pharmaceutical industry, because if you said no harm, there would be no medical treatment. You could never do surgery, because you're doing harm to the body when you're cutting the body open. But the benefits outweigh the risks of doing nothing."

There are ethical pitfalls everywhere. For example, it's not just bad form to make business decisions based on somebody's race or ethnicity; it's also illegal. But the paradox is, unless you collect data about race or ethnicity, you don't know if those factors are sneaking into the model somehow, perhaps through a proxy like ZIP codes.

"You want to be able to run a story and see, are the outcomes different based on what someone's race is, or based on what someone's gender is?" Baxter says. "If it is, that's a real problem. If you just say, 'No, I don't even want to look at race, I'm just going to completely exclude that,' then it's very difficult to create fairness through unawareness."
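
The audit Baxter describes, checking whether outcomes differ by race or gender, can be sketched as a simple disparity calculation, assuming the protected attribute was collected alongside each decision. The group labels and records below are invented for illustration.

```python
from collections import defaultdict

def outcome_rates(records):
    """records: iterable of (group, approved) pairs -> approval rate per group."""
    totals = defaultdict(int)
    approved = defaultdict(int)
    for group, ok in records:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

# Hypothetical decisions tagged with a protected attribute.
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
rates = outcome_rates(decisions)
disparity = max(rates.values()) - min(rates.values())  # gap between groups
```

A large disparity does not by itself prove the model uses race, but it flags proxies, such as ZIP codes, that are worth investigating.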

The challenge is that this is all fairly new, and nobody has a solid roadmap to follow. Salesforce is working to build processes in Einstein Discovery to help its customers model data without incorporating negative bias, but even Salesforce is flying blind to a certain extent.

Kathy Baxter, Principal Architect of the Ethical AI Practice at Salesforce

The lack of established standards and regulations is the biggest challenge in AI ethics, Baxter says. Everyone is working in "kind of a sea of vagueness," she says.

She sees similarities to how the cybersecurity field developed in the 1980s. There was no security at first, and we all got hit by malware and viruses. That ultimately prompted the creation of a new discipline with new standards to guide its development. That process took years, and it will take years to hash out standards for AI ethics, she says.

"It's a game of whack-a-mole in security. I think it's going to be similar to AI," she says. "We're in this period right now where we're developing standards, we're developing regulations, and it will never be a solved problem. AI will continue evolving, and when it does, new risks will emerge, and so we will always be in a practice. It will never be a solved problem, but [we'll continue] learning and iterating. So I do think we can get there. We're just in an uncomfortable place right now because we don't have it."

AI ethics is a new discipline, so don't expect perfection overnight. A little bit of failure isn't the end of the world, but being open enough to discuss failures is a virtue. That can be tough to do in today's volatile public environment, but it's a critical ingredient for making progress, BCG's Mills says.

"What I try to tell people is no one has all the answers. It's a new area. Everyone is collectively learning," he says. "The best thing you can do is be open and transparent about it. I think customers appreciate that, particularly if you take the stand of, 'We don't have all the answers. Here are the things we're doing. We might get it wrong sometimes, but we'll be honest with you about what we're doing.' But I think we're just not there yet. People are hesitant to have that dialog."

Related Items:

Looking For An AI Ethicist? Good Luck

Governance, Privacy, and Ethics at the Forefront of Data in 2021

AI Ethics Still In Its Infancy

Read the original:

To Bridge the AI Ethics Gap, We Must First Acknowledge It's There - Datanami

Posted in Ai | Comments Off on To Bridge the AI Ethics Gap, We Must First Acknowledge It’s There – Datanami

Google, Varian partner on AI to boost cancer radiation therapy – Mass Device

Posted: at 2:41 am

Varian (NYSE: VAR) announced today that it is working with Google Cloud to build an AI-based diagnostic platform.

The companies' focus is on AI models for organ segmentation, a crucial, labor-intensive step in radiation oncology that can often become a clinical workflow bottleneck. It involves identifying the organs and tissues in diagnostic images that must be targeted or protected during radiation therapy, and it can take hours per patient.

Varian is using Google Cloud AI Platform's Neural Architecture Search (NAS) technology to create an AI segmentation engine that it is training on Varian's proprietary treatment-planning image data to create customized auto-segmentation models for organs in the body. Varian plans to incorporate the new models into its treatment planning software tools in cancer centers worldwide.

"At Varian, we are working toward a world without fear of cancer, where high-quality cancer care, personalized and optimized for each patient, is available everywhere. To that end, we have committed ourselves to Intelligent Cancer Care, which seeks to automate routine or repetitive tasks in the radiation oncology workflow through the use of smart algorithms, machine learning and AI," said Corey Zankowski, SVP of Varian's Technology and Innovation Office.

"This collaboration with Google Cloud will turbocharge our efforts in this area," Zankowski said in a news release.


What does AI in education look like? Here’s what research shows. – The Hechinger Report

Posted: at 2:41 am

Editor's note: This story led off this week's Future of Learning newsletter, which is delivered free to subscribers' inboxes every other Wednesday with trends and top stories about education innovation. Subscribe today!

Joanna Smith, founder of an ed-tech company that helps schools curb chronic absenteeism, was thinking about how to pivot her company to provide services in a remote learning setting as many brick-and-mortar schools transitioned online last year.

In April 2020, her company, AllHere, launched several new features to battle problems exacerbated by Covid-19, including an artificial intelligence-powered two-way text messaging system, Chatbot, for kids who weren't showing up to class regularly. Chatbot allows teachers to check in with families and provides 24/7 individualized AI support for struggling students. Families can also log on to the platform to get confidential health care referrals or help with computer-related issues.

AllHere isn't the only AI-powered technology startup that expanded last year. According to Stanford University's 2021 AI Index, more than $40 billion was invested in AI startups in 2020. Researchers at the Digital Promise-led Center for Integrative Research in Computing and Learning Sciences (CIRCLS) believe that over the next five to 10 years, AI in the education space will see significant growth.

A CIRCLS report, called "AI and the Future of Learning," breaks down what education leaders and policymakers need to know about AI in education, and how to use it effectively to support students and teachers.

Start from what is good teaching and learning. And not from what AI can do for me.

Researchers and report co-editors Jeremy Roschelle, James Lester and Judi Fusco write that they anticipate AI will dramatically impact teaching and learning in the coming years. They urge educators to begin planning now for how best to develop and use AI in education in ways that are "equitable, ethical, and effective" and to mitigate "weaknesses, risks, and potential harm."

A panel of 22 experts in both AI and education convened last year to look at the strengths, weaknesses, barriers and opportunities involving AI in education, and the challenges going forward after the pandemic, said Roschelle, principal investigator at CIRCLS. The assembled group also discussed various new design concepts for how to apply AI in education.

Experts say what they've learned this past year of school shutdowns has also forced them to think more critically about equity and biases within the ed-tech field and about how learning technologies are used to address racial inequities.

Fusco, co-principal investigator at CIRCLS, said that while researchers are thinking about the future as they create tools to help schools, educators are on the ground, "dealing with situations that no one anticipated and needing tools to help them [now]." CIRCLS researchers are thinking of ways to connect the two groups and consider what educators might need to know from the researchers, she said.

Roschelle added that the focus of AI ed-tech should be on supports and tools that assist teachers.

New tools could include AI-powered virtual teaching assistants that help teachers grade homework and provide real-time feedback to students, or that assist teachers in orchestrating and organizing social activity in the classroom, Roschelle said. The AI tools might be rigorous performance assessments, virtual reality programs, voice- or gesture-based systems, or even robots that help kids with academic or social skills.

What's important, according to these experts, is that, as more money pours into the AI field, companies must ensure that any tools they develop for schools are human-centered.

"Start from what is good teaching and learning," Roschelle said. "And not from what AI can do for me. Can we get smarter about how we take the resources we've got, and equitably enable everyone to have good teachers?"

This story about AI in education was produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education.

The Hechinger Report provides in-depth, fact-based, unbiased reporting on education that is free to all readers. But that doesn't mean it's free to produce. Our work keeps educators and the public informed about pressing issues at schools and on campuses throughout the country. We tell the whole story, even when the details are inconvenient. Help us keep doing that.

Join us today.


TigerGraph’s Graph + AI Summit 2021 to Feature 40+ Sessions, Live Workshops and Speakers from JPMorgan Chase, NewDay, Pinterest, Jaguar Land Rover and…

Posted: at 2:41 am

REDWOOD CITY, Calif., April 08, 2021 (GLOBE NEWSWIRE) -- TigerGraph, provider of the leading graph analytics platform, today unveiled the complete agenda for Graph + AI Summit 2021, the industry's only open conference devoted to democratizing and accelerating analytics, AI and machine learning with graph algorithms. The roster includes confirmed speakers from JPMorgan Chase, Intuit, NewDay, Jaguar Land Rover, Pinterest, Stanford University, Forrester Research, Accenture, Capgemini, KPMG, Intel, Dell, and Xilinx, as well as many innovative startups including John Snow Labs, Fintell, SaH Solutions and Sayari Labs. The virtual conference, set for April 21-23, offers keynotes, speakers, real-world customer case studies and hands-on workshops for data, analytics and AI professionals.

"The combination of analytics, AI, machine learning and graph is a powerful one that offers many human benefits, and forward-looking companies in all industries have taken note," said Dr. Yu Xu, founder and CEO of TigerGraph. "Graph + AI Summit is again bringing together industry luminaries, technical experts and business leaders from the world's largest banks, fintechs, tech giants and manufacturers to share implementation best practices, lessons learned and more. We're pleased to welcome back speakers from Jaguar Land Rover and Intuit, and welcome new participants from an impressive list of today's top innovators driving the adoption of graph. Our goal is to make graph accessible, applicable and understandable for all, as more people grasp how graph-related technologies can improve our lives."

Graph + AI Summit returns after a successful Graph + AI 2020; the inaugural event attracted more than 3,000 attendees from 56 countries, and welcomed data scientists, data engineers, architects and business and IT executives from 115 of the Fortune 500 companies. The latest conference will host over 6,000 attendees this year and will again focus on accelerating analytics, AI and machine learning with graph algorithms, timely technologies that are on the minds of today's business leaders. After 2020 accelerated enterprises' shift to the cloud, businesses are realizing graph technologies are key to connecting, analyzing and helping glean insights from data.

Graph + AI Summit 2021 includes keynote presentations, executive roundtables, technical breakout sessions, industry tracks (banking, insurance and fintech, healthcare, life sciences and government) and live workshops for advanced analytics and machine learning.

Keynote speakers presenting during conference general sessions include:

Notable roundtables and interactive sessions include:

Graph + AI Summit sessions will also cover the following topics:

Register for one of these live workshops for advanced analytics and machine learning now:

View Graph + AI Summit's agenda: https://www.tigergraph.com/graphaisummit/#day1
Register and secure your complimentary spot: https://www.tigergraph.com/graphaisummit/

Helpful Links

About TigerGraph
TigerGraph is a platform for advanced analytics and machine learning on connected data. Based on the industry's first and only distributed native graph database, TigerGraph's proven technology supports advanced analytics and machine learning applications such as fraud detection, anti-money laundering (AML), entity resolution, customer 360, recommendations, knowledge graph, cybersecurity, supply chain, IoT, and network analysis. The company is headquartered in Redwood City, California, USA. Start free with tigergraph.com/cloud.

Media Contact
Cathy Wright
Offleash PR for TigerGraph
cathy@offleashpr.com
650-678-1905


Philips and Ibex Medical Analytics team to accelerate adoption of AI-powered digital pathology – Yahoo Finance

Posted: at 2:41 am

Philips IntelliSite Pathology Solution 4.1 (Philips Digital Pathology)

April 8, 2021

Philips and Ibex Medical Analytics cooperate to globally commercialize clinically proven, AI-powered digital pathology solutions

Combination of Philips' digital pathology solutions and Ibex's AI-powered Galen platform has improved reporting efficiency by 27%, driven 37% productivity gains, and improved consistency and accuracy to enhance diagnostic confidence

Collaboration furthers Philips' commitment to integrated diagnostics, providing a clear path to precision diagnosis

Amsterdam, the Netherlands, and Tel Aviv, Israel: Royal Philips (NYSE: PHG, AEX: PHIA), a global leader in health technology, and Ibex Medical Analytics, a pioneer in artificial intelligence (AI) based cancer diagnostics, today announced a strategic collaboration to jointly promote their digital pathology and AI solutions to hospitals, health networks and pathology labs worldwide. The combination of Philips' digital pathology solution (Philips IntelliSite Pathology Solution) and Ibex's Galen AI-powered cancer diagnostics platform [1], currently in clinical use in Europe and the Middle East, empowers pathologists to generate objective, reproducible results, increase diagnostic confidence, and achieve the productivity and efficiency improvements needed to cope with the ever-increasing demand for pathology-based diagnostics.

Today's announcement marks the latest extension to Philips' AI-enabled Precision Diagnosis solutions portfolio, which leverages Philips' and third-party AI solutions to deliver cutting-edge clinical decision support and optimized workflows that enable healthcare providers to deliver on the Quadruple Aim of better patient outcomes, improved patient and staff experiences, and lower cost of care.

The trend towards centralized pathology labs, the global shortage of trained pathologists, and the increasing demands on histopathology posed by the growing number of cancer patients lead pathology labs to actively seek efficiency-enhancing solutions that enable them to maintain high accuracy levels. Digital pathology, enabled by solutions such as Philips IntelliSite Pathology Solution, has already been shown to improve pathology lab productivity by 25% [2], while also allowing remote image reading by specialists and the immediate sharing of images with referring hospitals as part of comprehensive pathology reports. Ibex's AI-powered Galen platform further streamlines workflow and improves accuracy via automated case prioritization, cancer heatmaps, grading and other productivity-enhancing tools.


"Building on our strong portfolio to support clinical decision-making in oncology, we bring together the power of imaging, pathology, genomics and longitudinal data with insights from artificial intelligence (AI) to help empower clinicians to deliver clear care pathways with predictable outcomes for every patient," said Kees Wesdorp, Chief Business Leader, Precision Diagnosis at Philips. "By teaming with Ibex to incorporate their AI into our Digital Pathology Solutions, we're further able to provide a continuous pathway, where critical patient data is made visible to both pathologists and oncologists to help improve the clinician experience and patient outcomes."

"Pathology is transforming at an increasing pace, and AI is one of the major drivers, supporting a more rapid and accurate cancer diagnosis," said Joseph Mossel, CEO and Co-founder of Ibex Medical Analytics. "By joining forces with Philips, the leader in digital pathology deployments, we can offer new end-to-end solutions enabling pathologists to implement integrated, AI-powered workflows across a broader segment of the diagnostic pathway, improving the quality of patient care and strengthening the business case for digitization."

Ibex's Galen platform adds AI-powered cancer detection, case prioritization, grading and other productivity-enhancing insights. Users have reported significant improvements in diagnostic efficiency, with a 27% reduction in time-to-diagnosis compared to conventional microscope viewing, 1- to 2-day reductions in total turnaround time, and a 37% productivity gain [3]. In addition to cancer, the AI platform supports pathologists in accurate grading, as well as the detection and diagnosis of multiple clinical features, such as tumor size, perineural invasion, high-grade PIN (Prostatic Intraepithelial Neoplasia) and more. The accuracy of Galen Prostate for cancer detection was the highest reported in the field, with a sensitivity of 98.46%, specificity of 97.33% and an AUC of 0.991 [4]. When used as an automated second read, the platform alerts pathologists when discrepancies between their diagnosis and the AI algorithm's findings are detected, providing a safety net against error or misdiagnosis, previously reported as high as 12% [5], and increasing overall quality of care.
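
For context on the reported figures, sensitivity and specificity are simple ratios over the cells of a confusion matrix. The sketch below illustrates the definitions; the counts are invented to land near the headline rates and are not the study's actual data.

```python
def sensitivity(tp, fn):
    """True-positive rate: fraction of actual cancers the model flags."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True-negative rate: fraction of benign slides the model clears."""
    return tn / (tn + fp)

# Made-up counts chosen to fall near the published 98.46% / 97.33%.
sens = sensitivity(tp=128, fn=2)    # 128 of 130 cancers detected
spec = specificity(tn=365, fp=10)   # 365 of 375 benign slides cleared
```

The third figure, AUC, summarizes the same trade-off across all decision thresholds rather than at a single operating point.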

"We have been using Philips IntelliSite Pathology Solution together with Ibex's Galen platform as part of our routine practice since 2020, and this second read implementation has already helped us improve our diagnostic quality," said Delphine Raoux, MD, pathologist and Head of Innovation Technologies at Medipath, the largest network of private pathology labs in France. "The work we presented recently showed that Ibex's AI platform can further provide significant productivity gains when used during primary diagnosis and helps us reduce total turnaround time. This is an important step forward as we look for new technologies that can help meet an increasing demand for pathology services and could enable seamless remote reading of biopsies in times of COVID restrictions."

Philips' digital pathology solution is a comprehensive turnkey solution that helps speed and simplify access to histopathology information across cancer care and beyond, supports full-scale digitization of histology in pathology labs and lab networks, and helps increase workflow efficiency. At its heart is the Philips IntelliSite Pathology Solution, which comprises an ultra-fast pathology slide scanner, an image management system and a display [6], and includes advanced software tools to manage slide scanning, image storage, case review, and the sharing of patient information. By fully digitizing post-sample-preparation histopathology, it facilitates the streamlining of pathology workflows and enables the connectivity needed between multi-disciplinary teams and specialties when making complex cancer diagnosis and treatment decisions, from early detection and precision diagnosis through to precision treatment and predictable outcomes.

Through breakthrough innovations and partnerships, Philips integrates intelligence and automation into its Precision Diagnosis portfolio, including smart diagnostic systems, integrated workflow solutions that transform departmental operations, advanced informatics that provides diagnostic confidence, and care pathway solutions that allow medical professionals to tailor treatment to individual patients. By developing and integrating these AI-enabled applications, the company aims to enhance the ability to turn data into actionable insights and drive the right care in the right sequence at the right time.

Today's partnership announcement with Ibex follows recent AI partnership announcements with DiA Imaging Analysis for AI-powered ultrasound applications, and AI software provider Lunit, incorporating its chest detection suite into Philips' diagnostic X-ray suite. These partner solutions complement Philips' own AI solutions in personal health, precision diagnosis and treatment, and connected care.

[1] Galen Prostate is CE marked and approved in additional territories. Galen Prostate is not FDA approved and is for Research Use Only (RUO) in the United States.
[2] Survey with 52 physicians in Europe, 2018. Results are specific to the institution where they were obtained and may not reflect the results achievable at other institutions.
[3] Raoux D, et al., Novel AI Based Solution for Supporting Primary Diagnosis of Prostate Cancer Increases the Accuracy and Efficiency of Reporting in Clinical Routine. https://uscap.econference.io/public/fYVk0yI/main/sessions/9644/31166
[4] The Lancet Digital Health, Aug 2020, Pantanowitz et al., An artificial intelligence algorithm for prostate cancer diagnosis in whole slide images of core needle biopsies: a blinded clinical validation and deployment study. https://www.thelancet.com/journals/landig/article/PIIS2589-7500(20)30159-X/fulltext
[5] Laifenfeld D, et al., Performance of an AI-based cancer diagnosis system in France's largest network of pathology institutes. https://ibex-ai.com/wp-content/uploads/2019/09/poster-v6-web.pdf
[6] The display is part of the medical device in the United States.

For further information, please contact:

Kathy O'Reilly
Philips Global Press Office
Tel.: +1 978-221-8919
E-mail: kathy.oreilly@philips.com
Twitter: @kathyoreilly

Tal Frieman
Ibex Medical Analytics
Tel.: +972 50-515-2195
E-mail: tal.frieman@ibex-ai.com

About Royal Philips
Royal Philips (NYSE: PHG, AEX: PHIA) is a leading health technology company focused on improving people's health and well-being, and enabling better outcomes across the health continuum, from healthy living and prevention to diagnosis, treatment and home care. Philips leverages advanced technology and deep clinical and consumer insights to deliver integrated solutions. Headquartered in the Netherlands, the company is a leader in diagnostic imaging, image-guided therapy, patient monitoring and health informatics, as well as in consumer health and home care. Philips generated 2020 sales of EUR 19.5 billion and employs approximately 82,000 employees with sales and services in more than 100 countries. News about Philips can be found at http://www.philips.com/newscenter.

About Ibex Medical Analytics
Ibex uses AI to develop clinical-grade solutions that help pathologists detect and grade cancer in biopsies. The Galen platform is the first-ever AI-powered cancer diagnostics solution in routine clinical use in pathology and is deployed worldwide, empowering pathologists to improve diagnostic accuracy, integrate comprehensive quality control and enable more efficient workflows. Ibex's solutions are built on deep learning algorithms trained by a team of pathologists, data scientists and software engineers. For more information, go to http://www.ibex-ai.com

Attachments


Discover the stupidity of AI emotion recognition with this little browser game – The Verge

Posted: at 2:41 am

Tech companies don't just want to identify you using facial recognition; they also want to read your emotions with the help of AI. For many scientists, though, claims about computers' ability to understand emotion are fundamentally flawed, and a little in-browser web game built by researchers from the University of Cambridge aims to show why.

Head over to emojify.info, and you can see how your emotions are read by your computer via your webcam. The game will challenge you to produce six different emotions (happiness, sadness, fear, surprise, disgust, and anger), which the AI will attempt to identify. However, you'll probably find that the software's readings are far from accurate, often interpreting even exaggerated expressions as "neutral." And even when you do produce a smile that convinces your computer that you're happy, you'll know you were faking it.

This is the point of the site, says creator Alexa Hagerty, a researcher at the University of Cambridge Leverhulme Centre for the Future of Intelligence and the Centre for the Study of Existential Risk: to demonstrate that the basic premise underlying much emotion recognition tech, that facial movements are intrinsically linked to changes in feeling, is flawed.

"The premise of these technologies is that our faces and inner feelings are correlated in a very predictable way," Hagerty tells The Verge. "If I smile, I'm happy. If I frown, I'm angry. But the APA did this big review of the evidence in 2019, and they found that people's emotional space cannot be readily inferred from their facial movements." In the game, says Hagerty, you have a chance to move your face rapidly to impersonate six different emotions, "but the point is you didn't inwardly feel six different things, one after the other in a row."

A second mini-game on the site drives home this point by asking users to identify the difference between a wink and a blink, something machines cannot do. "You can close your eyes, and it can be an involuntary action, or it's a meaningful gesture," says Hagerty.

Despite these problems, emotion recognition technology is rapidly gaining traction, with companies promising that such systems can be used to vet job candidates (giving them an "employability score"), spot would-be terrorists, or assess whether commercial drivers are drowsy. (Amazon is even deploying similar technology in its own vans.)

Of course, human beings also make mistakes when we read emotions on people's faces, but handing over this job to machines comes with specific disadvantages. For one, machines can't read other social cues the way humans can (as with the wink/blink distinction). Machines also often make automated decisions that humans can't question, and can conduct surveillance at a mass scale without our awareness. Plus, as with facial recognition systems, emotion detection AI is often racially biased, more frequently assessing the faces of Black people as showing negative emotions, for example. All these factors make AI emotion detection far more troubling than humans' ability to read others' feelings.

"The dangers are multiple," says Hagerty. "With human miscommunication, we have many options for correcting that. But once you're automating something, or the reading is done without your knowledge or consent, those options are gone."

Continued here:

Discover the stupidity of AI emotion recognition with this little browser game - The Verge

Posted in Ai | Comments Off on Discover the stupidity of AI emotion recognition with this little browser game – The Verge

Four ways to stay ahead of the AI fraud curve | SC Media – SC Magazine

Posted: at 2:41 am

As organizations have adopted AI to minimize their attack surface and thwart fraud, cybercriminals also use AI to automate their attacks on a massive scale. The new virtual world driven by the COVID-19 pandemic has given bad actors the perfect opportunity to access consumer accounts by leveraging AI and bots to commit fraud like never before.

In today's AI arms race, companies try to stay ahead of the attack curve while criminals aim to overtake it. Here are four AI attack vectors all security pros should know about, and ways to combat each of them:

Deepfakes superimpose existing video footage or photographs of a face onto a source head and body using advanced neural-network-powered AI. They are relatively easy to create and often make fraudulent video and audio content appear incredibly real. Deepfakes have become increasingly hard to spot as criminals use more sophisticated techniques to trick their victims. In fact, Gartner predicts that deepfakes will account for 20 percent of successful account takeover attacks by 2023, in which cybercriminals gain access to user accounts and lock the legitimate user out.

Unfortunately, bad actors will weaponize deepfake technology for fraud as biometric-based authentication solutions are widely adopted. Even more of a concern, many digital identity verification products are unable to detect and prevent deepfakes, bots and sophisticated spoofing attacks. Organizations must make sure any identity verification product they implement has the sophistication in place to identify and stop deepfake attacks.

As digital transformation accelerates amid the COVID-19 pandemic, fraudsters are leveraging machine learning (ML) to accelerate attacks on networks and systems, using AI to identify and exploit security gaps. While AI increasingly gets used to automate repetitive tasks, improve security, and identify vulnerabilities, hackers will in turn build their own ML tools to target these processes. As cybercriminals take advantage of new technologies faster than security defenses can combat them, it's critical for enterprises to secure ML systems and implement AI-powered solutions to recognize and halt attacks.

Gartner reports that through 2022, 30 percent of all AI cyberattacks will leverage training-data poisoning, AI model theft, or adversarial samples to attack AI-powered systems. These attacks manipulate an AI system to alter its behavior, which may have widespread and damaging repercussions because AI has become a core component of critical systems across all industries. Cybercriminals have found new ways to exploit inherent limitations in AI algorithms, such as changing how data gets classified and where it's stored. These attacks on AI will ultimately make it hard to trust the technology to perform its intended function. For example, AI attacks could hinder an autonomous vehicle's ability to recognize hazards or prevent an AI-powered content filter from removing inappropriate images. Enterprises must implement standards for how AI applications are trained, secured, and managed to avoid system hacks.
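To make the adversarial-sample idea concrete, here is a minimal toy sketch. Everything in it is hypothetical: a real attack targets a trained model, but the mechanism is the same, nudging every input feature slightly in the direction that most changes the model's output (the sign of the gradient, as in the fast gradient sign method).

```python
import numpy as np

# Stand-in for learned model weights; any real classifier would do.
w = np.linspace(-1.0, 1.0, 20)

# An input the model scores (just barely) negative.
x = -w / np.linalg.norm(w) * 0.05

def predict(v):
    # Simple linear decision rule: class 1 if the score is positive.
    return 1 if w @ v > 0 else 0

eps = 0.05                         # per-feature perturbation budget
x_adv = x + eps * np.sign(w)       # FGSM-style step: follow the gradient's sign

print(predict(x), predict(x_adv))  # a tiny, bounded change flips the label
```

Each feature moves by at most `eps`, yet the classification flips, which is why poisoned or adversarially perturbed inputs are so hard to spot by inspection.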

AI lets cybercriminals execute spearphishing attacks by finding personal information, determining user activity on social platforms, and analyzing a victim's tone of writing, such as how they communicate with colleagues and friends. Cybercriminals can then use this data to make their emails convincing. For example, automated targeted emails may sound like they came from a trusted colleague or relate to an event a user expressed interest in, making the victim likely to respond or click on a link that downloads malicious software, letting a criminal steal the victim's username and password. In addition to educating users about phishing emails, organizations must secure their networks with strong authentication to ensure hackers can't use stolen credentials to pose as a trusted user or bypass spam filters to reach user inboxes.

As enterprises escalate their AI strategy to succeed amid the continued COVID-19 pandemic, they must understand that fraudsters are also escalating their strategies to outsmart new AI technologies and commit cybercrime. By implementing strong authentication and securing AI systems effectively, enterprises can combat the growing threat of AI attacks, ultimately keeping customer accounts secure and AI systems executing their intended purpose.

Robert Prigge, chief executive officer, Jumio

More:

Four ways to stay ahead of the AI fraud curve | SC Media - SC Magazine

Health care leads the way for top private AI firms – Axios

Posted: at 2:40 am

A new list of the top 100 private AI companies shows that health is driving investment in the industry.

Why it matters: COVID-19 has shown the power and potential of AI applications for health, and the growth of the field will continue long after the pandemic has finally ended.

What's happening: Wednesday morning, the business research firm CB Insights released its annual AI 100 ranking of the most promising private companies working in artificial intelligence.

What they're saying: "The list is a testament to not just the breadth of AI's impact but also the depth of automation within industries," says Deepashri Varadharajan, lead analyst for emerging tech at CB Insights.

Details: AI companies focused on health care claimed the most spots on the list with eight, including some working on surgical technology, clinical trials and even dental insurance.

The bottom line: One of the biggest takeaways of the AI 100 ranking is the way artificial intelligence has moved from the digital realm into the physical world, and there's nothing more physical than our health.

Go deeper: Coronavirus accelerates AI in health care

Go here to see the original:

Health care leads the way for top private AI firms - Axios

How Can Government Attract the AI Talent It Needs? – The Wall Street Journal

Posted: at 2:40 am

Artificial intelligence has the potential to transform government, from how it spends taxpayer money and delivers services to how it protects the public and fights wars.

But it cant do that without a key ingredient: talent.

The problem is that the government isn't on the cutting edge of tech talent, in part because it has to compete with the private sector, where the payoff is so much greater.

So, how does Washington attract, and keep, the people it needs to develop this new technology?

The Wall Street Journal asked three experts to debate the issue. Yll Bajraktari is executive director of the National Security Commission on Artificial Intelligence, a group of industry executives and academics that has studied the government's AI needs. Martial Hebert is dean of the Carnegie Mellon University School of Computer Science. Megan McConnell is a McKinsey & Co. partner who advises public-sector organizations on human-capital management, with a focus on AI.

Read this article:

How Can Government Attract the AI Talent It Needs? - The Wall Street Journal

Future Role of Artificial Intelligence in Logistics and Transportation – IoT For All

Posted: at 2:40 am

As logistics and freight organizations become more digitized, enterprises will be able to collect increasing amounts of data about their customers, supply chain, deliveries, fleet, drivers, and more. Leading logistics organizations are already harnessing artificial intelligence (AI) in transportation. While many enterprises already collect this data, and the volume will only grow, it remains massively underutilized.

Using the power of AI, enterprises can unlock advanced route planning that optimizes several real-world factors in a way that's difficult or impossible for traditional route planning to do.

Traditional route planning in transportation typically incorporates only a handful of factors, and those are naive, rule-based ones. Traditional methods can't simply be replaced overnight, however: adapting to a new technology takes time and requires new skills.

To enable efficient route planning with AI, enterprises need to account for a wide variety of factors: the type of goods to be delivered, customer preferences, traffic patterns, local road regulations, and changing routing behaviors, in addition to subjective factors such as the local knowledge of delivery personnel.

With predictive analytics, an AI-powered system can optimize route planning against real-world factors, resulting in lower delivery and shipping costs, faster delivery times, and better asset utilization. Predictive analytics uses data, statistical algorithms, and machine learning to estimate the likelihood of future outcomes based on historical data.
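A minimal sketch of what multi-factor route scoring looks like in practice; all route names, factors, and weights below are hypothetical, and a production system would feed the factors from predictive models rather than hard-code them. Each candidate route is scored as a weighted sum of predicted travel time, fuel cost, and a heavy penalty for missed customer time windows, and the lowest-cost route wins.

```python
# Hypothetical weights: how much each factor contributes to a route's cost.
# In a real system these would be tuned against business metrics.
WEIGHTS = {"predicted_minutes": 1.0, "fuel_cost": 2.0, "window_misses": 50.0}

def route_cost(route):
    # Weighted sum over the scored factors (route may carry extra keys).
    return sum(WEIGHTS[k] * route[k] for k in WEIGHTS)

candidates = [
    {"name": "highway", "predicted_minutes": 45, "fuel_cost": 12.0, "window_misses": 1},
    {"name": "surface", "predicted_minutes": 60, "fuel_cost": 8.0,  "window_misses": 0},
]

best = min(candidates, key=route_cost)
print(best["name"])  # prints "surface": slower, but no missed delivery window
```

The point of the sketch is the trade-off: once the missed-window penalty is counted, the nominally faster route is no longer the cheapest, which is exactly the kind of judgment rule-based planners struggle to encode.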

In the future, AI-based systems will help unlock the true potential of enterprise data. This will enable better customer experiences, improved fleet management, faster deliveries, fewer safety incidents, and better overall business margins. AI enables a win-win scenario for all stakeholders in the logistics and transportation ecosystem, but it requires some effort and investment to build and maintain.

As important as AI is, data and data engineering are underrated components of it. Data engineering is the aspect of data science that focuses on the practical applications of data collection and analysis. Before jumping onto the AI hype train, ask yourself: are you collecting critical data about your business operations? Is the data effectively stored, organized, and easily accessible?

At the end of the day, while AI is a trending tech buzzword, it's only useful when it solves an actual business problem. Assess what problems you want AI-based systems to solve, align them with your business goals, and use the proper metrics to measure efficiency.

More here:

Future Role of Artificial Intelligence in Logistics and Transportation - IoT For All
