Facebook is using AI to identify suicidal thoughts — but it’s not … – Fox News

For many of its nearly 2 billion users, Facebook is the primary channel of communication, a place where they can share their thoughts, post pictures and discuss every imaginable topic of interest.

Including suicide.

Six years ago, Facebook posted a page offering advice on how to help people who post suicidal thoughts on the social network. But in the year since it made its live-streaming feature, Facebook Live, available to all users, Facebook has seen some people use its technology to let the world watch them kill themselves.


After at least three users committed suicide on Facebook Live late last year, the company's chairman and CEO, Mark Zuckerberg, addressed the issue in the official company manifesto he posted in February:

"To prevent harm, we can build social infrastructure to help our community identify problems before they happen. When someone is thinking of suicide or hurting themselves, we've built infrastructure to give their friends and community tools that could save their life.

"There are billions of posts, comments and messages across our services each day, and since it's impossible to review all of them, we review content once it is reported to us. There have been terribly tragic events, like suicides, some live streamed, that perhaps could have been prevented if someone had realized what was happening and reported them sooner. These stories show we must find a way to do more."

Now, in its effort to do more, the company is using artificial intelligence and pattern recognition to identify suicidal thoughts in posts and live streams and to flag those posts for a team that can follow up, typically via Facebook Messenger.


"We're testing pattern recognition to identify posts as very likely to include thoughts of suicide," product manager Vanessa Callison-Burch, researcher Jennifer Guadagno and head of global safety Antigone Davis wrote in a blog post.

"Our Community Operations team will review these posts and, if appropriate, provide resources to the person who posted the content, even if someone on Facebook has not reported it yet."

Using artificial intelligence and pattern recognition, Facebook will monitor millions of posts to identify patterns of language and behavior common among users at risk of suicide, a scale of review no human intervention expert could match.
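Facebook has not published the details of its classifier, but the pipeline described above, scoring each post for risk language and queuing the high-scoring ones for human review, can be sketched in miniature. Everything in this example (the model, the training snippets, the threshold, the flag_for_review helper) is an illustrative assumption, not Facebook's code.

```python
# A toy sketch of a flag-and-review pipeline: score posts for suicide-risk
# language and send high-scoring ones to a human review queue. The model,
# training data and threshold are all invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny stand-in training set; a real system would train on large labeled corpora.
posts = [
    "I can't take this anymore, I want to end it all",
    "Nobody would miss me if I were gone",
    "Great game last night, what a comeback!",
    "Anyone have a good recipe for banana bread?",
]
labels = [1, 1, 0, 0]  # 1 = expresses suicidal thoughts, 0 = does not

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

REVIEW_THRESHOLD = 0.7  # assumed cutoff for routing a post to human reviewers

def flag_for_review(post: str) -> bool:
    """Return True if the post should go to the human review queue."""
    risk = model.predict_proba([post])[0][1]  # probability of the risk class
    return risk >= REVIEW_THRESHOLD

print(flag_for_review("I just want the pain to stop for good"))
```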


But it still doesn't go far enough, some experts say.

Cheryl Karp Eskin, program director at Teen Line, said using artificial intelligence (AI) to identify patterns holds great promise for detecting expressions of suicidal thoughts, but it won't necessarily decrease the number of suicides.

There has been very little progress in preventing suicides in the last 50 years. Suicide is the second leading cause of death among 15- to 29-year-olds, and the rate in that age group continues to rise.

Eskin expressed concerns that the technology might wrongly flag posts, or that users might hide their feelings if they knew a machine learning algorithm was watching them.


"AI is not a substitute for human interaction, as there are many nuances of speech and expression that a machine may not understand," she said. "There are people who are dark and deep, but not suicidal. I also worry that people will shut down if they are identified incorrectly and not share some of their feelings in the future."
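Eskin's worry about wrong flags is, in classifier terms, a worry about false positives. A toy calculation, with invented numbers, shows the tradeoff a system like this has to manage: lowering the flagging threshold catches more genuinely at-risk posts (higher recall) but wrongly flags more harmless ones (lower precision).

```python
# Invented numbers illustrating the threshold tradeoff behind Eskin's concern.
def precision_recall(true_pos, false_pos, false_neg):
    precision = true_pos / (true_pos + false_pos)  # flagged posts that were truly at risk
    recall = true_pos / (true_pos + false_neg)     # at-risk posts actually caught
    return precision, recall

# Hypothetical outcomes of the same classifier at two different thresholds:
strict = precision_recall(true_pos=40, false_pos=10, false_neg=60)    # high threshold
lenient = precision_recall(true_pos=85, false_pos=200, false_neg=15)  # low threshold

print(f"strict:  precision={strict[0]:.2f} recall={strict[1]:.2f}")    # 0.80 / 0.40
print(f"lenient: precision={lenient[0]:.2f} recall={lenient[1]:.2f}")  # 0.30 / 0.85
```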

Joel Selanikio, MD, an assistant professor at Georgetown University who started the AI-powered company Magpi, said Facebook's enormous data set of user posts gives its AI the volume of language it needs to learn continuously, which makes the system more effective over time.

But even if AI helps Facebook identify suicidal thoughts, that doesn't mean it can determine the best approach to prevention.


"Right now," Selanikio said, "my understanding is that it just tells the suicidal person to seek help. I can imagine other situations, for example in the case of a minor, where the system notifies the parents. Or in the case of someone under psychiatric care, this might alert the clinician."
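Selanikio is describing an escalation policy layered on top of the classifier. A minimal sketch of that routing idea follows; the User fields and notification actions are hypothetical, not existing Facebook features.

```python
# A minimal sketch of the escalation routing Selanikio imagines: resources by
# default, parental notification for minors, a clinician alert for someone
# under psychiatric care. All fields and actions here are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class User:
    age: int
    clinician_contact: Optional[str] = None  # set if under psychiatric care
    parent_contact: Optional[str] = None

def route_alert(user: User) -> str:
    """Decide who is notified once a post is confirmed as high risk."""
    if user.clinician_contact:
        return f"alert clinician at {user.clinician_contact}"
    if user.age < 18 and user.parent_contact:
        return f"notify parents at {user.parent_contact}"
    return "send the user crisis resources and a prompt to seek help"

print(route_alert(User(age=16, parent_contact="parent@example.com")))
```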

Added Wendy Whitsett, a licensed counselor: "I would like to learn more about the plan for follow-up support after the crisis has ended, and about helping the user obtain services and various levels of support, from professionals and peers as well as from friends, neighbors, pastors and others.

"I am also interested to know if the algorithms are able to detect significant life events that would indicate increased risk factors and offer assistance with early intervention."

Technology has moved from offering assistance to people who view others' suicidal posts to using artificial intelligence and pattern recognition to track and flag such posts automatically. But that, the experts say, is just the beginning. Facebook still has a long way to go.

Next, they hope, Facebook will be able to use AI to predict behavior and intervene in real time to help those in need.
