Technology

Facebook Unveils AI to Help in the Fight to Prevent Suicide


A new “proactive detection” artificial intelligence (AI) technology designed by Facebook scans posts on the social media platform to identify patterns that may indicate suicidal thoughts or actions, letting Facebook respond with mental health resource information or contact first responders. The AI flags concerning posts and routes them to a human moderator, who can intervene before any user report is filed.
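The pipeline described above can be sketched in miniature: score each post, then route anything above a risk threshold to a human review queue. Everything here is a hypothetical illustration, not Facebook's actual model, threshold, or API; a production system would use a trained classifier rather than phrase matching.

```python
# Illustrative proactive-detection sketch: score each post, escalate
# high-risk posts to a human moderation queue. All names, phrases, and
# the threshold are assumptions for illustration only.

RISK_THRESHOLD = 0.8  # assumed cutoff for escalating to a moderator

def risk_score(post_text: str) -> float:
    """Stand-in for a trained text classifier returning P(at-risk)."""
    # A real system would use a learned model over words and imagery;
    # this toy version just checks for a few concerning phrases.
    concerning = ("want to die", "end it all", "can't go on")
    return 1.0 if any(p in post_text.lower() for p in concerning) else 0.0

def route_post(post_text: str, moderator_queue: list) -> bool:
    """Score a post and enqueue it for human review if it looks risky."""
    if risk_score(post_text) >= RISK_THRESHOLD:
        moderator_queue.append(post_text)
        return True
    return False

queue: list = []
route_post("Had a great day at the park!", queue)       # not escalated
route_post("I just want to end it all tonight.", queue)  # escalated
```

The key design point the article describes is that this scoring happens proactively on every post, rather than waiting for another user to file a report.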

Since the new AI technology actively scans Facebook posts and doesn’t rely on user reports, which are only submitted once a person views a post and considers it concerning, assistance can be offered more quickly than before.

Tests of the new software, along with more prominent placement of suicide reporting options, were originally conducted in the US. The technology is now set to operate worldwide, except in the European Union, where local privacy laws limit the use of the AI.

The AI is also designed to review user reports of worrisome behavior and triage the information, ensuring posts that appear particularly urgent are sent to a moderator ahead of others. Instantly accessible information is also provided to the user and anyone who reports potentially suicidal behavior, including links to resources and information about how to contact first responders in an emergency.
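The triage behavior described above, where the most urgent reports reach a moderator first, is naturally modeled as a priority queue. The following sketch is an assumption about how such triage could work; the urgency scores and report structure are invented for illustration.

```python
# Illustrative triage sketch: user reports are scored for urgency and
# held in a priority queue so the most urgent reach a moderator first.
import heapq
import itertools

class TriageQueue:
    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # tie-breaker for equal urgency

    def add_report(self, urgency: float, report: str) -> None:
        # heapq is a min-heap, so negate urgency to pop highest first
        heapq.heappush(self._heap, (-urgency, next(self._counter), report))

    def next_for_moderator(self) -> str:
        """Return the most urgent pending report."""
        return heapq.heappop(self._heap)[2]

q = TriageQueue()
q.add_report(0.3, "report A")
q.add_report(0.9, "report B")  # most urgent, reviewed first
q.add_report(0.6, "report C")
```

Here `q.next_for_moderator()` yields "report B" first, then "report C", then "report A", regardless of the order in which the reports arrived.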

Facebook used previous user reports to help train the AI, giving it valuable data to help identify patterns in the words used or imagery posted. The AI was also taught to screen the comments for signs of concern, such as users asking “Do you need help?” or “Are you OK?”
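The comment signal mentioned above, replies like "Are you OK?" from concerned friends, could serve as one input feature among many. A minimal sketch, with a hypothetical phrase list:

```python
# Illustrative sketch of the comment signal: count replies containing
# phrases associated with reader concern. The phrase list is an
# assumption for illustration, not Facebook's actual feature set.

CONCERN_PHRASES = ("are you ok", "do you need help", "please talk to someone")

def concerned_comment_count(comments: list) -> int:
    """Count comments that contain at least one concern phrase."""
    return sum(
        any(phrase in comment.lower() for phrase in CONCERN_PHRASES)
        for comment in comments
    )

comments = ["Do you need help?", "Nice photo!", "Are you OK??"]
# Two of the three comments match a concern phrase.
```

In a real model, a count like this would be combined with signals from the post's own text and imagery rather than used on its own.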

To support these efforts, Facebook is dedicating a larger share of its moderators to suicide prevention and giving them additional training to handle these cases. Facebook’s partnerships with suicide prevention organizations such as the National Suicide Prevention Lifeline, Save.org, and Forefront ensure at-risk users can quickly access comprehensive resources.

During the one-month testing phase, Facebook initiated more than 100 “wellness checks,” in which first responders were sent to the user’s location.

According to a report by TechCrunch, while the technology will work across all of Facebook, Facebook Live became a focus after a father killed himself while broadcasting. The AI will work to proactively identify worrisome content and flag it for review by trained moderators while Facebook also makes user reporting options more accessible.