Why Less Suicide Prediction on Social Media Is More

In 2017, Facebook announced that it was using artificial intelligence to predict suicide. Today, its AI scans all types of user-generated content and assigns each piece a suicide risk score. If the score is particularly high, Facebook may contact police and help them locate the user who posted it. Last year, I began analyzing the risks and benefits of this system in a series of articles. In October, I wrote that Facebook has revealed little about its suicide predictions, and because they are made outside the healthcare system, they are not subject to health privacy laws, principles of medical ethics, or state and federal rules governing health research. In December, I explained how contacting law enforcement on the basis of Facebook’s predictions exposes users to warrantless searches of their homes, violent confrontations with police, and exacerbation of their mental health conditions. In January, I argued that Facebook should share its…
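To make the scan-score-escalate pipeline described above concrete, here is a minimal, purely hypothetical sketch. Facebook has disclosed little about how its system actually works, so everything here, the `Post` type, the `score_suicide_risk` function, and the escalation threshold, is invented for illustration and bears no relation to Facebook's real model.

```python
# Hypothetical sketch of a scan -> score -> escalate pipeline.
# NOT Facebook's actual system; the scorer, threshold, and escalation
# step are all invented placeholders for illustration only.

from dataclasses import dataclass


@dataclass
class Post:
    user_id: str
    text: str


def score_suicide_risk(post: Post) -> float:
    """Placeholder risk scorer.

    A real system would use a trained classifier; a crude keyword
    check stands in here so the sketch is runnable.
    """
    keywords = ("goodbye forever", "end it all", "no reason to live")
    return 1.0 if any(k in post.text.lower() for k in keywords) else 0.1


# Invented cutoff: scores at or above this trigger escalation.
ESCALATION_THRESHOLD = 0.9


def triage(post: Post) -> None:
    score = score_suicide_risk(post)
    if score >= ESCALATION_THRESHOLD:
        # In the system described above, this step can mean contacting
        # police with enough information to locate the user.
        print(f"ESCALATE: user {post.user_id} scored {score:.2f}")
    else:
        print(f"no action: user {post.user_id} scored {score:.2f}")


triage(Post(user_id="u123", text="Goodbye forever, everyone."))
triage(Post(user_id="u456", text="Great game last night!"))
```

Even this toy version makes the policy stakes visible: a single hard-coded threshold separates "no action" from a potential police visit, which is precisely why the lack of transparency and external oversight matters.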
