White House Wants to Use AI to Predict When People Will Become Violent


The White House is considering a proposed program that would attempt to recognize early warning signs that a person with mental illness might become violent. Supporters of the initiative believe it could serve as a means of addressing mass shootings without changing existing gun laws.

The mental-illness proposal, dubbed "Safe Home" (Stopping Aberrant Fatal Events by Helping Overcome Mental Extremes), involves using artificial intelligence (AI) to look for changes in a person's mental state that could indicate they might become violent.

Data gathered from volunteers would be used to discover "neurobehavioral signs" of "someone headed toward a violent explosive act."

The proposal is part of a bigger push for the creation of the Health Advanced Research Projects Agency (HARPA). DARPA, the Pentagon’s research arm, would be used as a model for the proposed new agency.

HARPA was initially discussed in 2017, though the concept has gained momentum after the recent string of mass shootings. President Donald Trump is reportedly supportive of HARPA but would need the support of Congress to move forward, since the plan involves creating a new agency.

According to a report by the Washington Post, experts say that mental illness can be a factor in violent acts, but it rarely serves as a solid predictor of future violence.

The majority of individuals who have a mental illness are not violent. Additionally, studies indicate that no more than one-quarter of mass shooters have been diagnosed with a mental illness.

Trump has previously stated that he believes mentally ill individuals are predominantly responsible for mass shootings.

“We’re looking at the whole gun situation,” said Trump. “I do want people to remember the words ‘mental illness.’ These people are mentally ill. ... I think we have to start building institutions again because, you know, if you look at the ’60s and ’70s, so many of these institutions were closed.”

While AI is highly capable when it comes to data analysis and pattern recognition, including when presented with massive amounts of data, it isn’t clear whether such an approach would yield beneficial results.

As scientists have previously warned, warning signs vary widely from person to person. Applying a single predictive model to the broader population would almost certainly produce mistakes.

False positives – where people who are healthy are labeled as mentally ill or a potential threat – could be incredibly damaging. Similarly, false negatives – where a person is mentally ill or a potential threat but isn’t labeled as such – could also come with consequences.
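The scale of the false-positive problem comes down to base rates: when the behavior being predicted is rare, even a highly accurate screening model flags far more healthy people than genuine risks. The sketch below illustrates this with purely hypothetical numbers (population size, prevalence, and accuracy figures are assumptions for illustration, not figures from the proposal):

```python
# Illustrative sketch of the base-rate problem in violence prediction.
# All numbers below are hypothetical assumptions, not figures from
# the "Safe Home" proposal or any real study.

def screening_outcomes(population, prevalence, sensitivity, specificity):
    """Return (true_positives, false_positives) for a screening test."""
    actual_positive = population * prevalence          # people who truly pose a risk
    actual_negative = population - actual_positive     # everyone else
    true_positives = actual_positive * sensitivity     # risks correctly flagged
    false_positives = actual_negative * (1 - specificity)  # healthy people flagged
    return true_positives, false_positives

# Hypothetical: 100,000 people screened, 0.1% would actually become violent,
# and the model is 99% sensitive and 99% specific.
tp, fp = screening_outcomes(100_000, 0.001, 0.99, 0.99)
precision = tp / (tp + fp)

print(f"True positives:  {tp:.0f}")        # 99
print(f"False positives: {fp:.0f}")        # 999
print(f"Precision:       {precision:.1%}") # ~9.0%
```

Under these assumptions, roughly ten people are wrongly flagged for every genuine risk identified, even though the model is "99% accurate" on both measures. This is why experts caution that rare-event prediction can be damaging in practice regardless of how capable the underlying AI is.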